The majority of Americans receive their health coverage through private health insurance, either by purchasing coverage directly or receiving coverage through their employer, and many of those with private coverage are enrolled in plans purchased from state-licensed or regulated carriers. An estimated 173 million nonelderly Americans, or 65 percent of the nonelderly population, received health coverage through private insurance in 2009. The remainder of Americans either received their health coverage through government health insurance, such as Medicare and Medicaid, or were uninsured. In general, those who obtain private health insurance do so in one of three market segments: individual, small-group, and large-group. Policyholders in the individual market purchase private health insurance plans directly from a carrier—not in connection with a group health plan. In 2009, an estimated 17 million nonelderly Americans obtained individual private health insurance coverage. In the small-group market, enrollees generally obtain health insurance coverage through a group health plan offered by a small employer, and in the large-group market, enrollees generally obtain coverage through a group health plan offered by a large employer. In 2009, an estimated 156 million nonelderly Americans obtained private health insurance through employer-based group plans offered by either small or large employers. While most small-group coverage is purchased from state-licensed or regulated plans, most large-group coverage is purchased from employer self-funded plans not subject to state licensing or regulation. However, there are some fully-insured large-group plans, which are subject to state regulation. Premium rates are actuarial estimates of the cost of providing coverage over a period of time to policyholders and enrollees in a health plan. To determine rates for a specific insurance product, carriers estimate future claims costs in connection with the product and then the revenue needed to pay anticipated claims and nonclaims expenses, such as administrative expenses. Premium rates are usually filed as a formula that describes how to calculate a premium for each person or family covered, based on information such as geographic location, underwriting class, coverage and co-payments, age, gender, and number of dependents. The McCarran-Ferguson Act provides states with the authority to regulate the business of insurance, without interference from federal regulation, unless federal law specifically provides otherwise. Therefore, states are primarily responsible for overseeing private health insurance premium rates in the individual and group markets in their states. Through laws and regulations, states establish standards governing health insurance premium rates and define state insurance departments’ authority to enforce these standards. In general, the standards are used to help ensure that premium rates are adequate, not excessive, reasonable in relation to the benefits provided, and not unfairly discriminatory. In overseeing health insurance premium rates, state insurance departments may review rate filings submitted by carriers. A rate filing may include information on premium rates a carrier proposes to establish, as well as documentation justifying the proposed rates, such as actuarial or other assumptions and calculations performed to set the rate.
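To make the rating-formula concept above concrete, the sketch below shows how a filed formula might combine a base rate with rating factors such as geographic area, age, and plan cost-sharing to produce a per-person or family premium. It is purely illustrative: the factor names, factor values, and base rate are hypothetical assumptions, not drawn from this report or from any actual rate filing.

```python
# Hypothetical per-member rating formula of the kind a carrier might file.
# All names and values below are illustrative assumptions, not actual filed factors.

BASE_RATE = 250.00  # assumed monthly base premium per member

AREA_FACTORS = {"urban": 1.10, "suburban": 1.00, "rural": 0.95}
AGE_BANDS = [(0, 0.65), (21, 1.00), (40, 1.30), (55, 1.90), (64, 2.40)]  # (band start age, factor)
PLAN_FACTORS = {"low_copay": 1.20, "standard": 1.00, "high_deductible": 0.85}


def age_factor(age: int) -> float:
    """Return the factor for the highest age band that does not exceed the member's age."""
    factor = AGE_BANDS[0][1]
    for band_start, band_factor in AGE_BANDS:
        if age >= band_start:
            factor = band_factor
    return factor


def member_premium(age: int, area: str, plan: str) -> float:
    """Monthly premium for one covered person under the hypothetical formula."""
    return BASE_RATE * age_factor(age) * AREA_FACTORS[area] * PLAN_FACTORS[plan]


def family_premium(ages, area: str, plan: str) -> float:
    """Family premium as the sum of per-member premiums for the policyholder and dependents."""
    return sum(member_premium(age, area, plan) for age in ages)


if __name__ == "__main__":
    # A policyholder aged 45 with a spouse aged 43 and two children, in a
    # suburban area, enrolled in a standard cost-sharing plan.
    print(round(family_premium([45, 43, 12, 9], "suburban", "standard"), 2))
```

A rate filing would typically justify each factor in a formula like this with claims experience and trend assumptions; the review practices described in the remainder of this report concern how states examine that justification.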
According to the Congressional Research Service (CRS) and others, most states require carriers to submit rate filings to state departments of insurance prior to implementation of new rates or rate changes. The authority of state insurance departments to review rate filings can vary. Some insurance departments have the authority to approve or disapprove all rate filings before they go into effect, while others do not have any authority to approve or disapprove rate filings. Further, in some states, authority to approve or disapprove rate filings varies by market. According to a report published by CRS, in 2010, insurance departments in 19 states were authorized by their state to approve or disapprove proposed premium rates in all markets before they went into effect—known as prior approval authority. Officials in states with prior approval authority may review a carrier’s rate filing using the state’s standards governing health insurance premium rates. In some cases, the state officials may also consider input from the public on the proposed rate, which can be obtained, among other ways, through public hearings or public comment periods. If a proposed rate does not meet a state’s standards, officials in states with prior approval authority can, among other things, deny the proposed rate or request that the carrier submit a new rate filing that addresses the issues that the state identified during its review. If a proposed rate meets a state’s standards, the officials may approve the rate filing. However, in some states, if the officials do not review a proposed rate filing and take action within a specified time period, the carrier’s submitted rate filing is deemed approved under state law. According to CRS, insurance departments in another 10 states were authorized to disapprove rate filings in all markets in 2010, but not to approve rate filings before a carrier could begin using the premium rate or rates proposed in the filing. In 9 of these states, carriers were required to submit rate filings prior to the effective date of the proposed rate—known as file and use authority. In one state, carriers could begin using a new premium rate and then file it with the state—known as use and file authority. In departments with file and use authority or use and file authority, the state officials may review a carrier’s rate filing using the state’s standards governing health insurance rates. If a proposed rate does not meet these standards, the officials can, among other things, deny the proposed premium rate or request that the carrier submit a new rate filing that addresses the issues that the state identified during its review. However, the state officials do not have the authority to approve a rate filing before the proposed premium rate goes into effect, and unless the rate filing has been disapproved, a carrier may begin using the new premium rate as of its effective date. In six states, insurance departments were not authorized to approve or disapprove rate filings in any market in 2010, according to CRS. In three of these states, a carrier was required to submit rate filings for informational purposes only—known as information only authority. In the other three states, carriers were not required to submit rate filings to the state. In addition, in one state, carriers were not required to file rates for approval or disapproval each time the carrier proposed to change premium rates.
Instead, carriers were required to file premium rates with the form that was filed when the plan was initially offered on the market—this form includes the language in the insurance contract. This is known as file with form authority. According to CRS, in the remaining 15 states, authority to approve or disapprove rate filings varied by market in 2010. For example, a state insurance department may have prior approval authority in the individual market, but have information only authority in the small-group and large-group markets subject to its regulation. PPACA, signed into law in March 2010, established a role for HHS by requiring the Secretary of HHS to work with states to establish a process for the annual review of unreasonable premium increases. PPACA also established a state grant program to be administered by HHS beginning in fiscal year 2010. HHS has taken steps to work with states to establish a process for reviewing premium rate increases each year. In December 2010, HHS published a proposed rule, and in May 2011, HHS issued a final rule that established a threshold for review of rate increases for the individual and small-group markets and outlined a process by which certain rate increases would be reviewed either by HHS or a state. The final rule also included a process by which HHS would determine if a state’s existing rate review program was effective. HHS would review rates in states determined not to have an effective rate review program; in these instances, HHS would determine if a rate increase over an applicable threshold in the individual and small-group markets was unreasonable based on whether it was excessive, unjustified, or unfairly discriminatory. In developing this final rule, HHS worked with states to understand various states’ rate review authorities. HHS has also begun administering a state grant program to enhance states’ existing rate review processes and provide HHS with information on state trends in premium increases in health insurance coverage. PPACA established this 5-year, $250 million state grant program to be administered by HHS, beginning in fiscal year 2010. HHS announced the first cycle of rate review grants in June 2010, awarding $46 million ($1 million per state) to the 46 states that applied for the grants. According to HHS, grant recipients proposed to use this Cycle I grant funding in a number of ways, including seeking additional legislative authority to review premium rate filings, expanding the scope of their reviews, improving the rate review process, and developing and upgrading technology. HHS announced the second cycle of rate review grants in February 2011 with $199 million available in grant funding to states. Through our survey and interviews with state officials, we found that oversight of health insurance premium rates—primarily reviewing and approving or disapproving rate filings submitted by carriers—varied across states in 2010. In addition, the reported outcomes of rate filing reviews varied widely across states in 2010, in particular, the extent to which rate filings were disapproved, withdrawn, or resulted in lower rates than originally proposed. Nearly all—48 out of 50—of the state officials who responded to our survey reported that they reviewed rate filings in 2010. Further, respondents from 30 states—over two-thirds of the states that provided data on the number of rate filings reviewed in 2010—reported that they reviewed at least 95 percent of rate filings received in 2010.
Among the survey respondents that reported reviewing less than 95 percent of rate filings in 2010, some reported that a portion of the rate filings were deemed approved without a review because they did not approve or disapprove them within a specified time period. Others reported that they did not review rate filings in certain markets. For example, respondents from 4 of these states reported that they did not review any rate filings received in the large-group market subject to their regulation in 2010. In addition, some respondents that reported reviewing rate filings in 2010 reported that they did not receive rate filings in certain markets. For example, respondents from 9 states—nearly one quarter of the states that provided information by market—reported that they did not receive rate filings in the large-group market in 2010. (See appendix II for more information on the results of our survey.) While our survey responses indicated that most states reviewed most of the rate filings they received in 2010, the responses to our survey also showed that how states reviewed the rate filings varied in 2010. Specifically, the practices reported by state insurance officials varied in terms of (1) the timing of rate filing reviews—whether rate filings were reviewed before or after the rates took effect, (2) the information considered during reviews, and (3) opportunities for consumer involvement in rate reviews. Respondents from 38 states reported that all rate filings they reviewed were reviewed before the rates took effect, while respondents from 8 states reported reviewing at least some rate filings after the rates went into effect. Some of the variation in the timing of rate filing reviews was consistent with differences across states in their reported authorities for state insurance departments to approve or disapprove rate filings. For example, survey respondents from some states reporting prior approval authority—such as Maryland and West Virginia—were among respondents from the 38 states that reported that all rate filings the state reviewed were reviewed before the rates took effect in 2010. Similarly, survey respondents from another state—Utah—reported that at least some rate filings were reviewed after the rates went into effect, because the department had file and use authority and it was not always possible to review rate filings before they went into effect. However, not all variation in states’ practices was consistent with differences in state insurance departments’ authorities to review and approve or disapprove rate filings. For example, survey respondents from California—who indicated that they did not have the authority to approve rate filings before carriers could begin using the rates—reported that all rate filings reviewed in 2010 were reviewed prior to the rates going into effect. According to our survey results and interviews with state insurance department officials, the information considered as a part of the states’ reviews of rate filings varied. For example, as shown in table 1, our survey results indicated that nearly all survey respondents reported reviewing information such as medical trend, a carrier’s rate history, and reasons for rate revisions. In contrast, fewer than half of state survey respondents reported reviewing carrier capital levels compared with states’ minimum requirements or compared with an upper threshold. (See appendix III for more detailed information about carrier capital levels.) 
Overall, when asked to select from a list of 13 possible types of information considered during rate filing reviews in 2010, 7 respondents reported that they reviewed fewer than 5 of the items that we listed, while 13 respondents reported reviewing more than 10 items. Some survey respondents also reported conducting relatively more comprehensive reviews and analyses of rate filings, while other respondents reported reviewing relatively little information or conducting cursory reviews of the information they received. For example, survey respondents from Texas reported that for all filings reviewed, all assumptions, including the experience underlying the assumptions, were reviewed by department actuaries for reasonableness, while respondents from Pennsylvania and Missouri reported that they did not always perform a detailed review of information provided in rate filings. Respondents from Pennsylvania reported that while they compared data submitted by carriers in rate filings to the carriers’ previous rate filings, the state’s department of insurance did not have adequate capacity to perform a detailed review of all rate filings received from carriers. Respondents from Missouri reported that they looked through the information provided by carriers in rate filings in 2010, but that they did not have the authority to do a more comprehensive review. We also found that the type of information states reported reviewing in 2010 varied by market or product type. For example, officials from Maine told us that they reviewed information such as medical trend and benefits provided when reviewing rate filings in the individual market and under certain circumstances in the small-group market. However, they told us that they conducted a more limited review in the small-group market if the carrier’s rate filing guaranteed a medical loss ratio of at least 78 percent and the plan covered more than 1,000 lives. In another example, Michigan officials reported that, in 2010, they reviewed a number of types of information for health maintenance organization (HMO) rate filings, including rating methods and charts that showed the levels of premium rate increases from the previous year. These officials told us that the state required HMO rates to be “fair, sound, and reasonable” in relation to the services provided, and that HMOs had to provide sufficient data to support this. In contrast, the officials told us that the state’s requirement for commercial carriers in the individual market was to meet a medical loss ratio of 50 to 65 percent, depending on certain characteristics of the insurance products. While state survey respondents reported a range of information that they considered during rate filing reviews, over half of the respondents reported independently verifying at least some of this information. The remaining respondents reported that they did not independently verify any information submitted by carriers in rate filings in 2010. Survey respondents that reported independently verifying information for at least some rate filings in 2010 also reported different ways in which information they received from carriers was independently verified. For example, survey respondents from Rhode Island reported that the standard of independent verification varied depending on the rate filing, and that the steps taken included making independent calculations with submitted rate filing data and comparing these calculations with external sources of data.
In another example, respondents from Michigan reported that in 2010 the department of insurance had staff conduct on-site reviews of carrier billing statements in the small-group and large-group markets in order to verify the information submitted in rate filings. Survey respondents from 14 states reported providing opportunities for consumers to be involved in the oversight of health insurance premium rates in 2010. Our survey results indicated that these consumer opportunities varied and included opportunities to participate in rate review hearings—which allow consumers and others to present evidence for or against rate increases—public comment periods, or consumer advisory boards. Survey respondents from six states reported conducting rate review hearings in at least one market in 2010 to provide consumers with opportunities to be involved in the oversight of premium rates. (See table 2 for information on reported opportunities for consumer involvement in states’ rate review practices in 2010.) For example, officials from Maine whom we interviewed told us that the insurance department held rate hearings for two large carriers in 2010 and that the size of the rate increase and the number of people affected were among the factors considered in determining whether to hold a rate hearing. The officials explained that if there is a hearing, the Maine Bureau of Insurance issues a notice and interested parties, such as the attorney general or consumer organizations, can participate by presenting evidence for or against rate increases. Maine officials said that, before rate review hearings are held, carriers share information about the rate filing, but that additional details identified at a hearing may trigger a request for further information. Maine officials said that after the state reviews all of the information, the state either approves the rate or disapproves the rate with an explanation of what the state would approve. Survey respondents from eight states reported that they provided consumers with opportunities to participate in public comment periods for premium rates in 2010. For example, respondents from Pennsylvania reported that rate filings were posted in the Pennsylvania Bulletin—a publication that provides information on rulemaking in the state—for 30 days for public review and comment. In addition, officials from Maine told us that they did not make decisions on rate filings until consumers had an opportunity to comment on proposed rate changes. These officials added that they are required to wait at least 40 days after carriers notify policyholders of a proposed rate change before making a decision, providing consumers with an opportunity to comment. Survey respondents from six states reported providing consumers with other opportunities to be involved in the oversight process. For example, respondents from two states—Rhode Island and Washington—reported that they provided consumers with opportunities to participate in consumer advisory boards in 2010. In addition, respondents from Texas reported that rate filings were available to consumers upon request and that the Texas Department of Insurance held stakeholder meetings during which consumer representatives participated in discussions about rate review regulations. The outcomes of states’ reviews of premium rates in 2010 also varied.
While survey respondents from 36 states reported that at least one rate filing was disapproved, withdrawn, or resulted in a rate lower than originally proposed in 2010, the percentage of rate reviews that resulted in these types of outcomes varied widely among these states. Specifically, survey respondents from 5 of these states—Connecticut, Iowa, New York, North Dakota, and Utah—reported that over 50 percent of the rate filings they reviewed in 2010 were disapproved, withdrawn, or resulted in rates lower than originally proposed, while survey respondents from 13 of these states reported that these outcomes occurred in less than 10 percent of rate reviews. An additional 6 survey respondents reported that they did not have any rate filings that were disapproved, withdrawn, or resulted in lower rates than originally proposed in 2010. (Fig. 1 provides information on the percentage and reported number of rate filings that were disapproved, withdrawn, or resulted in lower rates than originally proposed by state in 2010.) Some of the state survey respondents reported that at least one rate filing was disapproved, withdrawn, or resulted in rates lower than originally proposed in 2010 even though they did not have explicit authority to approve rate filings in 2010. For example, officials from the California Department of Insurance reported that even though the department did not have the authority to approve rate filings and could only disapprove rate filings if they were not compliant with certain state standards, such as compliance with a 70 percent lifetime anticipated loss ratio, the department negotiated with carriers to voluntarily reduce proposed rates in 2010. Survey respondents from California reported that 14 out of 225 rate filings in 2010 were disapproved, withdrawn, or resulted in rates lower than originally proposed. Specifically, officials from the California Department of Insurance told us that they negotiated with carriers to reduce proposed rates by 2 percentage points to 25 percentage points in 2010. These officials also told us that they negotiated with one carrier not to raise rates in 2010 although the carrier had originally proposed a 10-percent average increase in rates. In another example, although survey respondents from Alabama reported that they did not have prior approval authority, they reported that 22 rate filings were disapproved, withdrawn, or resulted in rates lower than originally proposed in 2010. States also varied in the markets in which rates were disapproved, withdrawn, or resulted in rates lower than originally proposed in 2010. For example, survey respondents from nine states—Alaska, Arkansas, Hawaii, Kansas, Kentucky, Maine, Nevada, New Jersey, and North Carolina—reported that while they reviewed rate filings in multiple markets, only reviews for the individual market resulted in rates that were disapproved, withdrawn, or resulted in rates lower than originally proposed. In other states, respondents reported that rate filings in multiple markets resulted in these types of outcomes in 2010. For example, survey respondents from 12 states reported that rate filings in all three markets resulted in these types of outcomes in 2010. Our survey of state insurance department officials found that 41 respondents from states that were awarded Cycle I HHS rate review grants have begun making three types of changes in order to enhance their states’ abilities to oversee health insurance premium rates. 
Specifically, respondents reported that they have taken steps in order to (1) improve their processes for reviewing premium rates, (2) increase their capacity to oversee premium rates, and (3) obtain additional legislative authority for overseeing premium rates. Improve rate review processes. More than four-fifths of the state survey respondents that reported making changes to their oversight of premium rates reported that they had taken various steps to improve the processes used for reviewing health insurance premium rates. These steps consisted primarily of the following: Examining existing rate review processes to identify areas for improvement. Twenty-two survey respondents reported taking steps to either review their existing rate review processes or develop new processes. More than two-thirds of these 22 respondents reported that their state contracted with outside actuarial or other consultants to review their states’ rate review processes and make recommendations for improvement. For example, respondents from Louisiana—who, according to officials, previously did not review most premium rate filings because they did not have the authority to approve or disapprove rates—reported that they had contracted with an actuary to help them develop a rate review process. In another example, respondents from North Carolina reported that an outside actuarial firm independently reviewed the department’s health insurance rate review process and recommended ways that the department could improve and enhance its review process. Similarly, respondents from Tennessee reported that they had obtained information from contract actuaries on how to enhance the state’s review of rate filings. In addition, four of these respondents reported taking steps to develop standardized procedures for reviewing rate filings. For example, respondents from Illinois reported that their insurance department is developing protocols for the collection, analysis, and publication of rate filings. Changing information that carriers are required to submit in rate filings. Thirteen survey respondents reported taking steps to change the rate filing information that carriers are required to submit to the state insurance department in order to improve reviews of rate filings. For example, respondents from Oregon reported that they will require carriers to provide in their rate filings a detailed breakdown of medical costs and how premiums are spent on medical procedures and services. In another example, respondents from Virginia reported that their state is expanding the information required from carriers in rate filing submissions by developing a uniform submission checklist. Incorporating additional data or analyses in rate filing reviews. Eleven survey respondents reported purchasing data or conducting additional data analyses in order to improve the quality of their states’ rate filing reviews. For example, respondents from Ohio reported taking steps to obtain national claims data on health costs, which, according to the respondents, would enable the department of insurance to use a separate data source to verify the costs submitted by carriers in their rate filings. In another example, respondents from Virginia reported that their state had begun undertaking detailed analyses of premium trends in the state’s individual and small-group markets.
According to the state respondents, these analyses will provide rate reviewers with benchmark industry values for various factors, such as underlying costs and benefit changes, which will help focus rate reviewers’ efforts on the drivers of a given rate increase. The respondents reported that these analyses will also allow reviewers to more easily identify potentially excessive or unreasonable rate increases. Involving consumers in the rate review process. Three survey respondents reported taking steps to increase consumer involvement in the rate review process. For example, respondents from Connecticut reported that the state’s insurance department has posted all rate filings received from carriers on its web site and created an online application that allows consumers to comment on the proposed rates. In another example, respondents from Oregon reported that the state’s insurance department has contracted with a consumer advocacy organization to provide comments on rate filings on a regular basis. Finally, respondents from Nevada reported that the state is taking steps to create a rate hearing process that will allow consumer advocates to represent the interests of consumers at the hearings. Increase capacity to oversee rates. Over two-thirds of the state survey respondents that reported making changes to rate oversight reported that they have begun to make changes to increase their capacity to oversee premium rates. These reported changes consisted primarily of hiring staff or outside actuaries, and improving the information technology systems used to collect and analyze rate filing data. Twenty survey respondents reported hiring additional staff or contracting with external actuaries and consultants to improve capacity in various ways, such as to review rates, to coordinate the rate review process or provide administrative support to review staff, and to train staff. For example, respondents from Oregon reported hiring staff to perform a comprehensive and timely review of the filings, and to review rate filings for completeness upon receipt. In another example, respondents from West Virginia reported that they used a portion of their HHS grant funding to obtain external actuarial support for reviewing rate filings. In addition, Illinois officials told us that they have taken steps to hire two internal actuaries, as well as other analytical staff, to help process rate filings and relieve the workload of current office staff. Seventeen respondents reported taking steps to increase their capacity to oversee premium rates by improving information technology and data systems used in the review process. Nine of these respondents reported taking steps to enhance their use of the System for Electronic Rate and Form Filing (SERFF)—a web-based electronic system developed by NAIC for states to collect electronic rate filings from carriers—such as by working with NAIC or by improving their insurance department’s information technology infrastructure to support the use of SERFF. Additionally, some respondents reported taking steps to make other improvements, such as creating or improving additional databases in order to collect rate filing data and analyze trends in rate filings.
For example, respondents from Wisconsin reported that their office contracted with an actuarial firm using HHS grant funds in part to develop a database to standardize, analyze, and monitor rates in the individual and small-group markets, which will enable the office to track historical rate change data and monitor rate changes. In another example, respondents from Illinois reported that they launched a web-based system in February 2011 for carriers to use when reporting rate changes, while continuing to work with NAIC on SERFF improvements with the intention of eventually merging the state’s data system with SERFF. Obtain additional legislative authority. More than a third of state survey respondents that reported making changes to rate oversight reported that their states have taken steps—such as introducing or passing legislation—in order to obtain additional legislative authority for overseeing health insurance premium rates. For example, respondents from Montana reported that legislation has been introduced that would give the state the authority to require carriers to submit rate filings for review. In another example, Illinois officials told us that the state has authority to require some carriers to submit rate filings, but the state does not have the authority to approve these filings before the rates take effect. The officials told us that legislation has been introduced to obtain prior approval authority. Additionally, respondents from North Carolina reported that the department has sought additional prior approval authority over small-group health insurance rates in addition to its existing prior approval authority over rates in the individual, small-group, and large-group health insurance markets. Finally, some states reported taking steps to review their current authority to determine if changes were necessary. HHS provided us with written comments on a draft version of this report. These comments are reprinted in appendix IV. HHS and NAIC also provided technical comments, which we incorporated as appropriate. In its written comments, HHS noted that health insurance premiums have doubled on average over the last 10 years, putting coverage out of reach for many Americans. Further, HHS noted that as recently as the end of 2010, fewer than half of the states and territories had the legal authority to reject a proposed increase if the increase was excessive, lacked justification, or failed to meet other state standards. In its written comments, HHS also noted the steps it is taking to improve transparency, help states improve their health insurance rate review, and assure consumers that any premium increases are being spent on medical care. Specifically, HHS noted its requirement that, starting in September 2011, certain insurers seeking rate increases of 10 percent or more in the individual and small-group markets publicly disclose the proposed increases and their justification for them. According to HHS, this requirement will help promote competition, encourage insurers to work towards controlling health care costs, and discourage insurers from charging unjustified premiums. In its comments, HHS also discussed the state grant program provided for by PPACA to help states improve their health insurance rate review. As our report notes, in addition to grants awarded in 2010, HHS announced in February 2011 that nearly $200 million in additional grant funds were available to help states establish an effective rate review program. 
Finally, the comments from HHS point out that its rate review regulation will work in conjunction with its medical loss ratio regulation released on November 22, 2010, which is intended to ensure that premiums are being spent on health care and quality-related costs, not administrative costs and executive salaries. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives were to describe (1) states’ practices for overseeing health insurance premium rates in 2010, including the outcomes of premium rate reviews, and (2) changes that states that received Department of Health and Human Services (HHS) rate review grants have begun making to enhance their oversight of health insurance premium rates. To describe states’ practices for overseeing health insurance premium rates in 2010, including the outcomes of rate reviews, we analyzed data from our web-based survey sent to officials of the insurance departments of all 50 states and the District of Columbia (collectively referred to as “states”). We obtained the names, titles, phone numbers, and e-mail addresses of our state insurance department survey contacts by calling each insurance department and asking for the most appropriate contact. The survey primarily contained questions on state practices for overseeing rates during calendar year 2010, such as the number of filings received and reviewed, the outcomes of reviews, the timing of state reviews, the factors considered during reviews, independent verification of carrier data, consumer involvement, and capacity and resources to review rates. During the development of our survey, we pretested it with insurance department officials from three states—Michigan, Tennessee, and West Virginia—to ensure that our questions and response choices were clear, appropriate, and answerable. We made changes to the content of the questionnaire based on their feedback. We conducted the survey from February 25, 2011, through April 4, 2011. Of the 51 state insurance departments, 50 completed the survey. However, not all states responded to each question in the survey. Additionally, some survey respondents reported that they did not have data that could be sorted by health insurance market. See appendix II for the complete results of the survey. Because we sent the survey of state insurance departments to the complete universe of potential respondents, it was not subject to sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question was interpreted, in the sources of information that were available to respondents, or in how the data were entered into a database or were analyzed could introduce unwanted variability into the survey results.
We encountered instances of nonsampling survey error in analyzing the survey responses. Specifically, in some instances, respondents provided conflicting, vague, or incomplete information. We generally addressed these errors by contacting the state insurance department officials involved and clarifying their responses. However, we did not independently verify the information and data provided by the state survey respondents. To obtain more in-depth information on states’ practices for overseeing rates in calendar year 2010, we interviewed state insurance department officials from a judgmental sample of five states: California, Illinois, Maine, Michigan, and Texas. To ensure that we identified a range of states for our in-depth interviews, we considered state insurance departments’ authorities in 2010 for reviewing health insurance premium rates, as reported by the National Association of Insurance Commissioners (NAIC); states’ plans to change their premium rate oversight practices, as described in their Cycle I rate review grant applications to HHS submitted in June and July of 2010; states’ population sizes; and states’ geographic locations. These criteria allowed us, in our view, to obtain information from insurance departments in a diverse mix of states, but the findings from our in-depth interviews cannot be generalized to all states because the states selected were part of a judgmental sample. We used information obtained during these interviews throughout this report. To describe changes that states have begun making to enhance their oversight of premium rates, we relied primarily on data collected in our state insurance department survey, in which we asked respondents to describe through open-ended responses steps taken to implement the changes to premium rate oversight that were proposed in states’ Cycle I rate review grant applications to HHS. We then performed a content analysis of these open-ended responses through the following process: From a preliminary analysis of the survey responses, we identified a total of 13 types of state changes such as hiring staff or consultants to review rates, involving consumers in the rate oversight process, and improving information technology. We then grouped those types of changes reported by survey respondents into three categories of reported changes. Two GAO analysts independently assigned codes to each response, and if respondents provided conflicting or vague information, we addressed these errors by contacting the state insurance department officials involved and clarifying their responses; however, we did not independently verify the information provided in the survey responses. To gain further information on state changes to rate oversight practices, we also asked about changes during our in-depth interviews with insurance department officials in five states described above. In addition, we interviewed officials from the Center for Consumer Information and Insurance Oversight within the Centers for Medicare & Medicaid Services, and reviewed portions of the states’ Cycle I rate review grant applications submitted to HHS and other relevant HHS documents. 
To gather additional information related to both of our research objectives, we interviewed a range of experts and organizations including NAIC, the American Academy of Actuaries, America’s Health Insurance Plans, two large carriers based on their number of covered lives, NAIC consumer representatives (individuals who represent consumer interests at meetings with NAIC), and various advocacy groups such as Families USA and Consumers Union. We conducted this performance audit from September 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents additional results from our survey of insurance department officials in all 50 states and the District of Columbia on their oversight of health insurance premium rates in 2010, and changes they have begun to make to enhance their oversight of health insurance premium rates. Table 3 presents survey responses by state on the number of rate filings that were received, reviewed, and disapproved, withdrawn, or resulted in rates lower than originally proposed in the individual, small-group, and large-group markets in 2010. Table 4 presents the number of survey respondents that reported that the state insurance department required actuarial justification for rate filings, and whether the justifications were reviewed by an actuary in 2010 in the individual, small-group, and large-group markets. Table 5 presents survey responses on states’ capacity and resources to review rate filings in 2010. Table 6 presents information on the types of changes that survey respondents that had been awarded HHS Cycle I rate review grants reported making to enhance their oversight of health insurance premium rates. State officials monitor carriers’ capital levels to help ensure that carriers can meet their financial obligations. State officials’ primary objective when monitoring capital levels has been to ensure the adequacy of carriers’ capital to make sure that consumers and health care providers are not left with unpaid claims. The focus, therefore, has been on monitoring capital levels to ensure that they exceed minimum requirements. Officials from some states have noted that they review this information when reviewing rate filings. NAIC developed a formula and model law for states to use in determining and regulating the adequacy of carriers’ capital. The risk-based capital (RBC) formula generates the minimum amount of capital that a carrier is required to maintain to avoid regulatory action by the state. The formula takes into account, among other things, the risk of medical expenses exceeding the premiums collected. According to NAIC, 37 states had adopted legislation or regulations based on NAIC’s Risk-Based Capital (RBC) for Health Organizations Model Act as of July 2010 in order to monitor carriers’ capital. However, an NAIC official told us that all states must follow the RBC model act in order to meet NAIC accreditation standards. Under NAIC’s model law, the baseline level at which a state may take regulatory action against a carrier is the authorized control level. 
If a carrier’s total adjusted capital—which includes shareholders’ funds and adjustments on equity, asset values, and reserves—dips below its authorized control level, the state insurance regulator can place the carrier under regulatory control. The RBC ratio is the ratio of the carrier’s total adjusted capital to its authorized control level; state officials become involved when the ratio drops below 200 percent. If the RBC ratio is 200 percent or more, no action is required. As shown in table 7 below, NAIC data show that, from 2005 through 2010, except for carriers with less than $10 million in assets, carriers’ median RBC ratios were generally higher for carriers reporting greater assets. In addition to the contact named above, Kristi Peterson, Assistant Director; George Bogart; Kelly DeMots; Krister Friday; Linda Galib; and Peter Mangano made key contributions to this report.
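The risk-based capital check just described reduces to a simple ratio comparison. The short sketch below restates it in code using made-up dollar amounts; it collapses the several action levels defined in NAIC's model law into the single 200 percent threshold discussed above and is illustrative only.

```python
# Illustrative RBC ratio check with hypothetical dollar amounts.
# NAIC's model law defines several intermediate action levels; this sketch
# shows only the 200 percent level below which state officials become involved.

def rbc_ratio(total_adjusted_capital: float, authorized_control_level: float) -> float:
    """RBC ratio, expressed as a percentage of the authorized control level."""
    return 100.0 * total_adjusted_capital / authorized_control_level


def needs_regulatory_attention(total_adjusted_capital: float,
                               authorized_control_level: float) -> bool:
    """True when the RBC ratio falls below 200 percent."""
    return rbc_ratio(total_adjusted_capital, authorized_control_level) < 200.0


if __name__ == "__main__":
    # Hypothetical carrier: $60 million in total adjusted capital against a
    # $20 million authorized control level yields a 300 percent RBC ratio,
    # so no action is required.
    print(rbc_ratio(60_000_000, 20_000_000))                   # 300.0
    print(needs_regulatory_attention(60_000_000, 20_000_000))  # False
```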
With premiums increasing for private health insurance, questions have been raised about the extent to which increases are justified. Oversight of the private health insurance industry is primarily the responsibility of states. In 2010, the Patient Protection and Affordable Care Act required the Department of Health and Human Services (HHS) to award grants to assist states in their oversight of premium rates. GAO was asked to provide information on state oversight of premium rates. In this report, GAO describes (1) states' practices for overseeing health insurance premium rates in 2010, including the outcomes of premium rate reviews; and (2) changes that states that received HHS rate review grants have begun making to enhance their oversight of premium rates. GAO surveyed officials from insurance departments in 50 states and the District of Columbia (referred to as states) about their practices for overseeing premium rates in 2010 and changes they have begun making to enhance their oversight. GAO received responses from all but one state. GAO also interviewed officials from California, Illinois, Maine, Michigan, and Texas to gather additional information on state practices. GAO selected these states based on differences in their authority to oversee premium rates, their proposed changes to their oversight, their size, and their geographic location. GAO also interviewed officials from advocacy groups and two large carriers to obtain contextual information. GAO found that oversight of health insurance premium rates--primarily reviewing and approving or disapproving rate filings submitted by carriers--varied across states in 2010. While nearly all--48 out of 50--of the state officials who responded to GAO's survey reported that they reviewed rate filings in 2010, the practices reported by state insurance officials varied in terms of the timing of rate filing reviews, the information considered in reviews, and opportunities for consumer involvement in rate reviews. Specifically, respondents from 38 states reported that all rate filings reviewed were reviewed before the rates took effect, while other respondents reported reviewing at least some rate filings after they went into effect. Survey respondents also varied in the types of information they reported reviewing. While nearly all survey respondents reported reviewing information such as trends in medical costs and services, fewer than half of respondents reported reviewing carrier capital levels compared with state minimums. Some survey respondents also reported conducting comprehensive reviews of rate filings, while others reported reviewing little information or conducting cursory reviews. In addition, while 14 survey respondents reported providing consumers with opportunities to be involved in premium rate oversight, such as participation in rate review hearings or public comment periods, most did not. Finally, the outcomes of states' reviews of rate filings varied across states in 2010. Specifically, survey respondents from 5 states reported that over 50 percent of the rate filings they reviewed in 2010 were disapproved, withdrawn, or resulted in rates lower than originally proposed, while survey respondents from 19 states reported that these outcomes occurred in less than 10 percent of their rate reviews.
GAO's survey of state insurance department officials found that 41 respondents from states that were awarded HHS rate review grants reported that they have begun making changes in order to enhance their states' abilities to oversee health insurance premium rates. For example, about half of these respondents reported taking steps to either review their existing rate review processes or develop new processes. In addition, over two-thirds reported that they have begun to make changes to increase their capacity to oversee premium rates, including hiring staff or outside actuaries, and improving the information technology systems used to collect and analyze rate filing data. Finally, more than a third reported that their states have taken steps--such as introducing or passing legislation--in order to obtain additional legislative authority for overseeing health insurance premium rates. HHS and the National Association of Insurance Commissioners (NAIC) reviewed a draft of this report. In its written comments, HHS highlighted the steps it is taking to improve transparency, help states improve their health insurance rate review, and assure consumers that any premium increases are being spent on medical care. HHS and NAIC provided technical comments, which were incorporated as appropriate.
White supremacists, anti-government extremists, radical Islamist extremists, and other ideologically inspired domestic violent extremists have been active in the United States for decades. Examples of attacks include the 1993 World Trade Center bombing by radical Islamists, in which 6 persons were killed; and the 1995 Oklahoma City bombing of the Alfred P. Murrah federal building by anti-government far right individuals, in which 168 lives were lost. The September 11, 2001, attacks account for the largest number of fatalities in the United States in a single or closely related attack resulting from violent extremism in recent decades. While the September 11, 2001, attacks were perpetrated by foreign violent extremists, from September 12, 2001, through December 31, 2016, attacks by domestic or “homegrown” violent extremists in the United States resulted in 225 fatalities, according to the Extremist Crime Database (ECDB). Of these, 106 were killed by far right violent extremists in 62 separate incidents, and 119 were victims of radical Islamist violent extremists in 23 separate incidents. Figure 1 shows the locations and number of fatalities involved in these incidents. A detailed list of the incidents can be found in appendix II. According to the ECDB, activities of far left wing violent extremist groups did not result in any fatalities during this period. Since September 12, 2001, the number of fatalities caused by domestic violent extremists has ranged from 1 to 49 in a given year. As shown in figure 2, fatalities resulting from attacks by far right wing violent extremists have exceeded those caused by radical Islamist violent extremists in 10 of the 15 years, and were the same in 3 of the years since September 12, 2001. Of the 85 violent extremist incidents that resulted in death since September 12, 2001, far right wing violent extremist groups were responsible for 62 (73 percent) while radical Islamist violent extremists were responsible for 23 (27 percent). The total number of fatalities is about the same for far right wing violent extremists and radical Islamist violent extremists over the approximately 15-year period (106 and 119, respectively). However, 41 percent of the deaths attributable to radical Islamist violent extremists occurred in a single event—an attack at an Orlando, Florida, night club in 2016 (see fig. 2). Details on the locations and dates of the attacks can be found in appendix II. In October 2016, the federal government defined the U.S. approach to countering violent extremism as proactive actions to counter efforts by extremists to recruit, radicalize, and mobilize followers to violence. The three parts of the U.S. approach to CVE efforts are: (1) empowering communities and civil society; (2) messaging and counter-messaging; and (3) addressing causes and driving factors. CVE activities are different from traditional counterterrorism efforts, such as collecting intelligence, gathering evidence, making arrests, and responding to incidents, in that they generally focus on preventing an individual from finding or acting out on a motive for committing a crime, as shown in figure 3. In February 2015, the White House released a fact sheet stating that CVE encompasses the preventative aspects of counterterrorism as well as interventions to undermine the attraction of violent extremist movements and ideologies that seek to promote violence.
According to the national strategy, CVE actions are intended to address the conditions and reduce the factors that most likely contribute to recruitment and radicalization by violent extremists. CVE efforts, as defined by the White House, are not to include gathering intelligence or performing investigations for the purpose of criminal prosecution. CVE efforts aim to address the root causes of violent extremism through community engagement, including: Building awareness—through briefings on the drivers and indicators of radicalization and recruitment to violence. For example, U.S. Attorneys’ and DHS offices host community outreach meetings in which they provide information on identifying suspicious activity. Countering violent extremist narratives—directly addressing and countering violent extremist recruitment messages, such as encouraging alternative messages from community groups online. For example, DOJ partnered with the International Association of Chiefs of Police to produce awareness briefs on countering online radicalization. Emphasizing community-led intervention—supporting community efforts to disrupt the radicalization process before an individual engages in criminal activity. For example, the FBI aims to provide tools and resources to communities to help them identify social workers and mental health professionals who can help support at-risk individuals and prevent them from becoming radicalized. Recognizing that most CVE activities occur at the community level, DHS and DOJ officials leading the CVE Task Force describe the federal role in CVE as a combination of providing research funding and training materials, and educating the public through activities such as DHS- or DOJ-hosted community briefings in which specific threats and warning signs of violent extremism are shared. According to FBI officials, these outreach efforts also provide an opportunity to build relationships in the community and help clarify the FBI’s role in engaging community organizations. According to DHS officials, DHS also conducts regular community engagement roundtables in multiple cities that provide a forum for communities to comment on and hear information about Department activities, including CVE. In addition to community meetings, education of the public is to occur through a variety of outreach channels, including websites, social media, conferences, and communications to state and local governments, including law enforcement entities. Since 2010, federal agencies have initiated several steps towards countering violent extremism. In November 2010, a National Engagement Task Force, led by DHS and DOJ, was established to help coordinate community engagement efforts to counter violent extremism. The task force was to include all departments and agencies involved in relevant community engagement efforts and focus on compiling local, national, and international best practices and disseminating them to the field, especially to U.S. Attorneys’ Offices. The task force was also responsible for connecting field-based federal components involved in community engagement to maximize partnerships, coordination, and resource-sharing. According to DHS officials, the National Engagement Task Force disbanded in 2013. In September 2015, DHS recognized that its CVE efforts were scattered across a number of components and lacked specific goals and tangible measures of success.
DHS created the Office of Community Partnerships (OCP) to consolidate its programs, foster greater involvement of the technology sector and philanthropic efforts to support private CVE efforts, and to enhance DHS grant-making in the area. At the same time, federal agencies involved in CVE recognized that the CVE landscape had changed since the issuance of the national strategy and SIP in late 2011. According to DHS and DOJ officials, ISIS had emerged as a threat, and an increase in internet recruiting by violent extremist groups since 2011 required an update to the SIP. In 2015, NCTC led a review to ensure that the federal government was optimally organized to carry out the CVE mission. According to DOJ and DHS officials leading CVE activities, the review validated the objectives of the 2011 strategy, but identified gaps in its implementation. Specifically, representatives from 10 departments and agencies contributing to CVE efforts identified four needs: infrastructure to coordinate and prioritize CVE activities across the federal government and with stakeholders; clear responsibility, accountability, and communication internally and with the public; broad participation of departments and agencies outside national security lanes; and a process to assess, prioritize, and allocate resources to maximize impact. In response, in January 2016, a new CVE task force was created to coordinate government efforts and partnerships to prevent violent extremism in the United States. The CVE Task Force is a permanent interagency task force hosted by DHS with overall leadership provided by DHS and DOJ. Staffing is to be provided by representatives from DHS, DOJ, FBI, NCTC, and other supporting departments and agencies. The Task Force is administratively housed at DHS and is to rotate leadership between DHS and DOJ bi-annually. The interagency CVE Task Force was established to: (1) synchronize and integrate whole-of-government CVE programs and activities; (2) conduct ongoing strategic planning; and (3) assess and evaluate CVE efforts. In October 2016, the Task Force, through the White House, issued an updated SIP for the 2011 national strategy. The 2016 SIP outlines the general lines of effort that partnering agencies will aim to undertake to guide their coordination of federal efforts and implement the national strategy. These lines of effort include: Research and Analysis: The Task Force is to coordinate federal support for ongoing and future CVE research. Since 2011, DHS has funded 98 CVE related research projects and DOJ has funded 25. Coordination through this line of effort aims to prevent overlap and duplication while identifying guidelines for future evaluations. This line of effort also aims to identify and share guidelines for designing, implementing, and evaluating CVE programs. Engagements and Technical Assistance: The Task Force is to coordinate federal outreach to and engagement with communities. DHS, FBI, U.S. Attorneys, and other departments regularly provide information to local community and law enforcement leaders. To date, much of the information provided has been from the individual perspective of each agency and its mission rather than a coordinated CVE mission. This line of effort aims to coordinate these outreach efforts to synchronize the messages that are reaching the communities. 
● Interventions: This line of effort aims to develop intervention options, including alternative pathways or "off-ramps" for individuals who appear to be moving toward violent action but who have not yet engaged in criminal activity. Law enforcement officials are looking for ways to support community-led programs, particularly when they focus on juveniles and others who have the potential to be redirected away from violence. The CVE Task Force, in coordination with DOJ and the FBI, aims to support local multidisciplinary intervention approaches.
● Communications and Digital Strategy: Recognizing that general CVE information and resources are not easily accessible to stakeholders, the CVE Task Force aims to create a new online platform, including a public website, to ensure that stakeholders around the country are able to quickly and easily understand national CVE efforts. This platform aims to serve as the national digital CVE clearinghouse by centralizing and streamlining access to training; research, analysis, and lessons learned; financial resources and grant information; networks and communities of interest; and intervention resources.
According to the 2016 SIP, the lines of effort were developed to align with the three priority action areas outlined in the 2011 national strategy and SIP: (1) enhancing engagement with and support to local communities; (2) building government and law enforcement expertise for preventing violent extremism; and (3) countering violent extremist propaganda while promoting our ideals. Also in October 2016, DHS issued its own strategy outlining the specific actions it aims to take to meet its CVE mission. Figure 4 shows a timeline of federal CVE milestones and activities.
Consistent with direction in the 2011 National Strategy, federal CVE efforts have generally been initiated by leveraging existing programs and without a specific CVE budget. For example, activities that address violence in schools or hate crimes in communities may be relevant to constraining or averting violent extremism but receive funding as part of a different program. In fiscal year 2016, the DHS Office of Community Partnerships operated with a $3.1 million budget and focused on raising awareness of violent extremists' threats in communities, building relationships with community organizations that are conducting CVE efforts, and coordinating CVE efforts within DHS. Additionally, DHS's fiscal year 2016 appropriation included $50 million to address emergent threats from violent extremism and from complex, coordinated terrorist attacks. Of the $50 million, DHS awarded $10 million through a competitive grant program; designated $1 million for a Joint Counterterrorism Workshop; and designated the remaining $39 million to be competitively awarded under the existing Homeland Security Grant Program.
Developed to help execute the 2011 National Strategy for Empowering Local Partners to Prevent Violent Extremism in the United States, the 2011 SIP detailed federal agency roles and responsibilities for current and future CVE efforts. The SIP outlined 44 tasks to address CVE domestically and called for the creation of an Assessment Working Group to measure CVE's progress and effectiveness. From our analysis of agency documentation and other evidence as to whether tasks had been implemented, we determined that agencies implemented almost half of the 44 domestically focused tasks identified in the 2011 SIP.
Specifically, from December 2011 through December 2016, federal agencies implemented 19 tasks, had 23 tasks in progress, and had not yet taken action on 2 tasks (see fig. 5 below and app. III for additional details). While progress was made in implementing the tasks, the Assessment Working Group was never formed, according to DHS and DOJ officials responsible for implementing the SIP. Moreover, as of December 2016, there had been no comprehensive assessment of the effectiveness of the federal government's CVE efforts. The 44 domestically oriented tasks identified in the 2011 SIP were focused on addressing three core CVE objectives: community outreach, research and training, and capacity building. Below is a description of the progress made and challenges remaining by core CVE objective.
Community outreach aims to enhance federal engagement and support to local communities that may be targeted by violent extremism. For example, community outreach might include expanding relationships with local businesses and communities to identify or prevent violent extremism or integrating CVE activities into community-oriented policing efforts. Of the 17 community outreach tasks in the SIP, we determined that agencies implemented 8 tasks and that 9 remained in progress. In general, agencies implemented tasks focused on expanding CVE efforts in local communities and identifying ways to increase funding for CVE activities, among other things. For example, DOJ expanded CVE activities to communities targeted by violent extremism through a series of outreach meetings led by the U.S. Attorneys' Offices. Further, both DHS and DOJ identified funding within existing appropriations to incorporate CVE into eligible public safety and community resilience grants. However, community outreach tasks that remained in progress include tasks related to reaching communities in the digital environment. For example, DHS aims to build relationships with the high-tech and social media industry and continues to meet with officials to discuss how to address violent extremism online. In providing a status update on such activities, DHS recognized this as an area that continues to need attention.
Research and training relates to understanding the threat of violent extremism, sharing information, and leveraging it to train government and law enforcement officials. For example, activities under research and training might include funding or conducting analysis on CVE-related topics or developing training curriculums for CVE stakeholders. Of the 19 research and training tasks we assessed in the SIP, we determined that agencies implemented 9 tasks, had 9 tasks in progress, and had not yet taken action on 1 task. Agencies implemented activities related to continuing research on CVE and integrating CVE training into federal law enforcement training, among other things. For example, DHS, through its Science and Technology Directorate, continued its research and reporting on the root causes of violent extremism and funded an open source database on terrorism, as stated in the SIP. DHS also implemented a task related to integrating CVE content into counter-terrorism training conducted at the Federal Law Enforcement Training Center. Additionally, NCTC implemented tasks related to expanding awareness briefings to state and local law enforcement and developing and reviewing guidance on CVE training, while the FBI implemented a task related to completing a CVE coordination office.
Further, tasks related to helping non-security federal partners incorporate CVE training remain in progress. For example, DHS was given responsibility for collaborating with non-security federal partners to build CVE training modules that can be incorporated into existing programs related to public safety, violence prevention, and resilience. DHS acknowledged that this task needs attention and noted that, while initial steps were taken, the interagency effort needs to better define roles and opportunities for future collaborations. However, agencies have not yet taken action on implementing CVE in federal prisons.
Capacity building tasks relate to investments of resources into communities to enhance the effectiveness and future sustainability of their CVE efforts. Capacity building might, for example, include expanding the use of informational briefings to a wider audience or outreach to former violent extremists to counter violent narratives. Of the 8 capacity building tasks we assessed in the SIP, we determined that agencies implemented 2 tasks, 5 tasks were in progress, and action had not yet been taken on 1 task. For example, one of the implemented capacity building tasks included providing regular briefings on CVE to Congress and others. In implementing this task, DHS participated in over two dozen briefings and hearings for Congress. Capacity building tasks that were in progress included brokering connections with the private sector and building a public website on community resilience and CVE, among others. DHS had, for example, taken steps to broker connections with the private sector. DHS officials also noted making initial progress with YouTube and the Los Angeles Police Department in developing campaigns against violent extremism, but recognized this as an area that continues to need attention. Despite progress in 7 of 8 capacity building tasks, action had not yet been taken on a task related to learning from former violent extremists to directly challenge violent extremist narratives. According to DHS officials, legal issues regarding access to former violent extremists are being explored, and DOJ will lead this task going forward.
Although we were able to determine the status of the 44 domestically focused CVE tasks from the 2011 SIP, we could not determine the extent to which the United States is better off today as a result of its CVE effort than it was in 2011. That is because no cohesive strategy with measurable outcomes has been established to guide the multi-agency CVE effort toward its goals. Neither the 2011 SIP nor its 2016 update provides a cohesive strategy—one that sets forth a coordinated and collaborative effort among partner agencies—that includes measurable outcomes. For example, the 2016 SIP includes a task on strengthening collaboration with the private sector and academia to pursue CVE-relevant communications tools and capabilities. The task describes the benefits of such collaboration but does not include any information on how the task will be implemented, timeframes for implementation, desired outcomes, or indicators for measuring progress toward those outcomes. Similarly, the 2016 SIP includes a task on identifying and supporting the development of disengagement and rehabilitation programs.
While the SIP describes research conducted in partnership with one such program that provides pathways out of violent extremism, it does not include any information on how the federal government will identify other such groups or what kind of support it might provide. Absent defined measurable outcomes, it is unclear how these tasks will be implemented and how they will measurably contribute to achieving the federal CVE goals.
Consistent with the GPRA Modernization Act of 2010, establishing a cohesive strategy that includes measurable outcomes can provide agencies with a clear direction for successful implementation of activities in multi-agency cross-cutting efforts. Participants in multi-agency efforts each bring different views, organizational cultures, missions, and ways of operating. They may even disagree on the nature of the problem or issue being addressed. As such, developing a mutually agreed-upon cohesive strategy with measurable outcomes can strengthen agencies' commitment to working collaboratively and enhance the effectiveness of the CVE effort while keeping stakeholders engaged and invested. Absent a cohesive strategy with defined measurable outcomes, CVE partner agencies have been left to develop and take their own individual actions without a clear understanding of whether and to what extent their actions will reduce violent extremism in the United States. For example, the Department of Education and the Department of Health and Human Services are listed as two of the agencies responsible for implementing the 2016 SIP. However, the tasks for which they are listed as partners do not include measurable outcomes to guide implementation. As another example, in 2016 DHS issued its own CVE strategy for the department, intended to align with the 2016 SIP. It is specific to DHS components and programs; establishes goals, outcomes, and milestones; and states that DHS will assess progress. However, DHS's CVE strategy does not demonstrate how these activities will integrate with the overall federal CVE effort. Further, it establishes goals and outcomes for only one of the many departments responsible for CVE. DHS and DOJ officials speaking on behalf of the CVE Task Force stated that, as of November 2016, they had not determined whether other stakeholder agencies, such as DOJ, the Department of Education, or the Department of Health and Human Services, would be developing similar strategies.
In January 2016, the CVE Task Force was established as the multi-agency body charged with coordinating government efforts and partnerships to prevent violent extremism in the United States. As such, it is best positioned to work with federal stakeholders in developing a cohesive strategy with measurable outcomes. More details on the CVE Task Force are provided in the following section. Our previous work has shown that agencies across the federal government have benefited from applying such strategies to cross-cutting programs. By developing a cohesive strategy with measurable outcomes, CVE stakeholders will be better able to guide their efforts to ensure that measurable progress is made in CVE.
The CVE Task Force has not established a process for assessing whether the federal government's CVE efforts are working. Establishing a process for assessing progress is a consistent practice of the successful multi-agency collaborative efforts we have previously reviewed. Moreover, such assessments can help identify successful implementation and gaps across agencies.
Recognizing the need for assessing the effects of CVE activities, the 2011 SIP described a process in which departments and agencies were to be responsible for assessing their specific activities in coordination with an Assessment Working Group. Agencies were to develop a process for identifying gaps, areas of limited progress, resource needs, and any additional factors resulting from new information on the dynamics of radicalization to violence. Further, the progress of the participating agencies was to be evaluated and reported annually to the President. However, according to DHS and DOJ officials, the Assessment Working Group was never created and the process described in the SIP was not developed. As a result, no process or method for assessing the federal CVE effort's progress and holding stakeholders accountable was established.
Absent a mechanism for assessing the federal CVE effort, in 2015 NCTC, along with 10 federal agencies, including DHS and DOJ, undertook an effort to review progress agencies had made in implementing their CVE responsibilities. According to DHS and DOJ officials, the review, along with those conducted by the supporting agencies, helped identify areas for continued focus and improvement in fulfilling the CVE effort. Specifically, the review team identified the need for clear responsibility and accountability across the government and with the public. It also identified the need for a process to assess, prioritize, and allocate resources to maximize impact, among other needs. Informed by these efforts, in January 2016 the CVE Task Force was established as a permanent interagency task force with overall leadership provided by DHS and DOJ. As previously described, the task force was charged with coordinating government efforts and partnerships to prevent violent extremism in the United States. Moreover, the CVE Task Force was assigned responsibility for synchronizing and integrating CVE programs and activities and assessing and evaluating them.
The CVE Task Force worked with its partner agencies to develop the 2016 SIP but did not identify a process or method for assessing whether the overall CVE effort is working. Instead, the SIP states that the Task Force will use prior evaluations of individual programs to develop guidelines that departments and agencies can use to evaluate their own programs. Moreover, according to CVE Task Force officials, they do not believe that assessing the overall effectiveness of the federal CVE effort is their responsibility. Continuing with the approach identified in the 2016 SIP is likely to limit the federal government's understanding of CVE progress to that of individual activities rather than the entirety of the federal CVE effort. Agencies have assessed the effectiveness of some individual CVE programs, but those assessments do not address the overarching effectiveness of the CVE effort, and evaluating individual CVE initiatives alone will not provide an overall assessment of progress made in the federal CVE effort. For example, DOJ funded an evaluation of a community-based CVE programming effort led by the World Organization for Resource Development and Education (WORDE). The evaluation assessed WORDE's effectiveness in promoting positive social integration and encouraging public safety in Montgomery County, Maryland.
The evaluation looked at community-based participation in CVE programs, community awareness of risk factors of radicalization to violent extremism, and the community's natural inclinations in response to these factors. The evaluation provides some insights into how WORDE's program worked in Montgomery County, Maryland, but not into the overall federal CVE effort. Absent a consistent process for assessing the federal CVE effort as a whole, the federal government lacks the information needed to assess the extent to which the WORDE effort and others have countered violent extremism. Further, stakeholders will be limited in their efforts to identify successes and gaps and allocate or leverage resources effectively. Given that the CVE Task Force, as a permanent interagency body, is charged with synchronizing and integrating CVE programs and activities and assessing and evaluating them, it should establish a process for assessing overall progress in CVE, including its effectiveness.
Combatting violent extremism is of critical importance for the United States. Extremist attacks of all kinds can have damaging effects on the perceived safety of our nation. It is therefore imperative that the United States employ effective means for preventing and deterring violent extremism and related attacks. To help confront this critical need, in 2011 the President issued a CVE strategy and corresponding implementation plan. However, more than 5 years have passed, and the federal government has not developed a cohesive strategy among stakeholder agencies that provides measurable outcomes to guide the collaborative implementation of CVE activities. While the CVE Task Force provided a forum for coordination and led the effort to develop a new SIP, the plan does not provide stakeholder agencies with specific direction and measures to identify successes and gaps in the implementation of their activities. In the absence of a cohesive strategy, DHS has developed its own strategy, while no such roadmap is in place for the collaborative implementation of activities by all stakeholder agencies. As the entity responsible for the synchronization and integration of CVE programs across the government, the CVE Task Force is well positioned to develop a cohesive strategy that provides all stakeholder agencies with a clear path forward in achieving the federal CVE effort's desired outcomes.
The CVE Task Force, established in part to assess and evaluate CVE programs, has also not established an approach for assessing overall progress. Without consistent measures and methodologies for evaluating CVE as a whole, the federal government lacks the information needed to assess the extent to which stakeholder agencies are achieving their goals. Without this information, stakeholders will not be able to identify successes and gaps and allocate or leverage resources effectively. When dealing with programs and activities that are designed to keep Americans safe from the threat of violent extremism, agency leaders and policy makers need to know how well the federal government is doing in implementing these activities. Establishing an approach for assessing the progress of the overall CVE effort can help the CVE Task Force enhance its understanding of what has been achieved as a result of CVE activities.
To help identify what domestic CVE efforts are to achieve and the extent to which investments in CVE result in measurable success, the Secretary of Homeland Security and the Attorney General—as heads of the two lead agencies responsible for coordinating CVE efforts—should direct the CVE Task Force to:
1. Develop a cohesive strategy that includes measurable outcomes for CVE activities.
2. Establish and implement a process to assess overall progress in CVE, including its effectiveness.
We provided a draft of this report to the Departments of Education, Health and Human Services, Homeland Security (DHS), and Justice (DOJ) and the Office of the Director of National Intelligence (ODNI). In its written comments, reproduced in appendix IV, DHS concurred with both of our recommendations. In comments provided in an email from the DOJ Audit Liaison, DOJ also concurred with both recommendations. In addition, DHS, DOJ, and ODNI provided technical comments, which we incorporated as appropriate. The Departments of Education and Health and Human Services did not comment on the report.
DHS, in its letter, concurred with our recommendation to develop a cohesive strategy that includes outcomes for CVE activities. DHS also recognized that additional strategic-level performance documentation will improve coordination and collaboration tasks among partner agencies, as well as define how cross-cutting tasks will be implemented and how they will measurably contribute to achieving federal CVE goals. DHS noted that the CVE Task Force is developing measurable outcomes to support and guide the development of performance, effectiveness, and benchmarks for federally sponsored CVE efforts. DHS stated that the CVE Task Force plans to report on the progress of implementing the 2016 Strategic Implementation Plan in January 2018. DOJ also concurred with the recommendation in comments received by email.
DHS also concurred with our recommendation to establish and implement a process to assess overall progress in CVE, including its effectiveness. DHS, in its comment letter, recognized that such a process will drive an understanding of the contributions of individual activities in the federal CVE effort. In DHS's response, the department maintained that the CVE Task Force will not be engaged in specific evaluations of its members or partners, but instead will develop resource guides on methodologies and measures that federal and non-government partners can use in evaluating their own CVE efforts. As noted in our report, the CVE Task Force's approach of providing guidance on evaluations might enhance the evaluation efforts of individual programs, but establishing a process that assesses progress and effectiveness across the federal CVE effort can provide better insight into the successes and gaps within this multi-agency collaborative effort. DOJ also concurred with the recommendation in comments received by email.
We are sending copies of this report to the Secretary of Education, the Secretary of Health and Human Services, the Secretary of Homeland Security, the Attorney General, the Director of National Intelligence, appropriate congressional committees and members, and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made significant contributions to this report are listed in appendix IV.
This report addresses the extent to which (1) the Department of Homeland Security (DHS), the Department of Justice (DOJ), and other key stakeholders tasked with Countering Violent Extremism (CVE) in the United States have implemented the 2011 Strategic Implementation Plan (SIP) and (2) the federal government has developed a strategy to implement CVE activities, and the CVE Task Force has developed a process for assessing overall progress.
To assess the extent to which DHS, DOJ, and other key stakeholders tasked with CVE in the United States implemented the 2011 SIP, we collected and analyzed information from each agency responsible for leading a task in the 2011 SIP, which included DHS, DOJ, the Federal Bureau of Investigation (FBI), and the National Counterterrorism Center (NCTC). The FBI was treated as a lead agency for reporting purposes because it was listed as a lead agency in the SIP. These four agencies were responsible for domestic CVE activities and were collectively responsible for implementing 44 out of the 47 tasks in the SIP. We did not analyze the implementation of 3 of the 47 tasks because they were international in scope and led by an agency outside of the four agencies responsible for domestic CVE. Specifically, we did not analyze the Department of the Treasury's efforts to address terrorism financing, the Department of Defense's effort to provide training to military personnel, and the State Department's international exchange program. We asked each lead agency for information on actions taken from December 2011 through December 2016 to address its assigned activities in the 2011 SIP. Based on the information provided, one analyst analyzed each agency's action(s) to determine whether each task in the SIP had been implemented, was still in progress, or had not been addressed. A separate analyst independently reviewed each assessment and narrative. If there was disagreement on a rating, a third analyst reviewed that information and made a final determination on the rating. Upon preliminary completion of the appendix table, we sent the table to DHS, DOJ, FBI, and NCTC and incorporated technical comments as appropriate. The results of this assessment are shown in appendix III.
To determine the extent to which the federal government has developed a strategy to implement CVE activities and the CVE Task Force has developed a process for assessing overall progress, we reviewed the National Strategy for Empowering Local Partners to Prevent Violent Extremism in the United States, the 2011 and 2016 Strategic Implementation Plans for the strategy, and other documents related to the creation and activities of the CVE Task Force. Specifically, we reviewed these documents to identify whether measurable outcomes and associated metrics had been defined. We interviewed officials from the stakeholder agencies, including DHS, DOJ, the Department of Education, the Department of Health and Human Services, FBI, and NCTC, to discuss their approaches to CVE and their roles and responsibilities as part of the federal CVE effort. We compared the practices of the Task Force to selected leading practices of multi-agency collaborative efforts identified in prior GAO work as well as selected practices in the GPRA Modernization Act of 2010. Practices were selected for comparison based on their applicability to the CVE Task Force.
For context and perspectives on how CVE activities were implemented in local areas, we interviewed a non-generalizable group of community organizations selected based on their location in the three pilot cities that have adopted CVE frameworks: Los Angeles, California; Boston, Massachusetts; and Minneapolis-St. Paul, Minnesota. We conducted this performance audit from October 2015 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
This appendix provides details on the violent extremist attacks in the United States based on the U.S. Extremist Crime Database (ECDB) data and as described in the background section of this report. Specifically, tables 1 and 2 show a description, date, location, and number of victim fatalities for each far right and radical Islamist attack between September 12, 2001, and December 31, 2016. During this period, no persons in the United States were killed in attacks carried out by persons believed to be motivated by extremist environmental beliefs, extremist "animal liberation" beliefs, or extremist far left beliefs. The information on these attacks, including the motivations of the attackers, is from the ECDB, maintained by the National Consortium for the Study of Terrorism and Responses to Terrorism (START) at the University of Maryland. START is a Department of Homeland Security (DHS) Center of Excellence. The ECDB has tracked violent extremist incidents in the United States since 1990. For our analysis, we included the time period from September 12, 2001, through December 31, 2016, to show violent extremist attacks that have occurred since the September 11, 2001, attacks. We assessed the reliability of this data source through review of database documentation and interviews with the ECDB principal investigators. We discussed cases with the ECDB investigators to clarify details as needed. We determined that this data source was sufficiently reliable for providing background information on the problem of violent extremism in the United States, including the number of attacks and fatalities by ideological motivation (far right or radical Islamist), year, and location.
Far right violent extremist attackers are characterized by the ECDB as having beliefs that include some or all of the following:
● Fiercely nationalistic (as opposed to universal and international in orientation);
● Suspicious of centralized federal authority;
● Reverent of individual liberty (especially the right to own guns and to be free of taxes);
● Belief in conspiracy theories that involve a grave threat to national sovereignty and/or personal liberty;
● Belief that one's personal and/or national "way of life" is under attack and is either already lost or that the threat is imminent; and
● Belief in the need to be prepared for an attack either by participating in or supporting the need for paramilitary preparations and training or survivalism.
In addition, according to the ECDB, many persons holding violent extremist far right views express support for some version of white supremacy, the Ku Klux Klan, and neo-Nazism.
According to the ECDB, attackers with violent radical Islamist beliefs were generally those who professed some form of belief in or allegiance to the Islamic State of Iraq and Syria (ISIS), al-Qa'ida, or other (radical) Islamist-associated terrorist entities. The ECDB's determination of these beliefs is based on statements made by attackers prior to, during, or after their attacks that showed a belief in violent extremist interpretations of Islam, or evidence gathered by police and other sources about the attackers. According to the ECDB, all information in the database is collected from publicly available sources, including mass media reports. The ECDB analyzes this information using a standardized and consistent methodology to characterize each attack in terms of its ideological motivation. In addition, the ECDB rates the confidence in this assessment of ideological motivations using standard definitions of the factors that determine a confidence level on a scale from 0 to 4, where 0 is the lowest level of confidence and 4 is the highest level of confidence. During our reliability assessment, we determined that the far right-motivated attacks included 12 incidents in which the evidence about the attacker's motivation was unclear; these 12 incidents were excluded from our analysis.
In August 2011, the White House issued the National Strategy for Empowering Local Partners to Prevent Violent Extremism in the United States, followed by The National Strategy for Empowering Local Partners to Prevent Violent Extremism in the United States, Strategic Implementation Plan (SIP) in December 2011. The SIP designated the Department of Homeland Security (DHS), the Department of Justice (DOJ), the Federal Bureau of Investigation (FBI), and the National Counterterrorism Center (NCTC) as leads or partners for the 44 domestically focused tasks identified in the 2011 SIP. From December 2011 through December 2016, federal agencies implemented 19 tasks, had 23 tasks in progress, and had not yet taken action on 2 tasks. The tasks fall under three categories: community outreach, research and training, and capacity building.
The SIP identified 18 community outreach tasks to be implemented by federal agencies. Community outreach aims to enhance federal engagement and support to local communities that may be targeted by violent extremism. For example, community outreach might include expanding relationships with local businesses and communities to identify or prevent violent extremism or integrating CVE activities into community-oriented policing efforts. We analyzed the implementation of 17 community outreach tasks in the SIP to determine the extent to which they had been implemented by the responsible agency(s).
The SIP identified 20 research and training tasks to be implemented by federal agencies. Research and training relates to understanding the threat of violent extremism, sharing information, and leveraging it to train government and law enforcement officials. We analyzed the implementation of 19 research and training tasks in the SIP to determine the extent to which they had been implemented by the responsible agency(s).
The SIP identified 9 capacity building tasks to be implemented by federal agencies. Capacity building might include outreach to former violent extremists to counter violent narratives. We analyzed the implementation of 8 capacity building tasks in the SIP to determine the extent to which they had been implemented by the responsible agency(s).
In addition to the individual named above, Joseph Cruz (Assistant Director), Eric Hauswirth, Kevin Heinz, Tyler Kent, Thomas Lombardi, Jonathan Tumin, Amber Sinclair, and Adam Vogt made significant contributions to the report.
Violent extremism—generally defined as ideologically, religiously, or politically motivated acts of violence—has been perpetrated in the United States by white supremacists, anti-government groups, and radical Islamist entities, among others. In 2011, the U.S. government developed a national strategy and SIP for CVE aimed at providing information and resources to communities. In 2016, an interagency CVE Task Force led by DHS and DOJ was created to coordinate CVE efforts.
GAO was asked to review domestic federal CVE efforts. This report addresses the extent to which (1) DHS, DOJ, and other key stakeholders tasked with CVE in the United States have implemented the 2011 SIP and (2) the federal government has developed a strategy to implement CVE activities, and the CVE Task Force has assessed progress. GAO assessed the status of activities in the 2011 SIP; interviewed officials from agencies leading CVE efforts and a non-generalizable group of community-based entities selected from cities with CVE frameworks; and compared Task Force activities to selected best practices for multi-agency efforts.
As of December 2016, the Department of Homeland Security (DHS), Department of Justice (DOJ), Federal Bureau of Investigation, and National Counterterrorism Center had implemented 19 of the 44 domestically focused tasks identified in the 2011 Strategic Implementation Plan (SIP) for countering violent extremism (CVE) in the United States. Twenty-three tasks were in progress, and no action had yet been taken on 2 tasks. The 44 tasks aim to address three core CVE objectives: community outreach, research and training, and capacity building. Implemented tasks include, for example, DOJ conducting CVE outreach meetings in communities targeted by violent extremism and DHS integrating CVE content into law enforcement counterterrorism training. Tasks in progress include, for example, DHS building relationships with the social media industry and increasing the training available to communities to counter violent extremists online. Tasks that had not yet been addressed include implementing CVE activities in prisons and learning from former violent extremists. Federal CVE efforts aim to educate and prevent radicalization before a crime or terrorist act transpires, and differ from counterterrorism efforts such as collecting evidence and making arrests before an event has occurred.
The federal government does not have a cohesive strategy or process for assessing the overall CVE effort. Although GAO was able to determine the status of the 44 CVE tasks, it was not able to determine if the United States is better off today than it was in 2011 as a result of these tasks. This is because no cohesive strategy with measurable outcomes has been established to guide the multi-agency CVE effort. Such a strategy could help ensure that the individual actions of stakeholder agencies are measurable and contribute to the overall goals of the federal government's CVE effort. The federal government also has not established a process by which to evaluate the effectiveness of the collective CVE effort. The CVE Task Force was established in part to evaluate and assess CVE efforts across the federal government, but has not established a process for doing so. Evaluating the progress and effectiveness of the overall federal CVE effort could better help identify successes, gaps, and resource needs across stakeholder agencies.
GAO recommends that DHS and DOJ direct the CVE Task Force to (1) develop a cohesive strategy with measurable outcomes and (2) establish a process to assess the overall progress of CVE efforts. DHS and DOJ concurred with both recommendations and DHS described the CVE Task Force's planned actions for implementation.
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. Over time, the use of IT has become increasingly crucial to the department’s effort to provide benefits and services. VA relies on its systems for medical information and records for veterans, as well as for processing benefit claims, including compensation and pension and education benefits. In reporting on VA’s IT management over the past several years, we have highlighted challenges the department has faced in enabling its employees to help veterans obtain services and information more quickly and effectively while also safeguarding personally identifiable information. A major challenge was that the department’s information systems and services were highly decentralized, giving the administrations a majority of the IT budget. In addition, VA’s policies and procedures for securing sensitive information needed to be improved and implemented consistently across the department. As we have previously pointed out, it is crucial for the department CIO to ensure that well-established and integrated processes for leading, managing, and controlling investments in information systems and programs are followed throughout the department. Similarly, a contractor’s assessment of VA’s IT organizational alignment, issued in February 2005, noted the lack of control over how and when money is spent. The assessment noted that the focus of department-level management was only on reporting expenditures to the Office of Management and Budget and Congress, rather than on managing these expenditures within the department. In response to the challenges that we and others have noted, the department officially began its effort to provide the CIO with greater authority over IT in October 2005. At that time, the Secretary issued an executive decision memorandum granting approval for the development of a new management structure for the department. According to VA, its goals in moving to centralized management are to enable the department to perform better oversight of the standardization, compatibility, and interoperability of systems, as well as to have better overall fiscal discipline for the budget. In February 2007, the Secretary approved the department’s new organizational structure, which includes the Assistant Secretary for Information and Technology, who serves as VA’s CIO. As shown in figure 1, the CIO is supported by a principal deputy assistant secretary and five deputy assistant secretaries—new senior leadership positions created to assist the CIO in overseeing functions such as cyber security, IT portfolio management, systems development, and IT operations. In addition, the Secretary approved an IT governance plan in April 2007 that is intended to enable the Office of Information and Technology to centralize its decision making. The plan describes the relationship between IT governance and departmental governance and the approach the department intends to take to enhance IT governance. The department also made permanent the transfer of its entire IT workforce under the CIO, consisting of approximately 6,000 personnel from the administrations. Figure 2 shows a timeline of the realignment effort. 
Although VA has fully addressed two of six critical success factors that we identified as crucial to a major organizational transformation such as the realignment, it has not fully addressed the other four factors, and it has not kept to its scheduled timelines for implementing new management processes that are the foundation of the realignment. Consequently, the department is in danger of not being able to meet its target of completing the realignment in July 2008. In addition, although it has prioritized its implementation of the new management processes, none has yet been implemented. In our recent report, we made six recommendations to ensure that VA's realignment is successfully accomplished; the department generally concurred with our recommendations and stated that it had actions planned to address them.
We have identified critical factors that organizations need to address in order to successfully transform themselves to be more results oriented, customer focused, and collaborative in nature. Large-scale change management initiatives are not simple endeavors and require the concentrated efforts of both leadership and employees to realize intended synergies and to accomplish new organizational goals. There are a number of key practices that can serve as the basis for federal agencies to transform their cultures in response to governance challenges, such as those that an organization like VA might face when transforming to a centralized IT management structure. The department has fully addressed two of six critical success factors that we identified (see table 1).
Ensuring commitment from top leadership. The department has fully addressed this success factor. As described earlier, the Secretary of VA has fully supported the realignment. He approved the department's new organizational structure and provided resources for the realignment effort. However, the Secretary recently submitted his resignation, indicating that he intended to depart by October 1, 2007. While it is unclear what effect the Secretary's departure will have on the realignment, the impending departure underscores the need for consistent support from top leadership through the implementation of the realignment, to ensure that its success is not at risk in the future.
Establishing a governance structure to manage resources. The department has fully addressed this success factor. The department has established three governance boards, which have begun operation. The VA IT Governance Plan, approved in April 2007, states that the establishment and operation of these boards will assist in providing the department with more cost-effective use of IT resources and assets. The department also has plans to further enhance the governance structure in response to operational experience. The department found that the boards' responsibilities need to be more clearly defined in the IT Governance Plan to avoid overlap. That is, one board (the Business Needs and Investment Board) was involved in the budget formulation for fiscal year 2009, but budget formulation is also the responsibility of the Deputy Assistant Secretary for IT Resource Management, who is not a member of this board. According to the Principal Deputy Assistant Secretary for Information and Technology, the department is planning to update its IT Governance Plan within a year to include more specificity on the role of the governance boards in VA's budget formulation process. Such an update could further improve the structure's effectiveness.
Linking IT strategic plan to organization strategic plan. The department has partially addressed this success factor. VA has drafted an IT Strategic Plan that provides a course of action for the Office of Information and Technology over 5 years and addresses how IT will contribute to the department’s strategic plan. According to the Deputy Director of the Quality and Performance Office, the draft IT strategic plan should be formally approved in October 2007. Finalizing the plan is essential to helping ensure that leadership understands the link between VA’s organizational direction and how IT is aligned to meet its goals. Using workforce strategic management to identify proper roles for all employees. The department has partially addressed this success factor. The department has begun to identify job requirements, design career paths, and determine recommended training for the staff that were transferred as part of the realignment. According to a VA official, the department identified 21 specialized job activities, such as applications software and end user support, and has defined competency and proficiency targets for 6 of these activities. Also, by November 2007, VA expects to have identified the career paths for approximately 5,000 of the 6,000 staff that have been centralized under the CIO. Along with the development of the competency and proficiency targets, the department has identified recommended training based on grade level. However, the department has not yet established a knowledge and skills inventory to determine what skills are available in order to match roles with qualifications for all employees within the new organization. It is crucial that the department take the remaining steps to fully address this critical success factor, so that the staff transferred to the Office of Information and Technology are placed in positions that best suit their knowledge and skills, and the organization has the personnel resources capable of developing and delivering the services required. Communicating change to all stakeholders. The department has partially addressed this success factor. The department began publishing a bimonthly newsletter in June to better communicate with all staff about Office of Information and Technology activities, including the realignment. However, the department has not yet fully staffed the Business Relationship Management Office or identified its leadership. This office is to serve as the single point of contact between the Office of Information and Technology and the administrations; in this role, it provides the means for the Office of Information and Technology to understand customer requirements, promote services to customers, and monitor the quality of the delivered services. A fully staffed and properly led Business Relationship Management Office is important to ensure effective communication between the Office of Information and Technology and the administrations. Communicating the changed roles and responsibilities of the central IT organization versus the administrations is one of the important functions of the Business Relationship Management Office. These changes are crucial to software development, among other things. Before the centralization of the management structure, each of the administrations was responsible for its own software development. For example, the department’s health information system—the Veterans Health Information System and Technology Architecture (VistA)—was developed in a decentralized environment. 
The developers and the doctors, closely collaborating at local facilities, developed and adapted this system for their own specific clinic needs. The result of their efforts is an electronic medical record that has been fully embraced by the physicians and nurses. However, the decentralized approach has also resulted in each site running a stand-alone version of VistA that is costly to maintain; in addition, data at the sites are not standardized, which impedes the ability to exchange computable information. Under the new organization structure, approval of development changes for VistA will be centralized at the Veterans Health Administration headquarters and then approved for development and implementation by the Office of Information and Technology. The communications role of the Business Relationship Management Office is thus an important part of the processes needed to ensure that users’ requirements will be addressed in system development. Dedicating an implementation team to manage change. The department has not addressed this success factor. A dedicated implementation team that is responsible for the day-to-day management of a major change initiative is critical to ensure that the project receives the focused, full-time attention needed to be sustained and successful. VA has not identified such an implementation team to manage the realignment. Rather, the department is currently managing the realignment through two organizations: the Process Improvement Office under the Quality and Performance Office (which will lead process improvements) and the Organizational Management Office (which will advise and assist the CIO during the final transformation to a centralized structure). However, the Executive Director of the Organizational Management Office has recently resigned his position, leaving one of the two responsible offices without leadership. In our view, having a dedicated implementation team to manage major change initiatives is crucial to successful implementation of the realignment. An implementation team can assist in tracking implementation goals and identifying performance shortfalls or schedule slippages. The team could also provide continuity and consistency in the face of any uncertainty that could potentially result from the Secretary’s resignation. Accordingly, in our recent report we recommended that the department dedicate an implementation team to be responsible for change management throughout the transformation and that it establish a schedule for the implementation of the management processes. As the foundation for its realignment, VA plans to implement 36 management processes in five key areas: enterprise management, business management, business application management, infrastructure, and service support. These processes, which address all aspects of IT management, were recommended by the department’s realignment contractor and are based on industry best practices. According to the contractor, they are a key component of the realignment effort as the Office of Information and Technology moves to a process-based organization. Additionally, the contractor noted that with a system of defined processes, the Office of Information and Technology could quickly and accurately change the way IT supports the department. The department had planned to begin implementing the 36 management processes in March 2007; however, as of early May 2007, it had only begun pilot testing two of these processes. 
The Deputy Director of the Quality and Performance Office reported that the initial implementation of the first two processes will begin in the second quarter of 2008. The Principal Deputy Assistant Secretary for Information and Technology acknowledged that the department is behind schedule for implementing the processes, but it has prioritized the processes and plans to implement them in three groups, in order of priority (see attachment 1 for a description of the processes and their implementation priority). According to the Deputy Director of the Quality and Performance Office, the approach and schedule for process implementation are currently under review. Work on the 10 processes associated with the first group is under way, and implementation plans and time frames are being revised. This official told us that initial planning meetings have occurred and primary points of contact have been designated for the financial management and portfolio management processes, which are to be implemented as part of the first group. The department also noted that it will work to meet its target date of July 2008 for the realignment, but that all of the processes may not be fully implemented at that time. According to the Principal Deputy Assistant Secretary for Information and Technology, the department has fallen behind schedule with process implementation for two reasons:
● The department underestimated the amount of work required to redefine the 36 process areas. Process charters for each of the processes were developed by a VA contractor and provide an outline for operation under the new management structure. Based on its initial review, the department found that the processes are complicated and multilayered, involving multiple organizations. In addition, the contractor provided process charters and descriptions based on a commercial, for-profit business model, and so the department must readjust them to reflect how VA conducts business.
● With the exception of IT operations, the Veterans Health Administration operates in a decentralized manner. For example, the budget and spending for the medical centers are under the control of the medical center directors. In addition, the Office of Information and Technology has ownership of only about 30 percent of all activities within the financial management process. For example, some elements within this process area (such as tracking and reporting on expenditures) are the responsibility of the department's Office of Management; this office is accountable for VA's entire budget, including IT dollars. Thus, the Office of Information and Technology has no authority to direct the Office of Management to take particular actions to improve specific financial management activities.
The department faces the additional obstacle that it has not yet staffed crucial leadership positions that are vital to the implementation of the management processes. As part of the new organizational structure, the department identified 25 offices whose leaders will report to the five deputy assistant secretaries and are responsible for carrying out the new management processes in daily operations. However, as of early September, 7 of the leadership positions for these 25 offices were vacant, and 4 were filled in an acting capacity. According to the Principal Deputy Assistant Secretary for Information and Technology, hiring personnel for senior leadership positions has been more difficult than anticipated.
With these leadership positions remaining vacant, the department will face increased difficulties in supporting and sustaining the realignment through to its completion. Until the improved processes have been implemented, IT programs and initiatives will continue to be managed under previously established processes that have resulted in persistent management challenges. Without the standardization that would result from the implementation of the processes, the department risks cost overruns and schedule slippages for current initiatives, such as VistA modernization, for which about $682 million has been expended through fiscal year 2006.
Recognizing the importance of securing federal systems and data, Congress passed the Federal Information Security Management Act (FISMA) in December 2002, which sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. Using a risk-based approach to information security management, the act requires each agency to develop, document, and implement an agencywide information security program for the data and systems that support the operations and assets of the agency. According to FISMA, the head of each agency has responsibility for delegating to the agency CIO the authority to ensure compliance with the security requirements in the act. To carry out the CIO's responsibilities in the area, a senior agency official is to be designated chief information security officer (CISO).
The May 2006 theft of a computer and external hard drive (which contained personally identifiable information on approximately 26.5 million veterans and U.S. military personnel) from the home of a VA employee prompted Congress to pass the Veterans Benefits, Health Care, and Information Technology Act of 2006. Under the act, VA's CIO is responsible for establishing, maintaining, and monitoring departmentwide information security policies, procedures, control techniques, training, and inspection requirements as elements of the departmental information security program. The act also includes provisions to further protect veterans and service members from the misuse of their sensitive personally identifiable information. In the event of a security incident involving personally identifiable information, VA is required to conduct a risk analysis, and on the basis of the potential for compromise of personally identifiable information, the department may provide security incident notifications, fraud alerts, credit monitoring services, and identity theft insurance. Congress is to be informed of security incidents involving the loss of personally identifiable information.
In a report released last week, we stated that although VA has made progress in addressing security weaknesses, it has not yet fully implemented key recommendations to strengthen its information security practices. It has not implemented two of our four previous recommendations and 20 of 22 recommendations made by the department's inspector general. Among the recommendations not implemented are our recommendation that it complete a comprehensive security management program and inspector general recommendations to appropriately restrict access to data, networks, and VA facilities; ensure that only authorized changes are made to computer programs; and strengthen critical infrastructure planning to ensure that information security requirements are addressed.
Because these recommendations have not yet been implemented, unnecessary risk exists that personally identifiable information of veterans and other individuals, such as medical providers, will be exposed to data tampering, fraud, and inappropriate disclosure. The need to fully implement GAO and IG recommendations to strengthen information security practices is underscored by the prevalence of security incidents involving the unauthorized disclosure, misuse, or loss of personal information of veterans and other individuals (see table 2). These incidents were partially due to weaknesses in the department’s security controls. In these incidents, which include the May 2006 theft of computer equipment from an employee’s home (mentioned earlier) and the theft of equipment from department facilities, millions of people had their personal information compromised. While the increase in reported incidents in 2006 reflects a heightened awareness on the part of VA employees of their responsibility to report incidents involving loss of personal information, it also indicates that vulnerabilities remain in security controls designed to adequately safeguard information. Since the May 2006 security incident, VA has begun or has continued several major initiatives to strengthen information security practices and secure personally identifiable information within the department. These initiatives include the realignment of its IT management structure, as discussed earlier. Under the realignment, the management structure for information security has changed. In the new organization, the responsibility for managing the program lies with the CISO/Director of Cyber Security (the CISO position has been vacant since June 2006, with the CIO acting in this capacity), while the responsibility for implementing the program lies with the Director of Field Operations and Security. Thus, responsibility for information security functions within the department is divided. VA officials indicated that the heads of the two organizations are communicating about the department’s implementation of security policies and procedures, but this communication is not defined as a role or responsibility for either position in the new management organization book, nor is there a documented process in place to coordinate the management and implementation of the security program. Both of these activities are key security management practices. Without a documented process, policies or procedures could be inconsistently implemented throughout the department, which could prevent the CISO from effectively ensuring departmentwide compliance with FISMA. Until the process and responsibilities for coordinating the management and implementation of IT security policies and procedures throughout the department are clearly documented, VA will have limited assurance that the management and implementation of security policies and procedures are effectively coordinated and communicated. Developing and documenting these policies and procedures are essential for achieving an improved and effective security management process under the new centralized management model. In addition to the realignment initiative, the department also has others under way to address security weaknesses. These include developing an action plan to correct identified weaknesses; establishing an information protection program; improving its incident management capability; and establishing an office to be responsible for oversight of IT within the department. 
However, implementation shortcomings limit the effectiveness of these initiatives. For example:
● VA’s action plan has task owners assigned and is updated biweekly, but department officials have not ensured that adequate progress has been made in resolving items in the plan. Specifically, VA has extended the completion date at least once for 38 percent of the plan items, and it did not have a process in place to validate the closure of items. In addition, although numerous items in the plan were to develop or revise a policy or procedure, 87 percent of these items did not have a corresponding task with an established time frame for implementation.
● VA installed encryption software on laptops at facilities inconsistently, and VA’s directive on encryption did not address the encryption of laptops that were categorized as medical devices, which make up a significant portion of the laptops at Veterans Health Administration facilities. In addition, the department has not yet fully implemented the acquisition of software tools across the department.
● VA has improved its incident management capability since May 2006 by realigning and consolidating two incident management centers, and it has made notable improvement in notifying US-CERT (the U.S. Computer Emergency Readiness Team), the Secretary, and Congress of major security incidents. However, the time it took to send notification letters to affected individuals increased for some incidents because VA did not have adequate procedures for coordinating incident response and mitigation activities with other agencies and obtaining up-to-date contact information.
● VA established the Office of IT Oversight and Compliance to conduct assessments of its facilities to determine the adequacy of internal controls, investigate compliance with laws, policies, and directives, and ensure that proper safeguards are maintained; however, the office lacked a process to ensure that its examination of internal controls is consistent across VA facilities.
Until the department addresses recommendations to resolve identified weaknesses and implements the major initiatives it has undertaken, it will have limited assurance that it can protect its systems and information from unauthorized use, disclosure, disruption, or loss. In our report released last week, we made 17 recommendations to assist the department in improving its ability to protect its information and systems. These included recommendations that VA clearly define coordination responsibilities for the Director of Field Operations and Security and the Director of Cyber Security and develop and implement a process for these officials to coordinate on the implementation of IT security policies and procedures throughout the department. We also made recommendations to improve the department’s ability to protect its information and systems, including the development of various processes and procedures to ensure that tasks in the department’s security action plans have time frames for implementation. In summary, effectively instituting a realignment of the Office of Information and Technology is essential to ensuring that VA’s IT programs achieve their objectives and that the department has a solid and sustainable approach to managing its IT investments. VA continues to work on improving such programs as information security and systems development. 
Yet we continue to see management weaknesses in these programs and initiatives (many of a long-standing nature), which are the very weaknesses that VA aims to alleviate with its reorganized management structure. Until the department fully addresses the critical success factors that we identified and carries out its plans to establish a comprehensive set of improved management processes, the impact of this vital undertaking will be diminished. Further, the department may not achieve a solid and sustainable foundation for its new IT management structure. Mr. Chairman and members of the committee, this concludes our statement. We would be happy to respond to any questions that you may have at this time. For more information about this testimony, please contact Valerie C. Melvin at (202) 512-6304 or Gregory C. Wilshusen at (202) 512-6244 or by e-mail at melvinv@gao.gov or wilshuseng@gao.gov. Key contributors to this testimony were Barbara Oliver, Assistant Director; Charles Vrabel, Assistant Director; Barbara Collier, Nancy Glover, Valerie Hopkins, Scott Pettis, J. Michael Resser, and Eric Trout.
In the following table, the priority group number reflects the order in which the department plans to implement each group of processes, with 1 being the first priority group.
Priority group 2: Addresses long- and short-term objectives, business direction, and their impact on IT, the IT culture, communications, information, people, processes, technology, development, and partnerships.
Priority group 2: Defines a structure of relationships and processes to direct and control the organization.
Priority group (see note a): Identifies potential events that may affect the organization and manages risk to be within acceptable levels so that reasonable assurance is provided regarding the achievement of organization objectives.
Priority group 2: Creates, maintains, promotes, and governs the use of IT architecture models and standards across and within the change programs of an organization.
Priority group 1: Assesses all applications, services, and IT projects that consume resources in order to understand their value to the IT organization.
Priority group 2: Manages the department’s information security program, as mandated by the Federal Information Security Management Act (FISMA) of 2002.
Priority group 3: Generates ideas, evaluates and selects ideas, develops and implements innovations, and continuously recognizes innovators and learning from the experience.
Priority group 1: Plans, organizes, monitors, and controls all aspects of a project in a continuous process so that it achieves its objectives.
Priority group 1: Manages and prioritizes all requests for additional and new technology solutions arising from a customer’s needs.
Priority group 3: Determines whether and how well customers are satisfied with the services, solutions, and offerings from the providers of IT.
Priority group 1: Provides sound stewardship of the monetary resources of the organization.
Priority group 3: Establishes a pricing mechanism for the IT organization to sell its services to internal or external customers and to administer the contracts associated with the selling of those services.
Priority group 3: Enables the IT organization to understand the marketplace it serves, to identify customers, to “market” to these customers, to generate “marketing” plans for IT services and support the “selling” of IT services to internal customers.
Priority group 2: Ensures adherence with laws and regulations, internal policies and procedures, and stakeholder commitments.
Priority group 1: Maintains information regarding technology assets, including leased and purchased assets, licenses, and inventory.
Priority group 2: Enables an organization to provide the optimal mix of staffing (resources and skills) needed to provide the agreed-on IT services at the agreed-on service levels.
Priority group 2: Manages service-level agreements and performs the ongoing review of service achievements to ensure that the required and cost-justifiable service quality is maintained and gradually improved.
Priority group 1: Ensures that agreed-on IT services continue to support business requirements in the event of a disruption to the business.
Priority group 3: Develops and exercises working relationships between the IT organization and suppliers in order to make available the external services and products that are required to support IT service commitments to customers.
Priority group 3: Promotes an integrated approach to identifying, capturing, evaluating, categorizing, retrieving, and sharing all of an organization’s information assets.
Priority group 2: Translates provided customer (business) requirements and IT stakeholder-generated requirements/constraints into solution-specific terms, within the context of a defined solution project or program.
Priority group 1: Creates a documented design from agreed-on solution requirements that describes the behavior of solution elements, the acceptance criteria, and agreed-to measurements.
Priority group 3: Brings together all the elements specified by a solution design via customization, configuration, and integration of created or acquired solution components.
Priority group (see note a): Validates that the solution components and integrated solutions conform to design specifications and requirements before deployment.
Priority group 2: Addresses the delivery of operational services to IT customers by matching resources to commitments and employing the IT infrastructure to conduct IT operations.
Priority group 3: Ensures that all data required for providing and supporting operational service are available for use and that all data storage facilities can handle normal, expected fluctuations in data volumes and other parameters within their designed tolerances.
Priority group 3: Identifies and prioritizes infrastructure, service, business, and security events, and establishes the appropriate response to those events.
Priority group 3: Plans, measures, monitors, and continuously strives to improve the availability of the IT infrastructure and supporting organization to ensure that agreed-on requirements are consistently met.
Priority group 1: Creates and maintains a physical environment that houses IT resources and optimizes the capabilities and costs of that environment.
Priority group 3: Matches the capacity of the IT services and infrastructure to the current and future identified needs of the business.
Priority group 1: Manages the life cycle of a change request and activities that measure the effectiveness of the process and provides for its continued enhancement.
Priority group 1: Controls the introduction of releases (that is, changes to hardware and software) into the IT production environment through a strategy that minimizes the risk associated with the changes.
Priority group 1: Identifies, controls, maintains, and verifies the versions of configuration items and their relationships in a logical model of the infrastructure and services.
Priority group 3: Manages each user interaction with the provider of IT service throughout its life cycle.
Priority group 2: Restores a service affected by any event that is not part of the standard operation of a service that causes or could cause an interruption to or a reduction in the quality of that service.
Priority group 2: Resolves problems affecting the IT service, both reactively and proactively.
Note a: The department indicated that this process had completed a pilot but did not assign it to a priority group.
This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs (VA) has encountered numerous challenges in managing its information technology (IT) and securing its information systems. In October 2005, the department initiated a realignment of its IT program to provide greater authority and accountability over its resources. The May 2006 security incident highlighted the need for additional actions to secure personal information maintained in the department's systems. In this testimony, GAO discusses its recent reporting on VA's realignment effort as well as actions to improve security over its information systems. To prepare this testimony, GAO reviewed its past work on the realignment and on information security, and it updated and supplemented its analysis with interviews of VA officials. VA has fully addressed two of six critical success factors GAO identified as essential to a successful transformation, but it has yet to fully address the other four, and it has not kept to its scheduled timelines for implementing new management processes that are the foundation of the realignment. That is, the department has ensured commitment from top leadership and established a governance structure to manage resources, both of which are critical success factors. However, the department continues to operate without a single, dedicated implementation team to manage the realignment; such a dedicated team is important to oversee the further implementation of the realignment, which is not expected to be complete until July 2008. Other challenges to the success of the realignment include delays in staffing and in implementing improved IT management processes that are to address long-standing weaknesses. The department has not kept pace with its schedule for implementing these processes, having missed its original scheduled time frames. Unless VA dedicates a team to oversee the further implementation of the realignment, including defining and establishing the processes that will enable the department to address its IT management weaknesses, it risks delaying or missing the potential benefits of the realignment. VA has begun or continued several major initiatives to strengthen information security practices and secure personally identifiable information within the department, but more remains to be done. These initiatives include continuing the department's efforts to reorganize its management structure; developing a remedial action plan; establishing an information protection program; improving its incident management capability; and establishing an office responsible for oversight and compliance of IT within the department. However, although these initiatives have led to progress, their implementation has shortcomings. For example, although the management structure for information security has changed under the realignment, improved security management processes have not yet been completely developed and implemented, and responsibility for the department's information security functions is divided between two organizations, with no documented process for the two offices to coordinate with each other. In addition, VA has made limited progress in implementing prior security recommendations made by GAO and the department's Inspector General, having yet to implement 22 of 26 recommendations. 
Until the department addresses shortcomings in its major security initiatives and implements prior recommendations, it will have limited assurance that it can protect its systems and information from the unauthorized disclosure, misuse, or loss of personally identifiable information.
Since issuing its first loan guarantee in 2009, DOE’s Loan Programs Office, which administers the LGP and ATVM program, has issued a total of more than $30 billion in loans and loan guarantees. The LGP was originally designed to address a fundamental impediment to innovative and advanced energy projects: securing funding. Projects that entail risks—either that new technology will not perform as expected or that the borrower or project itself will not perform as expected—can face difficulty securing enough affordable financing to survive the period between development and commercialization of innovative technologies. Because the risks that commercial lenders must assume to support new technologies can put the cost of private financing out of reach, companies may not be able to commercialize innovative technologies without the federal government’s financial support. To accurately account for the expected and actual costs of federal loan programs, agencies estimate the costs of a program in accordance with the Federal Credit Reform Act of 1990 by calculating credit subsidy costs for loans and loan guarantees, excluding administrative costs. DOE estimates the credit subsidy cost for each loan or loan guarantee by, among other things, projecting disbursements to the borrower as well as interest and principal repayments from the borrower, and adjusting these projected cash flows for the risk of default and other factors. Paying the credit subsidy cost is either the responsibility of the borrower or the program, depending on whether Congress has provided appropriations to cover such costs. For the LGP, Title XVII of the Energy Policy Act of 2005 (EPAct)— specifically section 1703—authorized DOE to guarantee loans for energy projects that (1) use new or significantly improved technologies as compared with commercial technologies already in service in the United States and (2) avoid, reduce, or sequester emissions of air pollutants or man-made greenhouse gases. Congress provided DOE $34 billion in loan guarantee authority for section 1703 loan guarantees. Initially, Congress provided no appropriation to cover the credit subsidy costs of loan guarantees under section 1703, requiring all borrowers receiving a loan guarantee to pay to offset the credit subsidy costs of their own projects. In February 2009, Congress passed the American Recovery and Reinvestment Act of 2009 (Recovery Act), which amended Title XVII by adding section 1705, under which DOE could guarantee loans for projects using existing commercial technologies. For section 1705, the Recovery Act provided $2.5 billion to cover credit subsidy costs, which DOE estimated would suffice to cover those costs for about $18 billion in loan guarantees. In April 2011, Congress appropriated $170 million to pay credit subsidy costs for a subset of projects under section 1703, specifically, energy efficiency and renewable energy projects. DOE estimated this appropriation would cover those costs for about $848 million in loan guarantees. As table 1 shows, DOE had about $28.7 billion remaining in loan guarantee authority under section 1703 as of November 2014. At that time, it also had three open solicitations for loan guarantee applications that accounted for much of that remaining authority. The ATVM loan program remains open to applications on a rolling basis and had about $16 billion remaining in loan authority as of November 2014. 
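To illustrate the mechanics of such an estimate, the sketch below computes a credit subsidy cost as the net present value, at disbursement, of expected cash flows between the government and a borrower, with repayments adjusted for an assumed default probability and recovery rate. The function, parameter names, and all dollar figures are hypothetical illustrations and are not drawn from DOE's actual credit subsidy model, which is considerably more detailed.

```python
# Minimal sketch of a credit subsidy calculation in the spirit of the Federal
# Credit Reform Act: the subsidy cost is the net present value, at disbursement,
# of cash flows to and from the government, adjusted for the risk of default.
# All rates, probabilities, and dollar amounts here are hypothetical.

def credit_subsidy_cost(disbursement, annual_payment, years,
                        discount_rate, annual_default_prob, recovery_rate):
    """Expected net cost to the government of one loan or loan guarantee."""
    surviving = 1.0       # probability the loan is still performing
    pv_receipts = 0.0     # present value of expected repayments
    for t in range(1, years + 1):
        defaulting = surviving * annual_default_prob   # expected new defaults in year t
        surviving -= defaulting
        remaining_payments = years - t + 1
        expected_cash = (surviving * annual_payment                    # scheduled payment
                         + defaulting * recovery_rate * annual_payment * remaining_payments)
        pv_receipts += expected_cash / (1 + discount_rate) ** t
    return disbursement - pv_receipts   # positive result = expected cost to the government

# Illustrative only: a $100 million loan repaid over 20 years.
cost = credit_subsidy_cost(disbursement=100e6, annual_payment=7.5e6, years=20,
                           discount_rate=0.03, annual_default_prob=0.02,
                           recovery_rate=0.5)
print(f"Estimated credit subsidy cost: ${cost / 1e6:.1f} million")
```

Under this simplified approach, a higher assumed default probability or a lower recovery rate increases the estimated subsidy cost, which is why defaults can drive upward reestimates of a portfolio's cost.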
DOE has made efforts to improve its loan program implementation and oversight and, to date, has taken actions in response to 15 of our 24 prior recommendations. (See app. I for details on the status of each of the 24 recommendations we have made concerning the DOE loan programs.) In 2007, 2008, and 2010—which covered the early stages of the LGP—we made 15 recommendations to address numerous issues where DOE had moved forward with the program before key elements were in place. DOE implemented 11 of our 15 recommendations from this period. For example:
● In our February 2007 report, we found that DOE’s actions had focused on expediting program implementation—such as soliciting preapplications for loan guarantees—rather than ensuring the department had in place the critical policies, procedures, and mechanisms necessary to better ensure the program’s success. We made five recommendations addressing these concerns. DOE agreed with and implemented all five of these recommendations by establishing key policies and procedures and issuing final program regulations, among other things.
● In contrast, in our July 2010 report, we found that, among other things, DOE had favored some applicants by, for example, deviating from its stated review procedures. DOE did not concur with—and has not taken actions to address—our recommendation that it take steps to ensure that its implementation of the LGP treats applicants consistently.
As Congress expanded the DOE loan programs to include section 1705 projects and ATVM, we issued additional reports in 2011, 2012, and 2014 highlighting our concerns about DOE making loans and disbursing funds without having sufficient expertise and performance measures, among other things. These reports included recommendations, made from February 2011 through May 2014, to address these issues. To date, DOE has implemented four of the nine recommendations but has not addressed the remaining five. For example:
● In February 2011, we found that DOE was using ATVM staff with largely financial, and not technical, expertise to evaluate the progress of projects to produce more fuel-efficient passenger vehicles and their components. We recommended that DOE accelerate efforts to engage sufficient engineering expertise to verify that borrowers are delivering projects as required by the loan agreements. DOE implemented our recommendation by changing its budgeting practices for monitoring ATVM loans to better ensure that funds would be available to engage independent engineering expertise; DOE also changed its policy for engaging technical expertise to align with the Title XVII LGP policy.
● Also in our February 2011 report, we found that DOE did not have sufficient performance measures that would enable the department to fully assess whether the ATVM program had achieved its program goals, including protecting taxpayers’ financial interests. We recommended that DOE develop sufficient and quantifiable performance measures for its program goals. DOE disagreed with this recommendation and took no steps to implement it. As a result, Congress does not have important information on whether the funds DOE has spent so far are furthering the program’s goals and, consequently, whether the program warrants continued support.
DOE generally agreed with most of the additional recommendations we made in our March 2012 and May 2014 reports as the programs expanded, but it has not fully implemented them. 
For example, in May 2014 we found that DOE adhered to its monitoring policies inconsistently or not at all because the Loan Programs Office was still developing its organizational structure, including its staffing. We recommended that DOE fully develop its organizational structure by staffing key loan monitoring positions, among other things. DOE agreed and has taken steps to identify key staffing positions but, as of February 2016, most of these positions remain unfilled. Filling these positions would help DOE carry out activities critical to monitoring these loans. In our April 2015 report, we found that DOE estimated the credit subsidy costs of the loans and loan guarantees in its portfolio to be about $2.2 billion as of November 2014, including about $807 million for five loans on which the borrowers had defaulted. At that time, the portfolio consisted of 34 loans and loan guarantees in support of 30 projects in a diverse array of technologies. We also found that administrative costs totaled about $312 million from fiscal year 2008 through fiscal year 2014. The estimated $2.2 billion in credit subsidy costs was a decrease from initial DOE estimates totaling about $4.5 billion, and we found that changes in credit subsidy cost estimates varied by loan program and the type of technology supported by the loans and loan guarantees, and by other factors, such as the availability of a steady stream of revenue for a project. Specifically, defaults on loan guarantees for two solar manufacturing projects and one energy storage project were largely responsible for an increase in the credit subsidy cost estimate for DOE’s LGP portfolio from $1.33 billion (when the loan guarantees were issued) to $1.81 billion as of November 2014. Borrowers also defaulted on two ATVM loans, but the credit subsidy cost estimate for DOE’s ATVM loan program’s portfolio decreased from initial DOE estimates totaling about $3.16 billion to $404 million as of November 2014, mainly because of a significant improvement in the credit rating of one loan. This decrease was enough to more than offset the increases from the defaults in DOE’s overall loan portfolio. See table 2 for changes in DOE’s credit subsidy cost estimates. We found in our April 2015 report that most projects in DOE’s portfolio have completed construction and are in operation—producing power or automobiles, for instance. None of the projects with loans in default had revenue streams that were provided for under long-term contracts for the sale of energy produced by the project pursuant to a power purchase agreement, offtake agreement, or similar contractual language. Power purchase agreements and offtake agreements generally guarantee a stream of revenue to the project owner for 20 or 25 years after the project begins generating electricity, effectively ensuring a buyer for the produced power. In DOE’s portfolio, 21 of the 30 projects supported by the program included power purchase or offtake agreements. Regarding administrative costs, our April 2015 report found that such costs for the programs have totaled about $312 million from fiscal year 2008 through fiscal year 2014, including approximately $251.6 million for LGP and $60.6 million for the ATVM loan program. We also found that, for the LGP, the fees DOE has collected have not been sufficient to cover all of its administrative expenses for the program, in part because the maintenance fees on the current loan guarantees were too low to cover ongoing monitoring costs. 
As a result, some of the administrative expenses have been paid with taxpayer funds. DOE addressed the low maintenance fee levels by changing the fee structure in its new solicitations, announced from December 2013 to December 2014, to allow increased maintenance fees—up to $500,000 per year. DOE officials told us that the new fee structure should allow DOE to cover a greater portion of LGP monitoring costs on new loan guarantees. However, the actual fee amounts will depend on the individual loan guarantees and negotiation of the loan guarantee agreements, making predictions of future fee income a challenge. It is now too early to tell whether DOE’s actions will result in sufficient funds to offset LGP’s future administrative costs. Chairmen Weber and Loudermilk, Ranking Members Grayson and Beyer, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff members have any future questions about this testimony, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Karla Springer, Assistant Director; Michael Krafve; Cynthia Norris; Barbara Timmerman; and Jarrod West. GAO-07-339R Recommendation The Secretary of Energy should ensure that the department, before selecting eligible projects for loan guarantees, establishes policies and procedures to account for loan guarantees. Action taken In May 2007, the Department of Energy (DOE) implemented this recommendation when its Office of Finance and Accounting established standard operating procedures for accounting and reporting for DOE loan programs (SOP 1.4). Among other things, the procedures enable DOE to account for payments received from applicants for administrative costs, which is important because the Energy Policy Act of 2005, which established the Loan Guarantee Program (LGP), requires that borrowers be charged fees to cover DOE’s costs to administer the program. DOE established the procedures before it issued the first loan guarantee in 2010, meeting the intent of our recommendation. The Secretary of Energy should ensure that the department, before selecting eligible projects for loan guarantees, establishes policies and procedures for developing subsidy and administrative cost estimates. In March 2009, DOE issued a Credit Policies and Procedures Manual that lays out policies and procedures for estimating subsidy costs and defines administrative costs. In addition, according to DOE, in November 2008 the Office of Management and Budget approved the LGP’s model for calculating the credit subsidy costs of loan guarantees. DOE’s solicitations describe how it will charge these administrative costs to applicants. These actions meet the intent of our recommendation. The Secretary of Energy should ensure that the department, before selecting eligible projects for loan guarantees, establishes policies and procedures for selecting lenders and loans to guarantee and for monitoring lenders and loans once the guarantees have been issued. Closed - Implemented DOE satisfied our recommendation to establish policies and procedures for selecting lenders and loans to guarantee and for monitoring lenders and loans once the guarantees have been issued. 
On October 23, 2007, and December 4, 2009, DOE issued final rules that incorporated policies and procedures for the issuance of solicitations, submission of applications, and the evaluation of loan guarantee applications. The rules also lay out the requirements for eligible lenders. In addition, on March 5, 2009, DOE issued a credit policies and procedures manual for the program that provides further detail on policies and procedures for selecting lenders and loans to guarantee. The manual also provides policies and procedures for credit monitoring of projects once loan guarantees have been issued. The Secretary of Energy should ensure that the department, before selecting eligible projects for loan guarantees, issues final program regulations that protect the government’s interests, manage risk, and ensure that borrowers are aware of program requirements. Closed - Implemented On October 23, 2007, and December 4, 2009, DOE issued final rules implementing its Title XVII LGP for innovative energy technologies. The rules elaborate on the program established by Title XVII by defining the technologies and types of projects covered by the program, as well as the financial structure required for projects. Issuing a rule is in keeping with the intent of our recommendation to provide greater protection of the government’s interests because this rule, like other regulations, cannot be changed without public or congressional input and carries the force of law. The Secretary of Energy should ensure that the department, before selecting eligible projects for loan guarantees, further defines program goals and objectives tied to outcome measures for determining program effectiveness. Closed - Implemented DOE has taken actions to define program goals and performance measures in order to determine program effectiveness. GAO-08-750 Recommendation The Secretary of Energy should direct the Chief Financial Officer to amend application guidance to clarify the program’s equity requirements to the 16 companies invited to apply for loan guarantees and in future solicitations before substantially reviewing LGP applications. Closed - Implemented DOE substantively addressed our recommendation with its October 2009 and August 2010 solicitations, which provided an expanded definition of equity that also addressed exclusions. The Secretary of Energy should direct the Chief Financial Officer to amend application guidance to further develop and define performance measures and metrics to monitor and evaluate program efficiency, effectiveness, and outcomes before substantially reviewing LGP applications. Closed - Implemented Since our 2008 recommendation, DOE developed nine performance measures to evaluate the program’s efficiency and outcomes, implementing our recommendation. The Secretary of Energy should direct the Chief Financial Officer to amend application guidance to improve the LGP’s full tracking of the program’s administrative costs by developing an approach to track and estimate costs associated with offices that directly and indirectly support the program and including those costs as appropriate in the fees charged to applicants before substantially reviewing LGP applications. In October 2008, the Loans Programs Office (LPO) began using a DOE software system to track administrative costs within the office, including, for example, staff salaries and travel associated with reviewing the applications for various solicitations. 
In addition, DOE staff in the field office that was reviewing the greatest number of loan guarantee applications reached an agreement with the program concerning performance of and reimbursement for this work. The Secretary of Energy should direct the Chief Financial Officer to amend application guidance to include more specificity on the content of independent engineering reports and on the development of project cost estimates to provide the level of detail needed to better assess overall project feasibility before substantially reviewing LGP applications. Since our 2008 recommendation, DOE increased the content guidelines for engineering reports in later solicitations, partly implementing our recommendation. However, the actions taken by DOE did not fully address the intent of our recommendation. The Secretary of Energy should direct the Chief Financial Officer to clearly define needs for contractor expertise to facilitate timely application reviews before substantially reviewing LGP applications. Closed – Implemented To facilitate timely action on applications for loan guarantees, DOE developed “standing source” lists of contractors with legal, engineering, financial, and marketing expertise. Listed contractors were determined by DOE to be capable of providing specific services that DOE identified. Such contractors were available for selection, under a competitive process, to review projects under consideration for loan guarantees. Developing the standing list helped ensure that DOE would have the necessary expertise readily available during the review process. The Secretary of Energy should direct the Chief Financial Officer to complete detailed internal loan selection policies and procedures that lay out roles and responsibilities and criteria and requirements for conducting and documenting analyses and decision making before substantially reviewing LGP applications. In March 2009, DOE issued a Credit Policies and Procedures Manual that established detailed internal loan selection policies and procedures, including roles and responsibilities for LGP staff, and criteria for conducting analyses and decision making, but the manual did not provide detailed guidance for documenting analyses. In October 2011, LGP revised its Credit Policies and Procedures manual to also include specific instructions to LGP staff to document their analyses and decisions in LGP’s records management system. GAO-10-627 Recommendation The Secretary of Energy should direct the program management to develop relevant performance goals that reflect the full range of policy goals and activities for the program, and to the extent necessary, revise the performance measures to align with these goals. Action taken According to DOE officials, LGP adheres to and supports the current DOE Strategic Plan. However, LGP could not provide documentation or evidence of either an improvement in alignment between DOE performance goals and LGP policy goals or the revision of LGP performance measures. We continue to believe that relevant and revised performance goals and measures would improve DOE’s ability to evaluate and implement the LGP. The Secretary of Energy should direct the program management to revise the process for issuing loan guarantees to clearly establish what circumstances warrant disparate treatment of applicants so that DOE’s implementation of the program treats applicants consistently unless there are clear and compelling grounds for doing otherwise. 
DOE did not concur with the recommendation and has not taken action to implement it. The Secretary of Energy should direct the program management to develop an administrative appeal process for applicants who believe their applications were rejected in error and document the basis for conclusions regarding appeals. DOE did not concur with the recommendation and has not taken action to implement it. The Secretary of Energy should direct the program management to develop a mechanism to systematically obtain and address feedback from program applicants, and, in so doing, ensure that applicants’ anonymity can be maintained, for example, by using an independent service to obtain the feedback. In September 2010, DOE created a mechanism for submitting feedback—including anonymous feedback—through its website. GAO-11-145 Recommendation The Secretary of Energy should direct the ATVM Program Office to accelerate efforts to engage sufficient engineering expertise to verify that borrowers are delivering projects as agreed. Closed – Implemented Since issuance of our report in February 2011, DOE changed its budgeting practices for monitoring ATVM loans to better ensure that funds would be available to engage independent engineering expertise when needed. DOE also changed its policy for engaging technical expertise, making it the same as for the Title XVII LGP. The Secretary of Energy should direct the ATVM Program Office to develop sufficient and quantifiable performance measures for its three goals. In its original comments to our report, and in a subsequent statement of its management decisions, DOE stated that it disagreed with our recommendation. DOE stated its belief that the ATVM program adhered to the requirements of the statute authorizing the program and that the performance measures we suggested would greatly expand the scope of the program—DOE stated it would not develop any new measures not specified by Congress. GAO-12-157 Recommendation The Secretary of Energy should direct the Executive Director of the Loan Programs Office to commit to a timetable to fully implement a consolidated system that enables the tracking of the status of applications and that measures overall program performance. Action taken DOE did not concur with the recommendation and has not taken action to implement it. The Secretary of Energy should direct the Executive Director of the Loan Programs Office to ensure that the new records management system contains documents supporting past decisions, as well as those in the future. DOE concurred with this recommendation but has not provided us with information regarding its implementation. The Secretary of Energy should direct the Executive Director of the Loan Programs Office to regularly update the LGP’s credit policies and procedures manual to reflect current program practices to help ensure consistent treatment for applications to the program. In December 2015, DOE published its revised LPO credit policies and procedures manual, which sets the basic criteria for the determination of eligibility, underwriting of loan and loan guarantee requests, and the management of closed loans and loan guarantees. GAO-14-367 Recommendation The Secretary of Energy should direct the Executive Director of the Loan Programs Office to fully develop its organizational structure by staffing key monitoring positions. 
Action taken DOE officials told us that they developed short- and long-term plans for staffing key loan monitoring positions and risk mitigation positions within the Portfolio Management Division and Risk Management Division, respectively. In February 2016, DOE provided us with evidence that it had identified 24 key positions in these two divisions; however, most of these positions remain unfilled, so the recommendation status remains open. The Secretary of Energy should direct the Executive Director of the Loan Programs Office to fully develop its organizational structure by updating management and reporting software. In February 2016, DOE officials provided us with evidence that they had completed and implemented updates for their management and reporting systems. The Secretary of Energy should direct the Executive Director of the Loan Programs Office to fully develop its organizational structure by completing policies and procedures for loan monitoring and risk management. In February 2016, DOE officials provided us with evidence that they developed, revised, reviewed, and implemented the majority of their portfolio monitoring and risk management policies and procedures. However, some key work processes (e.g., Alleged Fraud, Waste, or Abuse reporting and Risk Assessment processes) are still under development, so the recommendation status remains open. The Secretary of Energy should direct the Executive Director of the Loan Programs Office to evaluate the effectiveness of DOE’s monitoring by performing the credit review, compliance, and reporting functions outlined in the 2011 policy manual for DOE’s loan programs. In February 2016, DOE officials told us that the Risk Management Division evaluates the effectiveness of DOE’s monitoring via annual internal assessments. DOE began the first of these annual assessments in October 2015 and provided GAO with updated procedures for conducting these assessments. DOE Loan Programs: Current Estimated Net Costs Include $2.2 Billion in Credit Subsidy, Plus Administrative Expenses. GAO-15-438. Washington, D.C.: April 27, 2015. DOE Loan Programs: DOE Has Made More Than $30 Billion in Loans and Guarantees and Needs to Fully Develop Its Loan Monitoring Function. GAO-14-645T. Washington, D.C.: May 30, 2014. DOE Loan Programs: DOE Should Fully Develop Its Loan Monitoring Function and Evaluate Its Effectiveness. GAO-14-367. Washington, D.C.: May 1, 2014. Federal Support for Renewable and Advanced Energy Technologies. GAO-13-514T. Washington, D.C.: April 16, 2013. Department of Energy: Status of Loan Programs. GAO-13-331R. Washington, D.C.: March 15, 2013. DOE Loan Guarantees: Further Actions Are Needed to Improve Tracking and Review of Applications. GAO-12-157. Washington, D.C.: March 12, 2012. Department of Energy: Advanced Technology Vehicle Loan Program Implementation Is Under Way, but Enhanced Technical Oversight and Performance Measures Are Needed. GAO-11-145. Washington, D.C.: February 28, 2011. Department of Energy: Further Actions Are Needed to Improve DOE’s Ability to Evaluate and Implement the Loan Guarantee Program. GAO-10-627. Washington, D.C.: July 12, 2010. Department of Energy: New Loan Guarantee Program Should Complete Activities Necessary for Effective and Accountable Program Management. GAO-08-750. Washington, D.C.: July 7, 2008. Department of Energy: Observations on Actions to Implement the New Loan Guarantee Program for Innovative Technologies. GAO-07-798T. Washington, D.C.: April 24, 2007. 
The Department of Energy: Key Steps Needed to Help Ensure the Success of the New Loan Guarantee Program for Innovative Technologies by Better Managing Its Financial Risk. GAO-07-339R. Washington, D.C.: February 28, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOE's Loan Programs Office administers the LGP for certain renewable or innovative energy projects and the ATVM loan program for projects to produce more fuel-efficient vehicles and components. Both programs can expose the government to substantial financial risks if borrowers default. DOE considers these risks in calculating credit subsidy costs. The law requires that the credit subsidy costs of DOE loans and loan guarantees be paid for by appropriations, borrowers, or some combination of both. This testimony summarizes (1) DOE's progress in addressing GAO's prior recommendations related to the implementation and oversight of its loan programs and (2) GAO's 2015 report on the credit subsidy costs of the DOE loan programs. This statement is based on a body of work that GAO completed between February 2007 and April 2015. GAO made numerous recommendations in these reports and obtained updates from agency officials. GAO is not making any new recommendations in this testimony. The Department of Energy (DOE) has made efforts to improve the implementation and oversight of its loan programs and, to date, has taken actions to address 15 of 24 of GAO's prior related recommendations. DOE's Loan Guarantee Program (LGP), authorized by Congress in 2005, was designed to encourage certain types of energy projects (e.g., nuclear, solar, and wind generation; solar manufacturing; and energy transmission) by agreeing to reimburse lenders for the guaranteed amount of loans if the borrowers default. DOE's Advanced Technology Vehicles Manufacturing (ATVM) loan program, authorized by Congress in 2007, was designed to encourage the automotive industry to invest in technologies to produce more fuel-efficient vehicles and their components. In 2007, 2008, and 2010—which covered the early stages of the LGP—GAO made 15 recommendations to address numerous concerns where DOE had moved forward with that program before key elements were in place. For example, in its February 2007 report, GAO found that DOE's actions had focused on expediting program implementation—such as soliciting preapplications for loan guarantees—rather than ensuring the department had in place the critical policies, procedures, and mechanisms needed to better ensure the program's success. DOE has implemented 11 of the 15 recommendations. In 2011, 2012, and 2014, as Congress expanded the loan programs, GAO made 9 additional recommendations to address concerns about DOE making loans and disbursing funds without having sufficient engineering expertise, sufficient and quantifiable performance measures for assessing program progress, or a fully developed loan monitoring function, among other things. Although DOE generally agreed with most of the 9 recommendations, to date it has implemented only 4 of them. In an April 2015 report, GAO found that DOE estimated the credit subsidy costs of the loans and loan guarantees in its portfolio—that is, the total expected net cost to the government over the life of the loans—to be about $2.2 billion as of November 2014, including about $807 million for five loans on which borrowers had defaulted. The estimated $2.2 billion in credit subsidy costs was a decrease from DOE's initial estimates totaling about $4.5 billion. GAO found that changes in credit subsidy cost estimates varied by loan program and the type of technology supported by the loans and loan guarantees, among other factors. 
Specifically, defaults on loan guarantees for two solar manufacturing projects and one energy storage project were largely responsible for an increase in the credit subsidy cost estimate for the LGP's portfolio from $1.33 billion when the loan guarantees were issued to $1.81 billion as of November 2014. Borrowers also defaulted on two ATVM loans, but the credit subsidy cost estimate for the ATVM loan program's portfolio decreased from $3.16 billion to $404 million as of November 2014, mainly because of a significant improvement in the credit rating of one loan. In DOE's portfolio, 21 of the 30 projects had guaranteed revenue streams provided for under a long-term contract, such as a power purchase agreement, but none of the five defaulted loans supported projects with such a contract. GAO also found that administrative costs of the loan programs totaled about $312 million from fiscal year 2008 through fiscal year 2014; these costs are not included in credit subsidy costs.
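As a rough check of how these portfolio changes net out, the short calculation below reconciles the rounded figures cited above; it is illustrative only, and small differences from the reported totals reflect rounding.

```python
# Reconciling the rounded credit subsidy figures cited above (in billions of dollars).
lgp_initial, lgp_current = 1.33, 1.81      # LGP estimate rose after three defaults
atvm_initial, atvm_current = 3.16, 0.404   # ATVM estimate fell on an improved credit rating

increase_lgp = lgp_current - lgp_initial        # about +0.48 billion
decrease_atvm = atvm_initial - atvm_current     # about 2.76 billion
print(f"LGP increase:  +${increase_lgp:.2f} billion")
print(f"ATVM decrease: -${decrease_atvm:.2f} billion")
print(f"Initial total: ${lgp_initial + atvm_initial:.2f} billion (about $4.5 billion)")
print(f"Current total: ${lgp_current + atvm_current:.2f} billion (about $2.2 billion)")
```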
For further information regarding this testimony, please contact Seto J. Bagdoyan, (202) 512-6722 or bagdoyans@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Cindy Brown Barnes (Director), Tonita Gillich (Assistant Director), Holly Dye, Erin Godtland, Joel Green, and Erin McLaughlin. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's April 2017 report, entitled SSA Disability Benefits: Comprehensive Strategic Approach Needed to Enhance Antifraud Activities ( GAO-17-228 ). The Social Security Administration (SSA) has taken steps to establish an organizational culture and structure conducive to fraud risk management in its disability programs, but its new antifraud office is still evolving. In recent years, SSA instituted mandatory antifraud training, established a centralized antifraud office to coordinate and oversee the agency's fraud risk management activities, and communicated the importance of antifraud efforts. These actions are generally consistent with GAO's Fraud Risk Framework, a set of leading practices that can serve as a guide for program managers to use when developing antifraud efforts in a strategic way. However, SSA's new antifraud office, the Office of Anti-Fraud Programs (OAFP), faced challenges establishing itself as the coordinating body for the agency's antifraud initiatives. For example, the OAFP has had multiple acting leaders, but SSA recently appointed a permanent leader of OAFP to provide accountability for the agency's antifraud activities. SSA has taken steps to identify and address fraud risks in its disability programs, but it has not yet comprehensively assessed these fraud risks or developed a strategic approach to help ensure its antifraud activities effectively mitigate those risks. Over the last year, SSA gathered information about fraud risks, but these efforts generally have not been systematic and did not assess the likelihood, impact, or significance of all risks that were identified. SSA also has several prevention and detection activities in place to address known fraud risks in its disability programs such as fraud examination units, which review disability claims to help detect fraud perpetrated by third parties. However, SSA has not developed and documented an overall antifraud strategy that aligns its antifraud activities to its fraud risks. Leading practices call for federal program managers to conduct a fraud risk assessment and develop a strategy to address identified fraud risks. Without conducting a fraud risk assessment that aligns with leading practices and developing an antifraud strategy, SSA's disability programs may remain vulnerable to new fraud schemes, and SSA will not be able to effectively prioritize its antifraud activities. SSA monitors its antifraud activities through the OAFP and its National Anti-Fraud Committee (NAFC), which serves as an advisory board to the OAFP, but the agency does not have effective performance metrics to evaluate the effect of such activities. The OAFP has responsibility for monitoring SSA's antifraud activities and establishing performance and outcome-oriented goals for them. It collects metrics to inform reports about its antifraud initiatives, and the NAFC receives regular updates about antifraud initiatives. However, the quality of the metrics varies across initiatives and some initiatives do not have metrics. Of the 17 initiatives listed in SSA's 2015 report on antifraud initiatives, 10 had metrics that did not focus on outcomes, and 4 did not have any metrics. For example, SSA lacks a metric to help monitor the effectiveness of its fraud examination units. Leading practices in fraud risk management call for managers to monitor and evaluate antifraud initiatives with a focus on measuring outcomes. 
Without outcome-oriented performance metrics, SSA may not be able to evaluate its antifraud activities, review progress, and determine whether changes are necessary.
The IRC permits employers to sponsor defined contribution (DC) retirement plans and outlines requirements to which plan sponsors must adhere for tax-qualified status. With DC plans, employees have individual accounts to which employers, employees, or both make periodic contributions. DC plan benefits are based on the contributions to, and investment returns on, the individual accounts, and the employee bears the investment risk. In some types of DC plans, including 401(k), 403(b), and 457 plans and the Savings Incentive Match Plan for Employees (SIMPLE), employees may choose to make tax-deferred contributions instead of receiving the same amount as taxable salary. IRS and the Pension and Welfare Benefits Administration (PWBA) of the Department of Labor are primarily responsible for enforcing laws related to private DC plans. Under the Employee Retirement Income Security Act (ERISA) of 1974, IRS and PWBA jointly enforce standards for coverage and participation, for vesting, and for funding that, respectively, determine how plan participants become eligible to participate in benefit plans, define how participants become eligible to earn rights to benefits, and ensure that plans have sufficient assets to pay promised benefits. IRS also enforces provisions of the IRC that apply to tax-qualified pension plans, including provisions under section 401(k) of the Code. PWBA enforces ERISA’s reporting and disclosure provisions and fiduciary standards, which concern how pension plans should operate in the best interest of participants. Since the IRS 401(k) plan compliance study was undertaken in 1995, various changes have occurred in the legal requirements for tax-qualified status that IRS examined in the study. Some of these requirements are no longer applicable to tax-qualified DC plans or have been materially modified. Also, the IRC has since been amended to permit employers to adopt SIMPLE 401(k) plans and safe-harbor design methods for 401(k) plans. SIMPLE 401(k) plans and safe-harbor designs exempt 401(k) plan sponsors from certain rules that apply generally to 401(k) plans. However, many of the statutory requirements that IRS examined in the 401(k) study have not changed materially. We were not able to assess the extent to which changes in relevant pension laws and 401(k) plan designs have affected the overall prevalence and incidence of noncompliance among the population of 401(k) plans (see app. II for more detail on changes in relevant pension laws since the study was published). IRS groups violations of the IRC and corresponding regulations governing tax-qualified status into four categories. Plan Document failure occurs when the language of the plan documents does not comply with provisions of the tax Code. Operational failure occurs when the implementation and operation of a plan does not comply with provisions of the tax Code. Demographic failure occurs when a plan fails to comply with fundamental nondiscrimination requirements faced by all tax-qualified plans. Employer Eligibility failure occurs when an employer that is not allowed to establish a section 401(k) plan, such as a state or local government, adopts such a plan. IRS issued Revenue Procedure 2001-17 in February 2001 to establish its current framework for promoting the compliance of tax-qualified pension plans with the applicable requirements of the IRC. This framework has evolved since IRS first introduced voluntary compliance procedures in the early 1990s. 
To promote compliance, IRS developed the Self-Correction Program (SCP), the Voluntary Correction Program (VCP), and the Audit Correction Agreement Program (Audit CAP). SCP is used to correct insignificant Operational failures at any time, without fee or sanction and without IRS supervision. VCP allows plan sponsors to voluntarily report and correct all types of qualification failures with IRS approval. Upon receiving IRS approval of the proposed correction measures, plan sponsors must implement the specified corrective measures and pay a compliance fee, one that is, on average, much less than the financial sanctions assessed for violations identified by IRS audits. The Audit CAP allows plan sponsors to correct all types of qualification failures that IRS identifies through formal audits. Under Audit CAP, plan sponsors must correct all qualification failures and pay a negotiated financial sanction commensurate with the nature, extent, and severity of the failures. If IRS and the plan sponsor do not reach an agreement with respect to the correction of the failure(s), IRS can pursue disqualification of the plan for tax purposes. All IRS audits of tax-qualified employer-sponsored plans are carried out under one of two audit programs, the Examination Program or the Compliance Research Program. The Examination Program includes a wide range of compliance-related activities. These activities include auditing based on referrals and computer targeting, training for IRS examiners who perform plan audits, and reviewing closed audit cases. The Compliance Research Program sponsors studies, such as the 401(k) study, to identify and monitor noncompliance among private plans. Compliance studies are based on plan audits, which IRS conducts so that it can collect study data. IRS is in various stages of planning and conducting compliance research on several types of private pensions, and intends to use data from these studies to develop more effective enforcement and compliance activities. However, plan audits conducted under the auspices of the Compliance Research Program represent a small proportion of IRS’s total audit activity. In fiscal year 2001, IRS plans allocated a total of 1,845 staff days to audits for the Compliance Research Program, compared with 33,734 staff days allocated to audits for the Examination Program. For fiscal year 2002, IRS plans to increase the number of staff days related to compliance research activities, but direct examination activities will still constitute the majority of IRS’s audit work. Audits of employer pension plans are initiated when the IRS selects for audit a plan return, or form 5500 filing, from the Return Inventory Classification System. A notification letter is sent to the plan sponsor with a request for information that the examiner needs to complete the audit. IRS examiners complete a process that includes interviewing the plan benefits administrator, reviewing plan documents, and holding a closing conference to discuss the results of the audit with the plan sponsor. If an examiner finds a qualification issue, or a failure that can potentially disqualify a plan’s tax-exempt status, the examiner can resolve the violation through correction under IRS’s SCP (Self-Correction Program disposal) or enter into a closing agreement with the plan sponsor through the Audit CAP (closing agreement disposal). 
Both of these audit disposal methods indicate that the examiner identified a violation that could potentially disqualify the plan, but the Audit CAP closing represents a more significant disposal than correction under the SCP. IRS audited a sample of 401(k) plans to collect data and estimate noncompliance with certain requirements of the Internal Revenue Code. IRS examiners were provided with a questionnaire to obtain information on the compliance of these 401(k) plans after conducting the audits. Once the data were gathered, IRS identified 73 study questions that could indicate whether or not a plan was in compliance. IRS data analyses produced estimates on the number of plans that failed to comply in one or more instances, based on the answers to these 73 compliance indicators. IRS’s original estimates on noncompliance decreased after some adjustments were made to its initial analysis. In selecting a sample of plans to study, IRS analyzed a database that it maintains on the population of tax-qualified plans. This database contains records of form 5500 returns that plan sponsors file with the IRS and Department of Labor, and IRS identified pension plans that had reported a 401(k) feature for the 1993 plan year. IRS identified 143,535 plans that reported a 401(k) plan feature, but excluded about 470 plans prior to sample selection, because these plans had no participants at the end of 1993 or had recently been audited by IRS. This step reduced the population to 142,768 401(k) plans from which IRS would select plans to study. These remaining 401(k) plans were subdivided evenly by size into three groups labeled small, medium, and large plans. To create a sample of 525 plans, IRS randomly drew equal numbers of plans from these small, medium, and large categories. The method that IRS used to create the sample of 525 401(k) plans from these categories was basically equivalent to drawing a simple random sample in which each plan had an equal probability of selection. However, before drawing 175 plans from each of these three groups, IRS carved out the 25 largest plans from the large-plan category and put these plans into a separate group that it called “super- large plans”; this super large category was selected as a 100 percent sample of the largest 401(k) plans. Taken together, IRS’s sampling method was intended to produce a representative sample from, and reliable results for, the 401(k) plan population. The sample of 550 plans was assigned to IRS key district offices, where study coordinators were responsible for selecting the plan’s 1994 form 5500 return and assigning the plan to an IRS examiner for audit. IRS examiners were provided a questionnaire to obtain information on the compliance of the 401(k) plans and were instructed to complete the questionnaire after auditing each plan in the study. IRS examiners’ answers to the study questions were based on their plan audits. The questionnaire, or check sheet, that IRS used for its study was originally developed as part of a broad information-gathering project and included 254 questions to obtain information on 401(k) plan characteristics, design features, and compliance with certain requirements of the IRC. IRS used this available questionnaire to collect data relevant to its study objective of measuring 401(k) plan compliance. Once the study questionnaires were completed, they were sent to IRS Employee Plans headquarters for review and data analysis. 
Prior to its data analysis of 401(k) plan noncompliance, IRS reviewed all 550 questionnaires and excluded 78 of them from the analysis because the study questionnaire contained insufficient data or because the plan erroneously reported a 401(k) plan feature. Once the data on the 472 remaining plans were gathered in Employee Plans headquarters, IRS analysts identified 73 out of the 254 questions on the questionnaire that they believed could indicate whether or not a plan was in compliance with certain requirements. That is, IRS identified the study questions it expected would provide information that a plan was either in compliance or not in compliance with certain requirements. These 73 “compliance indicators” became the focus of IRS’s analysis in identifying and summarizing the prevalence and types of noncompliance among 401(k) plans. The study questions that related to compliance issues included a range of items concerning certain statutory requirements that apply to all qualified defined contribution plans and concerning legal requirements that apply to qualified 401(k) plans. For example, the compliance indicators that IRS analyzed included items concerning employer contribution requirements, coverage rules, nondiscrimination provisions, and limits on employee contributions in addition to other important rules and requirements that qualified plans must satisfy. IRS data analyses identified the number of plans that failed to comply with one or more of their compliance indicators. The IRS study reported that 44 percent of the 472 plans remaining in the study had one or more instances of noncompliance with certain requirements that IRS examined; the other 56 percent of the plans were found to have no violations. These percentages varied slightly according to plan size category. The study reports that 41 percent of small plans, 47 percent of medium plans, and 44 percent of large and super-large plans had one or more instances of noncompliance. IRS also used its noncompliance indicator data to estimate, by calculating the number of times specific violations were identified, the frequency with which these violations occurred in its study sample. IRS analyses counted 251 instances of noncompliance that it categorized by requirements to which tax-qualified 401(k) plans should adhere. In total, the study publication uses 16 categories, such as nondiscrimination, loans, coverage, vesting, and participation, to report on various types of noncompliance that IRS found among the 401(k) plans in the study. For each of the compliance categories, the study publication reports the total number of violations that occurred. According to the study report, the total number of violations for each compliance category cannot be correlated to the number of plans containing these violations because some plans may have contained more than one violation within a category. As a result, the study publication does not show how many plans had more than one instance of noncompliance in a single category or how many plans had more than one type of compliance violation. The analysis did not attempt to distinguish instances of noncompliance according to the severity of the violation. For the plans that had one or more instances of noncompliance, no study questions captured information on the insignificance or significance of the violations that IRS identified. 
Nor did the questionnaire include specific items on the number of participants (if any) affected and the amount of assets (if any) that were represented by the noncompliance errors IRS found. The questionnaire did contain items on the total number of plan participants and assets, but IRS did not analyze these data in relation to its findings on noncompliance. IRS’s original estimates on 401(k) plan noncompliance decreased after IRS made some adjustments to its initial analysis of compliance indicator data. Initially, IRS used its noncompliance indicator data to produce estimates of 401(k) plan noncompliance. For some plans, however, IRS found problems with the data for specific compliance indicators. During its analysis, IRS told us that it sometimes discovered instances in which data for certain compliance indicators were found to be either inaccurate or insufficient to determine whether an instance of noncompliance had occurred. However, IRS’s discovery of discrepancies in the data was not the result of systematically reviewing all the compliance indicator data for each plan in the study. Instead, in some of these instances where IRS discovered problems with its compliance indicator data, these data were compared with information that IRS routinely captures about the results of their plan audits. According to IRS, analysts who worked on the data analysis met occasionally to review the data recorded on the study questionnaires and to determine whether the compliance study data were sufficient to identify noncompliance. After comparing the compliance indicator data with the other information that IRS collects on their audits of these 401(k) plans, the analysts made adjustments to the compliance indicator data. However, IRS analysts sometimes adjusted the data solely on the basis of their assessments that specific compliance indicators were not reliable or sufficient to determine whether or not a violation had occurred. Because these adjustments were not based on a systematic review of the accuracy and sufficiency of the data, we could not determine whether the adjustments that IRS made resolved all potential problems with its compliance indicator data. These adjustments changed noncompliant plans to compliant, and vice versa. For example, IRS analysts determined that some 401(k) plans with at least one violation of certain nondiscrimination requirements were found to be fully compliant once the additional information was included in the analysis. Also, some plans that had been included in the original estimate of plans with no compliance errors were determined to have at least one instance of noncompliance when IRS used this extra information to inform its analysis. When IRS used these adjustments to supplement the analyses it had performed, the total number of compliance violations decreased. At one point during its data analysis, IRS estimated that 298 total instances of noncompliance had occurred among the plans in the study. However, IRS’s final estimate of the total number of compliance errors was revised downward to 251. As a result of these changes, IRS’s estimate of the percentage of 401(k) plans with one or more instances of noncompliance decreased from 56 percent to 44 percent. The IRS study did not, in general, provide accurate estimates of the overall prevalence and types of noncompliance among 401(k) plans. 
IRS’s estimates of noncompliance among 401(k) plans were inaccurate primarily because only 27 of the 73 questions that it identified as compliance indicators conclusively demonstrated a plan’s noncompliance. Also, the reported findings could not be generalized to the broader population of all 401(k) plans because the analysis did not take into account the sample weights. More than half of the study questions that IRS identified to analyze 401(k) plan compliance were unable to conclusively demonstrate noncompliance. We asked IRS analysts involved in the study’s data analysis to evaluate the 73 questions that were selected as compliance indicators and determine whether these questions could definitively demonstrate a compliance violation. In evaluating each of the compliance indicators, IRS assessed whether the answers to these questions would provide information that was relevant to, or suggestive of, noncompliance or in fact demonstrated an instance of noncompliance. As a result of this evaluation, the IRS analysts identified only 27 questions that could definitively demonstrate noncompliance. In contrast, IRS determined that the remaining 46 questions were not sufficient by themselves to demonstrate noncompliance because potential problems rendered these indicators less conclusive. Although a positive response was generally sufficient to demonstrate compliance, the IRS analysts whom we spoke with told us that additional information would be needed to determine whether or not negative answers to these questions conclusively indicated noncompliance. Consequently, the 44 percent of plans reported to have one or more compliance violations is at best an upper-bound estimate of the extent of noncompliance found in this study because the reported results are not limited to those items with sufficient information to identify noncompliance. IRS’s compliance indicators were not initially developed to specifically identify and substantiate noncompliance among 401(k) plans and the answers were not validated as accurately demonstrating noncompliance. Instead of formulating study questions that were directly relevant and sufficient to demonstrate noncompliance, IRS used an already available questionnaire that had been developed as part of a broad information- gathering project. This broadly scoped research project had been revised to address the narrower objective of 401(k) plan compliance. Only after administering the check sheet and collecting the data did IRS identify the study questions that it expected to demonstrate noncompliance. As a result, most of the 254 questions on the questionnaire were not directly relevant to the study objective of estimating noncompliance among 401(k) plans. Also, some of the answers expected to demonstrate noncompliance from IRS’s analysis of noncompliance indicator data were found to be suggestive, rather than demonstrative, of noncompliance. Although IRS, to help ensure ease in recording the answers, pretested the software that its examiners used to complete the study questionnaires, it did not pretest the study questions. Because IRS did not pretest the questionnaire for the accuracy and appropriateness of the answers, problems with the questionnaire were not identified or remedied before the data were collected. 
For example, answers might have more accurately reflected the types of information that were being sought if a preliminary evaluation and pretesting of the 73 compliance indicators had been used to improve the wording of the questions and the instructions provided to the examiners collecting the information. We found that the accuracy of IRS estimates was also hampered by the lack of adequate training for examiners who filled out the study questionnaires after completing the audits. Each field office sent representatives to a kickoff conference that provided training for the 401(k) study. However, the training did not address which study questions would be used to distinguish compliance from noncompliance, because IRS identified these questions after the data were collected. Additionally, IRS told us that uniform audit standards were not developed to guide examiners in conducting the audits and in using the audit information to answer the study questions. As a result, an IRS analyst responsible for the data analysis stated that the 401(k) plan audits were not uniform and that some of the data were not collected consistently. Further, the representatives trained were not the examiners expected to conduct the audits and complete the subsequent questionnaires but rather the field office representatives charged with managing the local data collection efforts and transmitting data to headquarters for analysis. However, the field office representatives did not receive information regarding which questions would be used to distinguish compliance from noncompliance and thus could not relay this information to the auditing examiners. Despite the discovery of inaccurate and inconsistent answers, IRS did not systematically verify the accuracy of all the data analyzed. Instead, the IRS analyst who summarized the study data told us that he made some judgmental corrections to obviously incorrect or inconsistent answers rather than ordering the relevant closed case file or contacting the relevant examiner to obtain valid and accurate answers. As a result, some answers to certain study questions were not used in IRS’s final estimates of 401(k) plan noncompliance and others were used but judgmentally adjusted. In addition, the use of additional information to revise estimates of noncompliance was not well documented. We could not verify the revisions in IRS estimates, because IRS was not able to provide us with a single complete data file to check whether its reclassifications of plans as compliant or noncompliant were accurate. More complete documentation would have helped IRS ensure that it accurately estimated the proportion of plans that had one or more compliance errors and the frequency of occurrence for specific violations. Not all of the IRS study findings could be generalized to the broader population of all 401(k) plans, a fact that makes them less useful. To the extent that findings were reported separately for the small, medium, large, or super-large groupings, these results are reliable estimates for compliance errors of all plans in such groups (other data issues notwithstanding). For example, the report estimates that 53 percent of the 162 medium plans audited had no violations. This figure can also be used as an estimate of the percentage of medium-size plans in the broader population that had no violations (other data issues notwithstanding). 
However, in cases where compliance information was aggregated to include results from more than one group, such results are not reliable estimates for compliance errors of other plans in these groups. IRS sampled all of the super-large 401(k) plans to ensure their inclusion in the study. Because the super-large plans were a 100 percent sample and the plans sampled in the other plan-size categories each represented about 1,000 plans from the total population, combining sample results for these groups without weighting them gives the super-large plans more influence in the final answer than is warranted by their representation in the total 401(k) plan population. Proper weighting of all sample cases is necessary to make tabulations and other estimates that can be generalized to the broader 401(k) population. In some cases, information from large and super-large plans was combined for reporting. In other cases, information was combined for all plans studied. For example, the report estimates that 56 percent of the 171 large and super-large plans studied had no compliance errors and that 56 percent of all 472 plans studied had no errors. In these cases where IRS has combined information for all plans in the study or for two plan-size categories, the reported percentages do not represent the percentages in the corresponding population of 401(k) plans. If IRS’s analysis had accounted for its sampling methodology, it is possible that IRS would have produced estimates similar to the reported results of the 401(k) study because the reported estimates of the proportion of plans with one or more instances of noncompliance were similar across plan- size categories. Furthermore, the 401(k) study findings cannot be used as estimates of noncompliance among the current population of 401(k) plans. To assess whether the 401(k) study results reflect the level and types of noncompliance among the current population of 401(k) plans, the data that support the published results would need further analysis to account for changes that have occurred in relevant pension laws since the study was undertaken. Although the 401(k) study publication describes changes to relevant pension laws that occurred during the course of the study, it is not possible to determine how these changes have affected noncompliance among 401(k) plans by simply examining the study findings. Also, changes have occurred in relevant pension laws since the study was published (see app. II for a description of changes in relevant pension laws since the study was published). IRS is currently planning and conducting research on several types of private pension plans to determine the prevalence and types of noncompliance. To obtain information on the extent and types of noncompliance among these plans, IRS plans to conduct compliance studies similar to the one conducted on 401(k) pension plans. After implementing initiatives to improve compliance, IRS plans to once again collect and analyze similar compliance data to determine the effectiveness of its initiatives. In its ongoing research efforts, IRS is adopting lessons from its prior compliance study. IRS is currently planning and conducting compliance research on several types of private pension plans. According to IRS officials who are involved in IRS enforcement and audit activities, compliance research will be used to help plan and implement initiatives that address compliance issues among various types of plans. 
IRS also told us that compliance research initiatives could be useful sources of information for plan sponsors and administrators, who are encouraged by IRS to use voluntary compliance procedures in identifying and remedying noncompliance. In addition, IRS uses this information to determine issues that are appropriate for published guidance. Ongoing compliance research is being conducted according to an overall strategy that IRS calls its market segment approach, developed to identify compliance issues among various types of tax-qualified pensions that employers sponsor. This market segment approach is being used by IRS to estimate the level and types of noncompliance among specific types of pension plans and to measure the impact of initiatives that IRS devises to address noncompliance. IRS has selected specific types of private plans for ongoing compliance research, including 401(k) plans, sections 403(b) and 457 plans, and multiemployer plans. IRS chose these plan types for several reasons, such as their prevalence, the significant degree of noncompliance known from past audits of these plan types, and/or the need to develop experience in conducting audits and compliance research. According to IRS, compliance studies for these plan types are in various stages of development and implementation. In the future, IRS plans to expand its compliance research and initiative development to other types of private plans. IRS officials whom we spoke with said that these compliance studies will be similar in overall design to the prior 401(k) study. For the various plan types that IRS has identified, IRS will select plans to study through sampling or some other mechanism. A study questionnaire will be developed to capture information about compliance with certain requirements. IRS examiners will audit plans that have been selected for the study and will answer study questions on the basis of the audits. The data that IRS collects will be analyzed, and the results will be used to estimate the extent and types of noncompliance among the plans in these studies. Study findings will be used by IRS as baseline information about noncompliance among the plan types selected for compliance research. After implementing initiatives designed to improve the compliance of the plan types that were selected for compliance research, IRS will conduct a follow-up compliance study to assess the impact of its compliance activities and specific initiatives. The follow-up studies will be designed to collect data that permit a comparison with baseline data from the initial studies of the level and types of noncompliance. IRS data analysis and examination of results from both the initial and follow-up studies will help IRS determine whether overall compliance has improved. IRS staff told us that as the results of compliance studies become available, IRS will be able to make better assessments of how to use compliance study data. For example, IRS has conducted compliance research on 403(b) plans that it has used to develop specific outreach and education initiatives, including a Web site with information on noncompliance and speaking points for IRS examiners who meet with plan sponsors and administrators. In addition, IRS plans to use its compliance studies to improve the way it conducts audits. 
For example, IRS intends to use the results of compliance studies to develop more standardized audit guidelines and targeted audits to better identify compliance issues among, and to limit plan audits to those issues relevant to, specific types of plans. IRS is adopting lessons learned from its prior compliance study to enhance the quality and usefulness of ongoing and future compliance research initiatives. Through our review of IRS work plans and interviews with IRS officials, we identified several aspects of current and future IRS compliance studies that are improvements on the prior 401(k) study. For example, IRS’s current approach to planning compliance research has become more comprehensive. Unlike the 1995 401(k) study, IRS work plans indicate that “compliance planning groups” have been assembled for each of the four plan segments on which IRS is conducting compliance research. These groups, which include key stakeholders from across the agency with expertise in various aspects of pension plan compliance, are being used to help IRS formulate comprehensive plans for conducting upcoming compliance research. According to IRS officials whom we spoke with, IRS will obtain guidance and input from its Research and Analysis group to assist with the design and implementation of its compliance studies. We identified other aspects of compliance studies, in addition to better planning, that improve on the prior 401(k) study. In conducting upcoming studies, IRS told us that it plans to develop and provide enhanced training for examiners who are responsible for auditing the plans and recording the study information. For example, IRS plans to conduct a training session for IRS examiners who will be assigned to conduct 401(k) plan audits for ongoing compliance research. IRS officials told us that examiners would receive training on the study questionnaires and in how to answer the study questions. In addition, part of the training that IRS intends to provide for 401(k) plan studies will be based on standardized guidelines that IRS has developed for collecting information from 401(k) plan audits. IRS has developed standardized audit guidelines for each of the plan types that the agency has selected for ongoing compliance research. According to IRS, these guidelines will help IRS examiners, including examiners involved in compliance studies, collect and record information consistently and accurately. IRS told us that it intends to incorporate other improvements into its upcoming 401(k) plan compliance studies. For example, IRS said that examiners who participate in upcoming IRS compliance studies will have a role in developing the questionnaires used to collect compliance study data, and IRS will pretest compliance study questionnaires to help determine their usefulness and the accuracy of the information that they are intended to collect. Also, IRS is developing automated tools that its examiners will use to record answers to compliance study questions. Automated tools that IRS examiners can use to collect information during the course of an audit have been developed for the 401(k) plans but are still in development for other plan segments. According to IRS officials, these automated tools will help IRS produce work papers to document and verify its compliance study data. Compliance research studies could play an integral role in IRS’s efforts to ensure that tax-qualified pension plans adhere to applicable laws and regulations. 
The findings from such studies can provide data on the prevalence and types of noncompliance among pension plans, helping IRS shape its enforcement efforts. For example, IRS can use compliance study findings to identify key aspects of noncompliance among specific types of plans and develop targeted audits and other activities to address compliance issues. In recent years, IRS enforcement efforts have placed greater emphasis on voluntary correction procedures—that is, encouraging plan sponsors to correct violations that are discovered. Information on noncompliance that is useful and accurate could help improve targeting for audits and enhance voluntary compliance initiatives that assist plan sponsors in discovering and making such corrections. Compliance research can also measure the impact of such efforts to determine whether they are effective. The more accurate the findings from compliance studies, the better able IRS is to ensure that plans are operating in accordance with applicable requirements, so that participants receive the coverage and benefits to which they are entitled. Compliance study findings can help IRS tailor its initiatives to identify, monitor, and address the most essential aspects of noncompliance among specific types of pension plans and measure whether its activities are effective in promoting compliance among plan sponsors. IRS recognizes the need to improve the way it conducts compliance studies and is in the process of implementing specific steps to improve aspects of planning and conducting these studies. Since IRS compliance research is focused on other types of plans besides 401(k) plans, it is important that IRS consistently implement these steps throughout its ongoing and future compliance research initiatives. Several shortcomings of the 1995 IRS 401(k) study undercut its effectiveness in meeting IRS’s research objective of estimating the extent and types of noncompliance among 401(k) plans. These shortcomings cut across important components of the 401(k) study, including questionnaire design, data collection, and data analysis. Whether these and other elements of research are designed and carried-out in a sound manner help determine the effectiveness of research studies in meeting their objectives. For example, the 1995 401(k) study questions were not pretested to determine whether they would have produced demonstrative data on noncompliance, and examiners who completed the study questionnaires were not provided with training on answering the questions in an accurate and uniform manner. To ensure the accuracy of its findings, IRS will need to build steps into its compliance studies that improve the accuracy and usefulness of the data that are collected, analyzed, and reported. Additionally, documenting a research study can help produce evidence that supports the answers to the research questions. Insufficient documentation limits the perceived accuracy and the usefulness of a research study. To ensure the quality and usefulness of ongoing and future compliance studies in providing information that enhances IRS’s efforts to promote compliance among private pension plans, IRS should take steps to improve how it conducts compliance study research. These steps, in addition to the agency’s current efforts to improve the quality of compliance studies, should be incorporated into all planned compliance studies. Accordingly, we are making three recommendations to the IRS Commissioner for all future compliance studies. 
We recommend that IRS pretest compliance study questionnaires to obtain information on the usefulness and accuracy of the answers in achieving IRS’s research objective. We also recommend that IRS provide uniform and comprehensive training to examiners who participate in compliance studies, so that they will know what information is needed to answer the study questions and can collect this information consistently and accurately. Finally, we recommend that IRS maintain sufficient written or electronic documentation to enable it to validate and verify the results of compliance studies with evidence; this would allow IRS to explain the methods used to analyze study data and arrive at findings. We provided a draft of the report to the Commissioner of the IRS and the Department of the Treasury. IRS generally agreed with our findings, conclusions, and recommendations. In its letter, IRS notes that has incorporated our recommendations in a current compliance study on 401(k) plans. We agree that IRS has taken specific steps to improve its current 401(k) plan compliance study and describe these steps in our report. In addition to the current 401(k) study, IRS should also implement our recommendations throughout its current and upcoming compliance study initiatives on 401(k) and other types of pension plans. The IRS also provided us with technical comments, which we incorporated as appropriate. IRS’s comments are included in Appendix III. We are sending copies of this report to the Honorable Paul H. O’Neill, Secretary of the Treasury, the Honorable Charles O. Rossotti, Commissioner of the IRS, and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. Key contributors are listed in appendix III. To determine what IRS did to estimate the prevalence and types of 401(k) plan noncompliance with the requirements of the Internal Revenue Code, we reviewed the final 401(k) compliance study report that IRS posted to its Web site. In addition, we reviewed the initial and interim draft reports that we received from IRS, as well as study-related work papers, which documented the design, implementation, and analysis components of the study. We also interviewed IRS officials in the Employee Plans area of IRS’s Tax Exempt and Government Entities Division, including officials in the Office of Examinations and the Office of Education and Outreach who were responsible for conducting and disseminating compliance research on private plans to obtain information about how IRS designed and conducted the study. Our work focused on identifying and summarizing the major components of the 401(k) study in relation to key elements of research study methodology including the study objective, study design, sample selection, questionnaire design, data collection, and data analysis. Our evaluation of IRS’s estimates of the prevalence and types of noncompliance was limited because IRS was unable to provide us with a complete data set or documentation that supports the final study results. As a result, we could not assess the usefulness of the study in relation to compliance among the broader population of 401(k) plans because we did not have data or other documentation that supported IRS estimates on specific types of noncompliance. 
Without this information, we could not make the appropriate sample weight adjustments to assess IRS estimates of the overall prevalence of noncompliance among all plans in the study or within specific plan size categories. Furthermore, the lack of a complete data set or comprehensive documentation supporting the published results limited our ability to reliably assess revisions in IRS estimates of the proportion of plans that had one or more compliance errors and the frequency with which specific types of errors occurred among the plans in the study. Additionally, the inability of IRS to provide closed case file information on audited plans limited our ability to assess the reliability of the data collected for analysis. In light of these limitations, we elected to assess to what extent the IRS study provides accurate estimates on 401(k) plan noncompliance by evaluating, in relation to published guidance for conducting research, how the study was conducted. We evaluated the IRS study using a series of brochures on surveys published by the American Statistical Association (ASA) and published GAO guidance on methodology and program evaluation. These published guidelines address important elements of research studies, such as sampling and questionnaire design. Our evaluation examined and compared the sampling methodology, the questionnaire development, the data collection process, and the data analysis on which the IRS report is based with ASA and GAO guidelines on each of these elements. To examine and compare elements of the IRS study with published guidance, we collected and reviewed relevant documents such as draft study reports, the questionnaire check sheet, and other working papers made available by IRS. We also received and examined many electronic data files pertaining to the 401(k) study. Additionally, we interviewed IRS analysts who were responsible for conducting the data analysis and the IRS statistician who assisted with selecting the stratified random sample. To describe IRS’s current efforts in planning and conducting compliance research on private pension plans, we reviewed draft work plans for IRS’s ongoing and future compliance research initiatives, including plans for an upcoming 401(k) plan compliance study; discussed lessons learned from the prior 401(k) study with IRS officials and analysts involved in compliance research initiatives; interviewed IRS officials in the Division of Tax Exempt and Government Entities, Employee Plans office to discuss the role of compliance research in IRS efforts to promote compliance among plan sponsors; reviewed official IRS guidance on agency procedures for identifying and remedying compliance violations; and discussed how compliance research initiatives can inform IRS’ voluntary compliance activities with IRS officials. We assessed IRS work plans and our discussions with IRS to identify and summarize the agency’s overall plans for ongoing and future compliance research, including the role of compliance studies. As part of our work, we identified lessons learned from the previous 401(k) study that IRS has adopted in its plans to design and conduct compliance research initiatives. The IRS 401(k) study publication provides information on changes in relevant laws that occurred while the study was performed. The summary information that IRS includes in its published study report describes changes in relevant pension laws since the study was conducted and is pertinent up to the time at which the profile was posted on IRS Web site. 
This appendix summarizes and describes key changes in laws that apply to tax-qualified 401(k) plans that have occurred since the release of the published 401(k) study report, mostly changes arising from the Economic Growth and Tax Relief Reconciliation Act (EGTRRA) as they relate to violation categories identified in the study. Our summary of recent changes in applicable laws is grouped by the compliance categories that IRS used to present its 401(k) study results. Neither the IRS 401(k) study report nor this appendix should be regarded as a comprehensive explanation of the laws that relate to tax-qualified pension plans in general and tax-qualified 401(k) plans in particular. While this appendix provides context where necessary to understand how EGTRRA provisions change certain pension laws, it does not provide a history or complete description of the purpose and nature of the Internal Revenue Code (IRC) requirements that EGTRRA changes. The published 401(k) study report provides more in-depth description of the purpose and requirements of the specific IRC provisions that IRS examined as part of its 401(k) study. A. Distributions Eligible For Rollover Treatment EGTRRA section 636(b) mandates that any distribution made upon hardship of an employee will not be an eligible rollover distribution. Thus, no assets distributed to an employee on account of his or her hardship will be eligible for direct rollover to another plan or individual retirement account (IRA). Such distributions will therefore be subject to the withholding rules applicable to distributions that are not eligible rollover distributions. Section 401(a)(31) of the Code provides that participants receiving an eligible rollover distribution must have the option to have the distribution transferred in the form of a direct rollover to another eligible retirement plan. If an eligible rollover distribution is not transferred by a direct rollover, the distribution is subject to withholding at a 20% rate, under section 3405(c)(1). Regulations under section 401(k) currently provide that elective (pre-tax) deferrals under a 401(k) plan can, if the plan provides, be distributed (without earnings) in the event of the financial hardship of the employee. The regulations provide that a distribution is made on account of hardship only if the distribution is made on account of an immediate and heavy financial need of the employee and the distribution is necessary to satisfy such financial need. Under pre-EGTRRA law, hardship withdrawals of elective deferral amounts under 401(k) plans were not eligible for rollover, while other types of hardship distributions (e.g., employer matching contributions distributed on account of hardship) were eligible rollover distributions. Different withholding rules apply to eligible rollover distributions than to distributions that are not eligible rollover distributions. EGTTRA section 641(c) also provides for an expanded explanation to recipients of rollover distributions. This provision requires that the rollover notice include a description of the provisions under which distributions from the eligible retirement plan receiving the distribution may be subject to restrictions and tax consequences which are different from those applicable to the plan making the distribution. Effective for distributions after December 31, 2001, EGTRRA section 641 allows rollovers among 401(k) plans, 403(b) plans, or governmental section 457 plans. 
EGTRRA section 657 mandates that unless the participant elects otherwise, any eligible rollover distribution in excess of $1,000 that may be distributed without the participant’s consent be automatically rolled over to a designated IRA. This change applies to distributions that occur after the Department of Labor issues final regulations implementing section 657. Section 642(a) of EGTRRA provides that an eligible rollover distribution from an IRA may be rolled over to another IRA or an eligible retirement plan as long as the amount is transferred no later than 60 days after the date the distribution was received. Section 642(b)(3) of EGTRRA provides that a distribution from a Savings Incentive Match Plan for Employees (SIMPLE) IRA may also be rolled over to another SIMPLE IRA. Under pre-EGTRRA law, elective (pre-tax) deferrals may not be distributed earlier than one of the events described in section 401(k)(2)(B) or section 401(k)(10). EGTRRA modifies these rules as they apply in the case of a corporate transaction, such as an asset or stock sale, that results in employees of the seller going to work for the buyer. Pre-EGTRRA law permits distribution in the case of certain types of transactions but not others. EGTRRA section 646 amends section 401(k)(2)(B) by replacing “separation from service” with the more lenient standard of “severance from employment.” This generally will permit distributions to employees who move from seller to buyer in connection with a corporate transaction, unless corresponding assets of the seller’s plan move as well. Section 646 of EGTRRA also makes conforming changes to section 401(k)(10). The amendments made by section 646 apply to distributions made after December 31, 2001. B. Nondiscrimination (ADP/ACP) EGTRRA section 666 repeals the multiple use test effective for plan years beginning after December 31, 2001. The multiple use test occurs where a 401(k) plan is subject to both the ADP and ACP tests and both tests can only be satisfied using the alternative limitations of those tests described under section 401(k)(3) and section 401(m)(2) (the 2 percentage point limit or the 200 percent limit). The purpose of the multiple use test is to prevent the multiple use of the more generous alternatives for meeting both the ACP and the ADP test when certain employees are eligible under both a section 401(k) plan and a section 401(m) plan. EGTRRA section 612 repeals the rule prohibiting loans to sole proprietors, partners who own more than 10% of the partnership, and shareholders of S corporations who own more than 5% of the S corporation effective for years beginning after December 31, 2001. D. Contingent Benefits – No change. EGTRRA section 636(a) directs that the regulations under section 401(k) be revised to permit a participant who receives a hardship distribution to resume elective (pre-tax) deferrals 6 months, instead of 12 months, after receiving a hardship distribution. This change is effective for years beginning after December 31, 2001. EGTRRA section 613 generally simplifies several elements of top-heavy testing and their application. First, it simplifies the definition of key employee, so that the term includes only individuals who during the year in question or the immediately preceding year were officers earning over $130,000 (adjusted for cost of living increases), 5% owners, or 1% owners earning more than $150,000. 
Second, it specifies that in determining whether or not a plan is top heavy, only distributions made within the preceding 1 year, rather than the preceding 5 years (except for in-service distributions, for which the 5-year rule will continue to apply) must be added. Third, it requires that matching contributions to a top-heavy plan be counted in determining whether nonkey employees have received the required minimum benefit. Last, it states that certain plans meeting safe- harbor requirements applicable to the nondiscrimination rules regarding 401(k) and matching contributions will automatically be deemed to not be top heavy, and frozen defined benefit plans (with respect to which there are no current benefit accruals for current or former key employees) will be exempt from certain of the minimum accrual requirements. The new rules are effective for years beginning after December 31, 2001. G. Coverage – EGTRRA section 664 directs that the regulations under Code section 410(b) be revised to allow a 401(k) plan to treat as excludable employees the employees of a Code Section 501(c)(3) entity who are eligible for a Code section 403(b) arrangement provided that: (1) no employee of the 501(c)(3) entity is eligible to participate in a 401(k) plan; and (2) at least 95 percent of the employees who are not employees of the 501(c)(3) entity are eligible to participate in the 401(k) plan. This change is effective January 1, 1997. Under EGTRRA section 611, the $35,000 limit on combined employer and employee contributions for defined contribution plans is raised to $40,000 (indexed for the cost of living in $1,000 increments). The 25% of compensation limit is increased to 100% of compensation. Therefore, the new 415(c) limit will be the lesser of (1) 100% of compensation or (2) $40,000 (adjusted for cost of living increases). This provision is effective for years beginning after December 31, 2001. Catch-up contributions are not taken into account in applying the $40,000 limit. Section 611(d) of EGTRRA also increases the limit on elective contribution under Code section 402(g) from $10,500 in 2001 to $11,000 in 2002; $12,000 in 2003; $13,000 in 2004; $14,000 in 2005; and $15,000 in 2006. The limit is adjusted for increases in the cost of living for years after 2006 in $500 increments. Section 631 of EGTRRA amends Code section 414 and provides that the otherwise applicable dollar limit on elective deferrals under a 401(k) plan, 403(b) plan, SEP, or SIMPLE plan, or deferrals under a governmental 457 plan will be increased for individuals who have attained age 50 before the end of the plan year, and who have otherwise already made the maximum permitted deferral under the Code or the plan or arrangement. The additional or “catch-up” contribution amount under a 401(k) plan, 403(b) plan or 457 plan is $1,000 for 2002, $2,000 for 2003, $3,000 for 2004, $4,000 for 2005, and $5,000 for 2006 and thereafter. The limit is adjusted for cost of living increases for years after 2006 in $500 increments. These additional contributions are for individuals who are age 50 and or older and such contributions will not violate the nondiscrimination, top-heavy or 415 requirements. Under Code section 401(a)(17), for years beginning after December 31, 2001, the amount of compensation that may be taken into account under a 401(k) plan is also increased from $150,000 (adjusted for cost of living increases to $170,000 in 2001) to $200,000. This limit is adjusted for cost of living increases in $5,000 increments. I. 
Nondiscrimination under Section 401(a)(4) – No change. Under EGTRA section 633, employer matching contributions must vest at least as rapidly as under one of two new vesting schedules. These schedules provide for faster vesting than the current schedules. The first schedule requires 100% vesting after three years of service and the second requires 20% vesting after two years of service with an additional 20% vesting for each year of service, reaching 100% vesting after six years of service. This provision is effective for contributions for plan years beginning after December 31, 2001, with a delayed effective date for plans maintained pursuant to collective bargaining agreements. K. Prohibited Transactions – No change. EGTRRA section 655 modifies the effective date of the rule excluding certain elective deferrals (and earnings thereon) from the definition of eligible individual account plan by providing that the rule does not apply to any elective deferral which is invested in qualifying employer securities, qualifying employer real property, or both acquired before January 1, 1999. M. Partnership Issues – No change. N. Participation – No change. O. Miscellaneous Limits – Under the Taxpayer Relief Act of 1997, the former 15% tax on excess distributions and the 15% estate tax on excess retirement accumulations from qualified retirement plans, tax-sheltered annuities, and individual retirement arrangements is repealed. P. Miscellaneous Violations – No change. In addition to those named above, Jeremy Citro, Gene Kuehneman, Ed Nannenhorn, Corinna Nicolaou, and Roger Thomas made key contributions to this report.
The Internal Revenue Service (IRS) studied 401(k) plan compliance with Internal Revenue Code requirements for tax-qualified plans. GAO found that IRS's estimates of noncompliance were inaccurate. The study, which audited a sample of 401(k) plans, did not provide information on the severity of the compliance violations identified and did not determine the number of plan participants or the amount of assets associated with noncompliance errors. Only 27 of the 73 study questions identified as compliance indicators conclusively demonstrated whether a plan was compliant or not. Consequently, the 44 percent reported to have one or more instances of noncompliance is at best an upper limit on the extent of noncompliance found. IRS has chosen specific types of private pension plans to study in a manner similar to the one conducted on 401(k) pension plans. The data that IRS collects will be analyzed to determine the prevalence and types of noncompliance among the plans studied.
Currently authorized through federal fiscal year 2002, the TANF block grant represents an entitlement to states of $16.5 billion annually. Federal funding under the TANF grant is fixed, and states are required to maintain a significant portion of their own historic financial commitment to their welfare programs as a condition of receiving their full TANF grant—referred to as their maintenance of effort requirement (MOE).These two funding streams—federal TANF and state general funds for MOE—represent the bulk of the resources available to states as they design, finance, and implement their new low-income family assistance programs. (See figure 1.) Under TANF, states have the flexibility to design their own programs and strategies for promoting work over welfare and self-sufficiency over dependency. At the same time, states must meet federal requirements that emphasize the importance of work for those receiving assistance. To avoid federal financial penalties, in fiscal year 1997 states must ensure that 25 percent of their TANF families, rising to 50 percent in 2002, are engaged in work activities. In addition, the law prohibits the use of TANF funds to provide assistance for families with adults who have received assistance for more than 5 years. In addition to giving states more responsibility and flexibility in the design of welfare programs, TANF shifts the fiscal risk to states, thus highlighting the importance of fiscal planning, especially contingency budgeting. In the past, any increased costs were shared by the federal government and the states. Under TANF, however, if costs rise, states face most of the burden of financing the unexpected costs. States must also handle this responsibility in the context of any limitations—including legislative restrictions, constitutional balanced budget mandates, or conditions imposed by the bond market—on their ability to increase spending, especially in times of fiscal stress. States have various options and resources to help them handle this new fiscal responsibility. PRWORA provides states with the ability to save an unlimited amount of their TANF block grant funds for use in later years. These resources must be left in the U.S. Treasury until they are needed. States may also respond to the fiscal risks implicit in the new block grant environment by increasing the levels of their state “rainy day funds,” or by establishing dedicated reserves, consisting of state funds, for their welfare programs. PRWORA also creates two safety-net mechanisms for states to access additional federal resources in the event of a recession or other emergency—the $2 billion Contingency Fund for State Welfare Programs (Contingency Fund) and a $1.7 billion Federal Loan Fund for State Welfare Programs (Loan Fund). To address the objectives for this report, we collected fiscal information on all 50 states from the U.S. Department of Health and Human Services (HHS). In addition, we interviewed officials from state budget offices and reviewed state general fund budgets and low-income family assistance budgets. We selected the seven states examined in GAO’s parallel report, Welfare Reform: States are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109)—California, Connecticut, Louisiana, Maryland, Oregon, Texas, and Wisconsin. For this report, we added Colorado, Michigan, and New York to enrich the discussion of states’ efforts to budget for contingencies. 
These 10 states represent 53 percent of total program dollars and administer about half the nation's caseload. We did not independently verify the reported levels of state spending or whether reported federal or state spending met the qualifications set forth in the act. We conducted our fieldwork from April 1997 through February 1998 in accordance with generally accepted government auditing standards. (For more detail on our scope and methodology, see appendix I.)

The act made sweeping changes to the nation's cash assistance program for needy families with children and eliminated a family's entitlement to federal assistance. These reforms gave states flexibility to design their own programs and strategies for achieving program goals, including how welfare recipients would move into the workforce. The act also changed the way in which federal funds flow to states for welfare programs. Under the old system of financing, matching grants provided states with resources to implement federal welfare programs. The federal match was largely open-ended so that if a state experienced caseload and related cost increases, federal funds would increase with state funds to cover expenditures for the entire caseload. This open-ended federal commitment meant that financing for every dollar spent on these programs was shared between the federal government and the states, thereby limiting the states' exposure to escalating costs. In contrast, under the TANF block grant, the federal government provides a fixed amount of funds regardless of any changes in state spending or the number of people the programs serve. During periods when caseloads are decreasing, federal funds per recipient will be higher under TANF than under the old program. Conversely, if caseloads and costs increase, federal funds per recipient would be lower. A state would then be presented with several options, including using resources previously saved for contingencies, reallocating budgetary resources to maintain program stability, reducing program benefits and/or services to ensure that previously allocated resources go further, or raising additional revenues through taxes or fees.

PRWORA allows states more choices concerning the mix of services they can offer and the people they can serve, and these choices are likely to be affected by differences in the rules regarding the use of TANF and MOE funds. For example, state MOE funds may be used with more flexibility than TANF funds. TANF grant funds may be used for cash assistance, child care assistance, work placement programs, subsidized work programs, and other efforts not specifically prohibited by PRWORA. MOE funds can be used not only for these purposes but also to provide benefits to some recipients excluded from TANF assistance. States make these budgetary decisions as part of their regular appropriations process. Since any unspent TANF funds remain available to states without fiscal year limitation, a decision to dedicate a portion of these funds for a future contingency represents one aspect of a state's program budgeting under welfare reform.

States have modified their policies to require and encourage welfare recipients and potential recipients to adopt behaviors that facilitate becoming more self-sufficient. For example, in our recently issued report on state program restructuring, we found that the proportion of recipients assigned to job placement activities—as opposed to education or training activities—was substantially higher in 1997 than in 1994.
Furthermore, as states seek to expand the number of adults participating in work activities, they have generally expanded the roles of welfare workers to better support the work focus of their programs. These workers' new responsibilities vary but include such tasks as motivating clients to seek work, exploring the potential for welfare diversions, and collecting more information about applicants and recipients to determine what they need to facilitate self-sufficiency. States are also expanding their programs to help families address barriers to employment. For example, states are using a range of approaches to help recipients obtain reliable transportation, such as providing funding for rural transportation systems, enlisting volunteers to provide transportation for recipients, and providing funds for vehicle repairs.

Substantial declines in welfare caseloads and increases in the number of welfare recipients finding jobs provide signs of early progress. For example, caseloads have dropped on average about 20 percent since 1996, when PRWORA was enacted, and by one-third since 1994. However, questions remain about what will happen over the long term to families that no longer rely on cash assistance but continue to need other kinds of assistance to maintain their employment and about how much these services will cost. Furthermore, as more families leave welfare programs, states may face greater challenges in serving increasing proportions of long-term recipients with multiple barriers to employment. This adds to the uncertainty surrounding the resource needs of low-income family assistance programs in the future.

Even if TANF caseloads continue to decrease over time, they may become more volatile. Such caseload volatility may put the states at greater risk for budgetary stress than did previous matching grant programs. We noted in our recent report on states' efforts to restructure their welfare programs that although states have had a great deal of success implementing welfare reform programs in a strong economy, little is known about how a poor economy will affect their programs. It is possible that caseloads may prove more volatile under the new system than under the old. This is because the greater emphasis on work implies a tighter link to the state of the job market and hence to the economy. Although research on pre-reform caseloads found varying degrees of correlation between the economy (as measured by unemployment rates) and the AFDC single-parent family caseload, it generally found a strong correlation between the economy and changes in the smaller AFDC-unemployed parent (AFDC-UP) family caseload. This difference has been attributed to the AFDC-UP caseload's stronger connection to the labor market. Since the new TANF grant emphasizes work-related activities, TANF caseloads may act more like the AFDC-UP caseload than the single-parent AFDC caseload and hence be more closely aligned with the economy. Alternatively, some analysts suggest that future caseloads under TANF may not be as susceptible to economic downturns because states are beginning to place much greater emphasis on strengthening labor force ties. For example, many states have substantially increased the levels of resources allocated to job training, child care, and transitional medical care. These efforts may make the former welfare population less susceptible to being the first to be laid off in the event of an economic downturn than the AFDC-UP population was.
Those who remain on the rolls may be a smaller as well as more stable population. These differing perspectives highlight the uncertainties that states will face as they implement and finance their new welfare programs and thus highlight the importance of budgeting for contingencies. Budgetary stress caused by caseload volatility may be compounded by the limitations placed on most states by constitutional or statutory requirements to balance their general fund budgets. For example, if revenues fall during an economic downturn, a state’s enacted budget can fall into deficit. State balanced budget requirements often motivate states both to reallocate resources within their budgets and cut program spending during recessions. The need to cut spending can be alleviated if a state has accumulated surplus balances in “rainy day” funds. These surpluses may be used to cover a given year’s deficit. However, unless there are reserves specifically earmarked for low-income family assistance programs, these programs will have to compete for “rainy day” fund resources with all other programs in a state’s general fund in times of budgetary stress. These factors together—the likelihood of increased volatility and the limited budgetary flexibility available during an economic downturn—point to the importance of state contingency budgeting. A combination of the decline in caseload levels, the higher federal grant levels, and the MOE requirement for states’ contributions to their programs means that most states have more budgetary resources available for their low-income family assistance programs since enactment of welfare reform than under prior law. In many states, caseloads began to decline even before the enactment of PRWORA. Since enactment, this trend has continued in all states except Hawaii and in many cases the trend has accelerated. (See appendix II for a more detailed discussion and caseload data.) The amount of each state’s block grant was based on amounts received by the state in 1994 and 1995, years when caseloads and spending were at historic highs. As a result, we calculated that 45 states were eligible to receive more in federal fiscal year 1997 for the TANF block grant than they received in 1996 under the previous welfare programs. (See appendix II for a more detailed discussion of the assumptions used in our estimates and our analytical techniques.) We estimated that if all states had drawn their entire 1997 TANF grant, the states would have received about $1.4 billion more under TANF than they received under previous welfare programs in 1996, when caseloads were much higher. It is important to note, however, that there is a great deal of disparity among the states in the levels of additional resources. These differences ranged from 70 percent more federal resources for Indiana to 7 percent less for Pennsylvania—with the median increase about 9 percent for all 50 states. Furthermore, states are required to maintain a significant portion of their own historic financial commitment to these programs. Like the TANF grant, this minimum MOE is fixed and does not depend on the number of people served or the types of services a state chooses to provide. The MOE requirement establishes a minimum, or floor, for state spending, but there is no federal ceiling on how much a state can spend. States face severe fiscal penalties if they do not meet their MOE requirement. 
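The state-by-state comparison of federal resources described above can be sketched in a few lines. The dollar figures below are invented for illustration and are not the actual state amounts; the sketch simply shows the kind of calculation involved—comparing each state's 1996 federal spending under the consolidated programs with its fixed TANF grant and then summarizing the changes.

```python
# Minimal sketch of the federal-resource comparison (invented figures only).
from statistics import median

# Federal spending under the consolidated programs in 1996 vs. the fixed TANF
# block grant for federal fiscal year 1997, in millions of dollars.
states = {
    "State A": {"federal_1996": 200.0, "tanf_grant_1997": 230.0},
    "State B": {"federal_1996": 500.0, "tanf_grant_1997": 540.0},
    "State C": {"federal_1996": 150.0, "tanf_grant_1997": 140.0},
}

pct_change = {
    name: (s["tanf_grant_1997"] - s["federal_1996"]) / s["federal_1996"]
    for name, s in states.items()
}

gainers = [name for name, pct in pct_change.items() if pct > 0]
net_change = sum(s["tanf_grant_1997"] - s["federal_1996"] for s in states.values())

print("States with more federal resources under TANF:", gainers)
print(f"Net change across all states: ${net_change:.0f} million")
print(f"Median percentage change: {median(pct_change.values()):.0%}")
```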
The interaction between the MOE and the lower caseloads means that in 21 states, although total state spending went down, spending per recipient increased. For example, in Idaho the MOE requirement is about 28 percent lower than what the state spent in 1996; however, state spending per recipient will more than double from $870 to $1,849 per year.

Aside from the nominal changes in funding, another way of viewing resources available for welfare is to compare total federal and state resources available under the block grant with what comparable federal-state spending would have been for 1997 caseloads under prior law. Overall, we calculated that under the block grant, 46 states would have more total resources—state and federal—for their new welfare programs than they would have had under the old welfare programs, with a median increase of 22 percent, or about $4.7 billion more nationwide. This calculation represents the difference between states' post-reform total budgetary resources—TANF plus MOE—and what they would have budgeted for their 1997 caseloads if they were still using the pre-reform cost structure. These differences are largely attributable to the change in financing mechanisms: total funding under the previous program was based on caseload, whereas under TANF, funding is based on federal and state spending levels in a prior period when caseloads were higher. (See appendix II for further discussion of the assumptions and analytical techniques used in our estimates.) Again, there was great variation among the 46 states, with the estimated increase ranging from 1 percent in Alaska and Connecticut to 102 percent in Wyoming.

Additional budgetary resources for state welfare programs present states with a unique opportunity to invest more in programs that can help people find and keep their jobs and prevent them from returning to welfare while still saving some resources for a "rainy day." All 10 states we visited planned to use some of their additional resources to expand their programs. Most states recognize that achieving self-sufficiency and job placement calls for significant investment in social services and incentives. These states have generally not increased cash benefit levels; rather, they plan to spend additional resources for job placement services, child care, and other supportive services that can help welfare recipients make the transition to work. For example, Texas increased the budget for its job placement and training programs by about $100 million (or about 200 percent) in order to expand access to job placement services and enhance its "Invest in Long Term Success" initiative. This initiative (1) seeks to match employers with welfare recipients and provides recipients with targeted training opportunities to meet the needs of those employers, (2) enhances job retention services to help former welfare recipients keep their jobs, and (3) creates "local innovation grants" to support innovative welfare-to-work programs, such as micro-enterprise development funds. (See text box 1 for examples of how states are using federal and state funds to enhance their welfare programs.)
Text Box 1: Examples of Expansion and Enhancements of State Welfare Programs

Many states have used the additional resources available to expand and enhance their welfare programs by offering new services; expanding earned income disregards and transitional services for people who are working and are no longer eligible for cash assistance but still need help with child care, transportation, or continued case management; and investing in new information technologies to prevent fraud, track cases, and improve services to clients.

Texas increased spending on employment services to ensure sufficient funding to meet its federal work participation requirements and created new programs to train recipients for targeted jobs, provide innovation grants to local employment centers, and develop job retention and re-employment services.
Louisiana approved a 24 percent increase in funding for vocational education, on-the-job training, job search assistance, and transportation subsidies used to enable clients to move from welfare to work.
New York passed more than $230 million in new programs for employment training and job readiness skills, teen pregnancy prevention programs, and new computer systems. In addition, New York enhanced funding for child care services by about $100 million.
California increased funding for employment-related services by $288 million, or 122 percent, and for child care services by $147 million, or 103 percent.
Michigan increased funding for day care services by 9 percent and employment services by 24 percent. The additional funding for employment services is used primarily for transportation and other support services as a means of increasing the work participation rate among the two-parent family caseloads.
Connecticut used its additional resources to increase child care funding, create a new early childhood development program, and establish a system of safety net services for families moving off welfare.
Maryland increased spending on job training by 39 percent. Other program enhancements include one-time emergency assistance grants to welfare applicants and demonstration projects aimed at assisting welfare recipients to achieve economic independence.
Wisconsin's total program budget will increase by 42 percent. The state plans to invest over $89 million more in child care services in state fiscal year 1998 and an additional $22 million in state fiscal year 1999.

In addition, as part of an effort to "make work pay," many states have changed their policies relating to the treatment of earned income from those previously in effect under AFDC to permit recipients to keep more of their monthly cash assistance payments or retain them for longer periods once they begin working. More than two-thirds of the states have increased the amount of assets and the value of a vehicle that recipients can own and still remain eligible for cash assistance. The asset and vehicle limits in the prior AFDC program were widely considered to be too low, creating barriers to families' efforts to become more self-sufficient. As these changes allow more people to remain eligible for program benefits and for transitional benefits, total state program budgets have generally increased relative to caseload. In addition, the higher level of federal funds and lower caseloads enabled states to reduce their own funding for the program down to the required MOE level and still maintain higher total program budgets.
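The per-recipient arithmetic behind this pattern can be sketched in a few lines. The spending and caseload figures below are hypothetical, chosen only to illustrate the mechanics rather than to represent any particular state; they show how a state can cut its own spending to the 80 percent MOE floor and still spend more per recipient once its caseload falls.

```python
# Hypothetical sketch: cutting state spending to the MOE floor while
# caseloads decline can still raise state spending per recipient.

state_spending_1996 = 100_000_000        # state funds spent in 1996 (invented)
moe_floor = 0.80 * state_spending_1996   # minimum 80 percent maintenance of effort

caseload_1996 = 90_000                   # recipients in 1996 (invented)
caseload_1997 = 60_000                   # post-reform caseload decline (invented)

per_recipient_1996 = state_spending_1996 / caseload_1996
per_recipient_1997 = moe_floor / caseload_1997   # state spends only the floor

print(f"State spending falls by {1 - moe_floor / state_spending_1996:.0%}")
print(f"Spending per recipient rises from ${per_recipient_1996:,.0f} "
      f"to ${per_recipient_1997:,.0f}")
```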
TANF permits states to achieve some budgetary savings, but because caseloads are declining, the MOE requirement translates into a higher level of spending per recipient in many states, limiting the savings a state can realize while its caseload falls. (See text box 2 for additional examples of how states have achieved budgetary savings in this manner.) In California, budget officials said that they were frustrated by the MOE requirement because it limited their budgetary flexibility. Given the fixed nature of the MOE levels, these officials noted that the state will no longer realize any budgetary savings from a declining caseload because it must spend the same amount of state funds on its welfare program as it did in the previous year even if its caseload is lower.

Text Box 2: Examples of States' Use of Federal TANF Funds to Achieve Budgetary Savings

The combination of additional budgetary resources and lower caseloads has permitted states to achieve state budgetary savings, which were reallocated to other state fiscal priorities. These states were still able to meet their MOE requirement under TANF, and many were also able to provide a higher level of state funds per case. A number of states show this substitution in their budget documents. Even though many state officials indicated that state funds withdrawn from their welfare programs were used in other health and human services programs, any state funds that were reallocated became part of the larger general fund and became available for any state funding priority.

Oregon reduced the state's share of its total welfare program budget by nearly $55.2 million. These state funds, no longer needed to meet the MOE requirement, were reallocated to help finance other state priorities. However, our analysis shows that Oregon must spend about 27 percent more per recipient than it spent under prior law in order to meet its MOE.
Michigan reduced its contribution to its welfare program by about $42 million but must increase the level of spending per recipient, as required under the MOE, by about 22 percent.
Texas freed up $114.9 million in state funds in its welfare program to maximize the use of federal funds. Nevertheless, the MOE requirement serves to increase, by about 6 percent, the amount of state funds expended per recipient.
New York took advantage of TANF's financing changes to provide over $344 million in fiscal relief to the state and localities by reducing the total state and local contributions to the program's financing by 16 percent.
California reduced its own contribution to its welfare program by about $357 million, compared to past AFDC cost sharing ratios, but still met its minimum MOE requirement, which is about 7 percent lower than what it spent per recipient under AFDC.
Colorado reduced general fund contributions to its welfare program by $8.3 million and the counties' contributions by $3.6 million for state fiscal year 1998. The state used the displaced general funds to increase funding for other state programs and required the counties to deposit their portion of the savings in local social services reserves.
As allowed under PRWORA, 11 states reported that they transferred funds from TANF to either the Child Care and Development Block Grant (CCDBG) or the title XX Social Services Block Grant (SSBG). For example, both Connecticut and Wisconsin shifted TANF funds to their SSBG programs: $24 million and $32 million, respectively. These states reduced, by an equal share, the level of state funding formerly dedicated to SSBG.
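A minimal sketch of the substitution arithmetic behind these examples follows. The total budget figure is invented, and the before-and-after shares are chosen only to echo the Oregon example described next; the sketch assumes, for simplicity, that the total program budget is held constant while the state share drops to the MOE floor.

```python
# Hypothetical sketch of the substitution arithmetic: a state withdraws its
# own funds down to the MOE floor, federal TANF dollars cover the rest, and
# the federal share of the program rises.

total_program_budget = 500_000_000            # invented total budget

# Before: roughly even cost sharing under the old matching grant.
state_share_before = 0.44 * total_program_budget
federal_share_before = total_program_budget - state_share_before

# After: the state spends only its MOE floor; federal TANF funds fill the gap.
state_moe_floor = 0.32 * total_program_budget
federal_share_after = total_program_budget - state_moe_floor

freed_state_funds = state_share_before - state_moe_floor
print(f"State funds freed for other priorities: ${freed_state_funds:,.0f}")
print(f"Federal share rises from {federal_share_before / total_program_budget:.0%} "
      f"to {federal_share_after / total_program_budget:.0%}")
```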
Oregon provides an example of a state that used TANF funds to free up a portion of state funds for other state priorities. State officials told us that during budget deliberations for the 1998-1999 biennium, one of the Governor's proposals was a major overhaul of the state's school financing system. Although the state's economy was sound and state revenues exceeded the forecast, the Governor informed state agencies responsible for program budgets that some state resources would have to be reallocated to the school financing initiative. According to agency officials responsible for Oregon's welfare programs, many state programs were affected by this reallocation. Since their TANF grant was higher than what they had received in the previous biennium, their MOE requirement was lower than what had been budgeted in the previous biennium, and their caseloads had declined by over 50 percent since 1994, they were able to reallocate nearly $55.2 million in state funds from their welfare program and still meet their MOE requirement. The state general funds shifted out of the welfare program were reallocated to other programs within the Human Services department to cover budgetary needs for planned program expansions, such as in the Oregon Health Plan, and to cover other state general fund shortfalls resulting from the Governor's overall budget priorities. As a result, the federal share of the state's TANF program expenses now totals 68 percent, compared to the previous federal share of about 56 percent.

In contrast, Maryland took a different approach by permitting the state's Department of Human Resources to reinvest the state's budgetary savings that result from caseload reductions. Ten percent of the total savings achieved in the state each year may be allocated to demonstration projects to test innovative approaches to reduce welfare dependency. Any remaining savings may be distributed by the state, with about half returning, as a performance bonus, to the local social service departments that achieved the caseload reductions. These "reallocated savings" may be used for, among other things, child care, welfare avoidance grants, drug treatment for targeted recipients, transportation emergency funds, or any other direct service to applicants or recipients that is considered appropriate to accomplish the program's goals.

There are several differing perspectives from which to assess states' fiscal commitment to these programs. Although states have been able to reduce their commitments of state funds below previous levels, they nevertheless must still maintain spending at higher levels than they would have under the matching grant programs. In fact, given the caseload decline, these lower levels of state spending are providing more per recipient in many states. Some states have correspondingly argued that the MOE requirement prevents them from achieving even greater savings and from reaping the budgetary rewards traditionally associated with a declining caseload. On the other hand, some have raised concerns that reductions in overall state spending could limit welfare reform's potential to provide the resources necessary to move people from welfare to work. In most of the states we visited, decisions on how to allocate the additional budgetary resources available for low-income family assistance programs were made in a context of strong state economies, and most forecasts expected these trends to continue in the short term.
Given the strength of their economies, most states we visited did not see an immediate need to prepare their welfare programs for a recession. Based on past experience, some state officials said that if the economy worsened and states' revenues fell, the budgetary impact would be felt in all state programs—including welfare. Nine of the 10 states we visited have established general fund "rainy day" funds to be used for downturns in state economies and budget shortfalls, but only four of the nine have significant balances.

Some state officials believe that sound fiscal planning should include some type of dedicated reserve for contingencies and other future welfare program needs. These officials said that a future downturn could reduce funds available for benefits at a time when they are most needed. This could undermine welfare reform by reducing supportive services crucial to the success of a welfare-to-work strategy. Four of the states we visited had enacted budgets that established dedicated reserve funds, although the amounts saved were small relative to total program budgets. Three of the four states budgeted some federal TANF funds to a special program-specific reserve account, and one state, Maryland, set aside state general funds for contingencies. (See table 1.)

States that established reserves cited the possibility of future economic downturns and other factors in their decision-making. Officials in Maryland expected that at some point in the future—as has happened in the past—an economic downturn will bring higher caseloads and higher program costs. Using state general funds, Maryland established a $15.7 million reserve fund dedicated to low-income family assistance programs in part to address concerns about assuring programmatic stability during such a period of fiscal stress. In another example, Colorado allocated $5.9 million of its federal TANF grant to a long-term contingency reserve in the event it experiences recession-driven caseload increases in the future.

In contrast, officials in other states felt that sound fiscal planning should focus on investing maximum resources now in a welfare-to-work strategy. For example, officials in Oregon said that caseload levels have not fluctuated with the health of Oregon's economy (as measured by the unemployment rate), and they do not expect this to change. They believe caseloads will continue to decline if sufficient funds are invested now in appropriate services to achieve recipients' long-term self-sufficiency. These officials attributed the state's sizeable caseload declines, from a reported 117,656 recipients in 1993 to 56,299 in 1997, to full investment in the program and the new emphasis on work. Similarly, Michigan officials stated that past experience with caseload changes during recessions may not be relevant in a post-reform environment. These officials expect that while caseload levels have not become completely independent of the economy, they will eventually stabilize and then fluctuate with the economy around a new, lower core-caseload level. They believe this level will be so far below the historic high that the state should not have difficulty financing the costs of any future caseload fluctuations.

The National Association of State Budget Officers (NASBO), the National Governors' Association (NGA), and some state officials have suggested that proposed federal rules may actually discourage states from establishing dedicated state reserves composed of general funds.
For example, in Michigan, state budget officials considered establishing a reserve with state general funds until the state learned that reserved funds would not count toward meeting the state's TANF MOE requirement for the year in which they were reserved. Although Maryland did establish a state reserve, state officials there raised similar concerns, and there are currently no plans to add more state funds to the reserve.

The 1996 reforms of national welfare policy focused considerable attention on the federal government's role in welfare. While the resulting legislation devolves much programmatic and financial responsibility to the states, a significant federal role remains in providing a substantial share of the funding for these programs, setting national program objectives, establishing reporting and accountability criteria, and ensuring a safety net. Fiscal planning responsibilities were devolved to the states. The states were granted the ability to save federal funds without fiscal year limitation—in other words, to plan for future contingencies. The act also provides two additional sources of federal funds—the Contingency Fund and Loan Fund—to be available if economic conditions affect caseloads and increase the fiscal burden on states. The Contingency Fund provides states with a limited amount of matching funds, much like under AFDC, and requires states to increase their own spending in order to receive federal matching funds. The Loan Fund allows states to borrow a limited amount as well, but they must repay this loan within 3 years at a rate equal to the yield on a similar Treasury security. However, states have registered concerns about the design of these federal contingency mechanisms.

According to financial data reported by the states to HHS, many states had not spent all of their fiscal year 1997 TANF block grants by the end of the federal fiscal year, and some left considerable balances at the Treasury. Thirty-one states carried over a total of more than $1.2 billion. While these resources can certainly be used in the event of an economic downturn, the presence of this apparent fiscal cushion may reflect the transitional nature of the first year of the grant rather than explicit state savings decisions. While we found that some states left a portion of their TANF grants in reserve at the Treasury, we generally did not find a clear relationship between the unspent TANF balances and states' contingency plans.

During this transitional period, states generally have been unable to forecast caseload levels with any degree of accuracy. In all states but one, caseloads continued to decline, often at rates far faster than expected. (See appendix II.) State program budgets are prepared based on the projected caseload levels. Declines that are greater than expected have resulted in large unspent balances. Furthermore, the timing of the states' draws on the TANF funds, and thus the levels of unspent resources, depend on when the states submitted their state plans, enacted their laws, and implemented their reforms. In addition, a Treasury policy statement issued in June 1997 affected the timing of states' TANF draws. It requires that, for each allocation of federal funds a state draws down, the state spend a proportional share of its own MOE funds. In this policy statement, Treasury applies principles set forth in the Cash Management Improvement Act of 1990 (CMIA) to the TANF block grant.
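A short sketch of this proportional draw-down idea follows. It is a simplified reading of the policy rather than the Treasury formula itself, and the grant, MOE, and draw amounts are invented for illustration.

```python
# Hypothetical sketch of a proportional draw-down rule: each draw of federal
# TANF funds is accompanied by a proportional share of state MOE spending.

annual_tanf_grant = 300_000_000       # invented federal grant
annual_moe_requirement = 180_000_000  # invented state MOE requirement

def required_state_spending(federal_draw):
    """State MOE spending that keeps the two funding streams in proportion
    for a federal draw of the given size (simplified interpretation)."""
    draw_fraction = federal_draw / annual_tanf_grant
    return draw_fraction * annual_moe_requirement

federal_draw = 75_000_000  # one quarter's worth of the federal grant
print(f"A ${federal_draw:,.0f} TANF draw would be accompanied by about "
      f"${required_state_spending(federal_draw):,.0f} in state MOE spending.")
```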
CMIA settled a long-standing dispute between the federal government and the states over the disbursement of funds for federal programs administered by the states. CMIA helps to ensure that neither party incurs unnecessary interest costs in the course of federal grant disbursements. HHS recognized that this policy might, in some cases, limit a state's financial flexibility. It noted that because PRWORA does not specifically exempt the TANF program, CMIA principles apply whenever state MOE and federal TANF funds can be used interchangeably. HHS indicated that if a state were able to demonstrate a bona fide need to draw its TANF funds under a different schedule than it spends its state MOE funds, the Office of Management and Budget (OMB) would consider granting an exemption to the proportionate draw-down requirement.

Given these various transitional issues, current levels of unused TANF funds may not be a reflection of state decisions to save for the future and therefore may not be a reliable indicator of future balances. Also, a great deal of uncertainty exists surrounding future welfare costs. PRWORA requires states to place a growing percentage of their caseload in work-related activities over the next 5 years. As we noted in our related report on states' efforts to restructure their welfare programs, data from states that have implemented early reforms and experienced large caseload reductions indicate that many of the remaining recipients have multiple barriers to participation in work activities, such as mental health and substance abuse problems, and domestic violence. Even if the economy remains favorable, per recipient costs may grow because states will have to place more of their caseloads in work-related activities and because a greater percentage of their caseloads will need services that address the barriers to participation.

The way federal policies are implemented may play a role in influencing states' plans for future contingencies. Organizations representing states and officials in some states we visited suggested that cash management rules may reduce states' incentives to save federal TANF funds for the future. Although states may carry forward any unspent federal TANF funds without fiscal year limitations, these unspent TANF reserves must be kept at Treasury—not drawn down and kept in a state reserve. NASBO, NGA, and the National Conference of State Legislatures (NCSL) have observed that balances left at the U.S. Treasury may suggest to the Congress that grant levels are too high and that these funds remaining at Treasury are not needed by the states. Citing past experience with other federal grant programs, such as the State Legalization Impact Assistance Grants (SLIAG) program, where initial levels were reduced over time and federal requirements increased, officials at NCSL expressed a concern that unused TANF funds perceived as "excess" would become vulnerable to reallocation by the Congress to other areas of national need. These same concerns were also expressed in some of the states we visited. Consequently, some state officials suggested that these concerns might prompt states to spend rather than save a greater proportion of their TANF funds. In contrast, the Urban Institute argues that the federal application of CMIA to TANF is important in ensuring that states have some reserves for a future contingency.
Since CMIA prohibits states from spending federal TANF funds until they are needed and requires that each draw of federal funds be matched by state funds, the Institute believes that the application of CMIA to TANF has helped to ensure that some federal funds were held in reserve. This is especially important, the Institute notes, because the draft HHS regulations prohibit state funds held in reserve from being counted as MOE, effectively creating a disincentive for states to create reserves with their own funds.

Under HHS' draft regulations, a state must report how much in TANF and MOE funds it spent on a variety of activities, such as cash assistance, child care services, and work activities, but no mechanism currently exists for states to inform the Congress about their future plans for spending or saving TANF balances left at Treasury. Moreover, available data on TANF balances are generally midyear data from the perspective of states' budgets and appropriations decisions—not data used in state decision-making. In commenting on HHS' draft TANF regulations, the Center on Budget and Policy Priorities suggested that more information about state plans for saving TANF funds could aid congressional oversight of welfare reform. The Center suggested that as part of state financial reporting, HHS could give states the option to record the amount of TANF funds they plan to set aside for future contingencies, similar to accounts established in three of the states we visited. Allowing for more transparency and information regarding states' contingency budgets and the nature of the balances left in reserve in the U.S. Treasury could provide states with an opportunity to clarify their longer-term fiscal plans for the program. This, in turn, would help the Congress gain a better picture of the nature of the unspent TANF balances.

Officials in most of the states we visited indicated that they would use neither the Contingency Fund for State Welfare Programs (Contingency Fund) nor the Federal Loan Fund for State Welfare Programs (Loan Fund), even if they became eligible to do so. These state officials told us that neither the Contingency Fund nor the Loan Fund presented states with a viable option for future contingencies. Complex federal reconciliation provisions and a more stringent federal definition of qualified expenditures have led some state officials to conclude that the costs of gaining access to the Contingency Fund outweigh any benefits to the states.

To be eligible to receive federal matching funds through the Contingency Fund, a state must meet certain conditions. First, a state must qualify as "needy" under one of two triggers: (1) in the most recent 3-month period, its average unemployment rate (seasonally adjusted) must have been at least 6.5 percent and must have increased at least 10 percent from the corresponding rate in at least one of the 2 preceding years, or (2) its average monthly food stamp caseload for the most recent 3-month period must have increased at least 10 percent compared to what enrollment would have been in the corresponding 3-month period of fiscal year 1994 or 1995. Second, a state must meet a higher and more stringent level of MOE spending. None of the states included in our study had budgeted state spending at the 100 percent MOE level. Most states we visited planned to meet only the minimum MOE required by PRWORA (between 75 and 80 percent of their 1994 expenditure levels), thereby requiring a substantial increase in spending to qualify for the Contingency Fund.
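The eligibility test described above lends itself to a compact sketch. What follows is a simplified reading of the two triggers and the 100 percent MOE condition, not the statutory formula; the unemployment rates, caseload figures, and spending amounts are invented for illustration, and the narrower definition of countable Contingency Fund spending discussed next is ignored.

```python
# Simplified sketch of Contingency Fund eligibility (illustrative inputs only).

def unemployment_trigger(recent_avg, same_period_prior_years):
    """Average (seasonally adjusted) unemployment over the most recent 3 months
    is at least 6.5 percent and at least 10 percent above the corresponding
    rate in at least one of the 2 preceding years."""
    return recent_avg >= 6.5 and any(
        recent_avg >= 1.10 * prior for prior in same_period_prior_years
    )

def food_stamp_trigger(recent_avg_caseload, base_period_caseloads):
    """Average monthly food stamp caseload for the most recent 3 months is at
    least 10 percent above the corresponding 3-month period of fiscal year
    1994 or 1995 (simplified; the statute uses an adjusted base)."""
    return any(recent_avg_caseload >= 1.10 * base for base in base_period_caseloads)

def eligible_to_draw(needy, state_spending, historic_spending):
    """A state must be needy and also meet the higher, 100 percent MOE level."""
    return needy and state_spending >= historic_spending

needy = unemployment_trigger(6.8, [6.0, 6.3]) or food_stamp_trigger(95_000, [92_000, 88_000])
print("Qualifies as needy:", needy)
print("Eligible to draw:", eligible_to_draw(needy,
                                            state_spending=76_000_000,
                                            historic_spending=95_000_000))
```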
In addition to the requirement that states raise their spending levels to 100 percent of historical expenditures to gain access to the Contingency Fund, a more limited range of a state's spending can be counted toward the Contingency Fund MOE than toward the general MOE. Although states may count expenditures on separate state programs that can serve TANF-ineligible clients as part of their general MOE requirement, these same expenditures cannot be counted toward the 100 percent Contingency Fund MOE. HHS agrees that the operation of the Contingency Fund would be simplified by allowing states to count the same expenditures toward both the TANF MOE and the Contingency Fund MOE. However, both HHS and the Congressional Budget Office (CBO) note that changes that would ease access to the Contingency Fund would increase the costs of the Contingency Fund in budgetary scoring terms and could be subject to challenge under budget rules unless offsets were found.

Once a state meets these conditions, it is eligible to draw from the Contingency Fund. The state's annual draw is limited to 20 percent of its annual TANF grant. However, the state must match all draws from the Contingency Fund with additional state money as determined by its matching rate under the Medicaid program, or federal medical assistance percentage (FMAP). Moreover, there is a year-end reconciliation process that can reduce state allotments depending on the number of months during the year the state was eligible. (See text box 3 for a more detailed discussion of this point.) Lags in data availability mean states qualifying on the basis of food stamp caseload increases would not even be aware of their eligibility until some time after the need arose.

Text Box 3: The Contingency Fund's Annual Reconciliation Process

As currently structured, the reconciliation process favors states that are "needy" within a single federal fiscal year compared with those that are "needy" in months that overlap consecutive federal fiscal years. A state that is "needy" for all 12 months during a federal fiscal year would have to match all funds drawn at its applicable fiscal year FMAP rate with no adjustments for the number of months it was eligible because it was needy throughout the year. However, a state that is "needy" for 12 consecutive months that span 2 federal fiscal years (e.g., 6 months in each year) with an identical FMAP rate will see its federal match rate reduced by half because of the adjustment made for the number of months the state was needy in each year. To illustrate, the state that was needy for an entire federal fiscal year and was eligible for and had drawn $20 million of Contingency Funds would be able to retain these funds, provided the state had spent the necessary matching funds. In contrast, the state that qualified as needy for the same number of months and was eligible for the same amount from the Contingency Fund but overlapping 2 fiscal years would initially obtain $10 million for each year, reflecting its 6 months of eligibility in each year, but then the state would have to remit half of these federal funds after each year's reconciliation. This latter reduction is the result of prorating the state's grant by the number of months it was eligible for contingency funds, even though the state's initial claim for each year was already based on the number of months of eligibility.
As a result, the second state would be allowed to retain a total of $5 million of federal funds in that fiscal year and $5 million of federal funds in the next fiscal year, a total of $10 million, even though its eligibility over these 2 years was the same as that of the state receiving $20 million. In addition, the second state would have to meet the Contingency Fund MOE in both years. Furthermore, section 404 of the Adoption and Safe Families Act of 1997 (Public Law 105-89) reduces the cap on Contingency Fund spending by $40 million over 4 years. If a state drew funds in a year affected by the reduction, the amount it could retain would be reduced by its share of the annual reduction. For example, the total reduction in fiscal year 1999 is $9 million. If two states drew funds in fiscal year 1999, at the end of the year these two states' allocations would be reduced by $4.5 million each. If the states had already received their allocations, they would have to remit $4.5 million each.

Although eight states qualified as needy and could have gained access to the Contingency Fund in fiscal year 1997, according to HHS, only New Mexico and North Carolina requested and were awarded funds. Although Hawaii would have been eligible for resources from the Fund for all of federal fiscal year 1997, the state determined that it did not have enough qualifying state expenditures to meet the Fund's 100 percent MOE requirement. California was also eligible for Contingency Funds for the first 4 months of federal fiscal year 1997 (October 1, 1996, through January 31, 1997). Upon completing the reconciliation process, the state calculated that it would have to increase its own spending by almost $1.9 billion in order to receive $249 million from the Contingency Fund and declined to do so. According to HHS officials, North Carolina and New Mexico had been awarded funds on May 29, 1998. As of July 24, 1998, neither state had completed the reconciliation process, and HHS officials expect that both states will be required to remit a large share of these funds.

State officials also indicated that they are unlikely to borrow from the Loan Fund established in PRWORA. Officials in some states indicated that borrowing specifically for social welfare spending in times of fiscal stress would not receive popular support. We have previously reported on states' reluctance to participate in a similar loan program in the Unemployment Insurance (UI) Trust Fund. The UI program originally operated as a forward-funded system, with benefit levels and tax rates set so that the program could "save for a rainy day" by building reserves during periods of economic expansion in order to pay UI benefits during economic downturns. This federal-state partnership is financed through payroll taxes that are used to pay benefits, finance administrative costs, and maintain a loan account from which financially troubled states can borrow funds to pay UI benefits. By the early 1980s, as a result of severe back-to-back recessions, many states had depleted their reserves and began to rely on federal loans to sustain UI benefits. The Congress enacted several laws designed to move the system toward healthier reserve balances. These changes made it more expensive for states to borrow from the federal government.
State loan repayments increased, and states took other actions, including cutting program benefits, limiting the length of time recipients could receive benefits, and, in some cases, increasing payroll taxes—jeopardizing the program's objective of helping to stabilize the economy during recessions. Although the UI program is designed to allow states to build reserves during good economic times in order to pay benefits during downturns—and allows states to borrow from the federal government—these provisions have not always provided sufficient protection against a need for additional federal resources. For example, in 1993 the Congress passed a $4 billion supplemental appropriations bill to finance emergency unemployment benefits with federal funds because of shortfalls in state unemployment compensation trust funds and states' unwillingness to borrow federal funds or expend state general funds on UI.

With the passage of PRWORA, the nature of the partnership between states and the federal government for designing and financing welfare programs changed. Much of the downside fiscal risk has been shifted to the states by virtue of the fixed nature of the TANF block grant. States have gained important new flexibility in making decisions, and their provisions for financing their programs in the near and long term will have an important bearing on the future success of welfare reform. States currently have more resources available for these programs under TANF than would have been available under the old financing system, but future fiscal demands are uncertain. The fixed nature of the TANF block grant, the potential volatility of welfare caseloads and program spending, and the "pro-cyclical" budgetary pressures states face under their balanced budget requirements highlight the importance of both the states' own funding for contingencies and PRWORA's provisions for contingencies: the ability to save unused TANF funds for future years' use and the two safety net mechanisms, the Contingency Fund and the Loan Fund. As these provisions attest, the federal government retains a stake in states' fiscal decisions affecting the sustainability of the program during downturns.

As we noted, states see limited incentives to use the Contingency and Loan Funds, in part because the costs associated with gaining access outweigh many of the benefits that these mechanisms may offer. Although improving access might help states cope with the effects of economic slowdowns, easing these funds' requirements could prove costly and may lessen the incentives for states to fulfill their own responsibilities for fiscal planning and program financing. While the Congress may very well revisit the design of these funds as the implementation of TANF unfolds, for now the TANF balances left at Treasury constitute the principal source of federal contingency funds for the states. While many states had TANF balances at Treasury at the end of fiscal year 1997, current reporting requirements do not clearly identify states' plans for these balances. We identified a number of transitional issues that may have affected the levels of balances. Several factors unrelated to states' savings decisions have influenced the levels of these funds, including cash management practices, slow-starting programmatic spending, and caseload declines.
States we visited took different approaches to contingency budgeting for TANF, and states' practices may change as they gain experience in implementing their reformed programs under the grant. Better information on states' plans for future contingencies, including on states' unused TANF balances, could play a role in the ongoing dialogue between states and the Congress as welfare reform continues to unfold. In the new block grant environment, the federal government has an interest in encouraging state savings, but what constitutes "adequate" saving will remain a state judgment made under conditions of considerable uncertainty. Finding the right balance between saving budgetary resources for future contingencies and investing them in programs that help people make the transition from welfare to work will be one of the main challenges for states as they develop strategies to address the needs of low-income families.

We recommend that the Secretary of Health and Human Services consult with the states and explore various options to enhance information regarding states' plans for their unused TANF balances. Such information might include explicit state plans for setting aside TANF-funded reserves for the future. Allowing for more transparency regarding states' fiscal plans for TANF funds could enhance congressional oversight over the multi-year time frame of the grant and provide states with an opportunity to more explicitly consider their long-term fiscal plans for the program.

We received comments from HHS, which are reprinted in full in appendix III. In addition, portions of the report were reviewed for technical accuracy by officials in the states we visited, and their comments were incorporated as appropriate. We also asked the National Governors' Association (NGA), the National Conference of State Legislatures (NCSL), and the American Public Welfare Association (APWA) to review the report. We incorporated comments from these organizations as appropriate. HHS, NGA, NCSL, and APWA generally agreed that this report is an accurate and comprehensive portrayal of the current fiscal issues facing states as they make progress toward implementing welfare reform.

HHS, however, expressed concern that our analysis of states' additional budgetary resources did not take into account that states must now engage a significantly higher percentage of their caseload in work activities and that the costs of operating a welfare program before reforms could not be compared to the costs of operating a welfare program under TANF. Indeed, HHS, NGA, and NCSL all emphasized that under TANF, states are expected to do much more than under AFDC. Our analysis was not meant to compare the real costs of operating the AFDC program to the real costs of operating a welfare program under TANF, nor was it meant to minimize the additional responsibilities incumbent on states as they make progress implementing welfare reforms. Instead, we sought to illustrate the levels of resources that are available to finance the dramatic changes in states' welfare programs. In our recent report on state program restructuring, we described how states are moving away from a welfare system that focused on entitlement to assistance to one that emphasizes finding employment as quickly as possible and becoming more self-sufficient.
For example, we found that states were using some of their additional budgetary resources to enhance support services, such as transportation and child care, for recipients participating in work activities and for poor families who have found jobs and left the welfare rolls. We concluded that the confluence of a strong national economy that fosters employment opportunities and the availability of additional budgetary resources has created an optimal time for states to reform their welfare programs.

HHS concurred with our recommendation, but NGA and NCSL expressed concerns that it would lead to an increase in the reporting requirements already imposed on the states. Because we agree that the costs associated with collecting information should not outweigh its usefulness, we suggested that HHS and the states work together on developing a reporting option. We recognize that estimates of future caseloads would affect estimates of future unspent TANF balances and that developing accurate caseload estimates at this early stage in TANF implementation poses problems for states. However, as NCSL and NGA agreed, information on states' plans for unspent TANF balances could prove useful as the Congress executes its oversight responsibilities for the TANF program and the program's funding levels. We continue to believe that the Congress would benefit from more complete information on states' plans for future contingencies, including unspent TANF balances. As states, HHS, and other cognizant parties meet to discuss final reporting requirements under TANF, we urge them to work together to explore reporting options. This discussion can form part of the ongoing dialogue on how best to restructure governmental roles and responsibilities to achieve the goals of welfare reform.

NGA, NCSL, and APWA also underscored states' concerns about applying CMIA to TANF. In their view, CMIA limits state flexibility by restricting TANF funds from being held in state reserve accounts and by requiring that all draws of federal TANF funds be matched with state MOE funds. According to APWA, because caseloads have declined in many states and because states must still meet their MOE requirements, many states are not even drawing federal funds until the fourth quarter of the federal fiscal year. These comments reinforce the point made in our report that cash management practices may have had an impact on the level of TANF resources remaining at Treasury at the end of the federal fiscal year. On a related issue, APWA cited the Congress' recent decision to reduce the proportion of TANF funds a state can transfer to its SSBG program as effectively limiting states' flexibility in TANF draw downs, which, in APWA's view, penalized states for leaving TANF funds in the Treasury.

NGA and NCSL urged that the federal Contingency Fund be redesigned to be a more attractive option for states. Specifically, NGA recommended changing the Contingency Fund's MOE provision to conform to the TANF MOE requirement and changing the reconciliation requirement to eliminate the reduction in the state's match rate that is based on the number of months the state was eligible to access the Contingency Fund. We noted in our report that very few state budget officials perceive that the Contingency Fund will serve to help states maintain stable program financing if caseloads rise during a recession. However, HHS noted that redesigning the Contingency Fund could result in significant increases in federal costs.
The Contingency Fund was designed to balance competing objectives. On the one hand, it could not be so generous as to encourage routine or casual use, which would have led to significantly higher federal costs. On the other hand, if the Contingency Fund is overly restrictive, many states would be disinclined to use it, and it would not serve as the fiscal stabilizer it was intended to be. Balancing these competing objectives will likely challenge the Congress as well as the states as they continue to make progress in implementing their welfare programs under a variety of economic and demographic conditions.

As agreed with your office, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the Senate Subcommittee on Social Security and Family Policy, Committee on Finance, and other interested parties. We will also make copies available to others upon request. If you have any questions, please call me at (202) 512-9573.

This review was conducted in conjunction with one performed by GAO's Health, Education, and Human Services (HEHS) Division. The HEHS review studied welfare reform implementation in seven states: California, Connecticut, Louisiana, Maryland, Oregon, Texas, and Wisconsin. These states were selected because they represent a diverse range of socioeconomic characteristics, geographic locations, and experiences with state welfare initiatives. According to the U.S. Bureau of the Census and HHS estimates, the states ranged in population from about 3.2 million (Oregon) to about 31.0 million (California) in 1996; in median income for three-person families, from about $33,337 (Louisiana) to about $52,170 (Connecticut) in federal fiscal year 1997; and in overall poverty rates, from 8.5 percent (Wisconsin) to about 19.7 percent (Louisiana) in 1995. Some states, like Wisconsin and Oregon, have had reform initiatives in place for several years that include elements similar to those in PRWORA, such as time limits for welfare benefits (Wisconsin) and increased work participation requirements (Oregon and Wisconsin); others, such as Louisiana, have been operating more traditional cash assistance programs with welfare-to-work components and were only beginning more extensive reforms in fiscal year 1997.

In addition, to capture a broader picture of the fiscal and budgetary implications of welfare reform, we added three states to our review: Michigan, New York, and Colorado. Historically, Michigan's economy and budget have been highly sensitive to economic change, and in 1977 Michigan created a Budget Stabilization Fund to help stabilize the state's fiscal policy. Like California, New York and Colorado have county-administered welfare programs in which the counties share in the programs' costs. We added these states to obtain the views of local officials on the fiscal implications of welfare reform in their states. By including New York, Michigan, and Colorado, we also increased the geographic diversity of our study states and included states that, when combined with the other seven states, administer the welfare programs of about half the nation's total caseload. To meet our objectives, we interviewed state and local officials in the local low-income family assistance programs and in program and state-wide budget offices.
Specifically, we met with officials from the following organizations during our state visits: executive branch budget offices; legislative budget/finance committees; social service agencies; selected county program and budget offices; and advocacy groups. We also reviewed state program and budget documents, the PRWORA legislation, HHS regulations and policy guidance, prior GAO reports, and welfare experts’ studies. We also analyzed fiscal data related to all 50 states’ low-income family assistance programs obtained from HHS to determine the level of additional budgetary resources states received as a result of welfare reform. (See appendix II for a more detailed explanation of the methodology used in this calculation.) We did not verify the accuracy of these data. We requested written comments on a draft of this report from HHS, NGA, NCSL, and the American Public Welfare Association (APWA). These comments are discussed in the letter, and HHS’ comments are reprinted in appendix III. States currently have more budgetary resources available for their welfare programs than they would have had under prior law. This is primarily the result of a combination of three interrelated factors: (1) the unprecedented declines in caseloads, (2) the new federal financing mechanism, or block grant, that provides resources to the states without regard to the numbers of people states’ welfare programs serve, and (3) the maintenance-of-effort requirement on states that establishes a minimum, or floor, funding level for their state welfare programs. This appendix describes the influence each of these factors has on total available resources and then presents our estimates of their combined effect. Given the fixed nature of the federal funding stream and states’ minimum MOE contributions, caseload volatility will dramatically affect the resources available per recipient for state welfare programs. As caseloads drop, there will be more resources available to the states to finance their welfare programs since programs’ financing needs are largely driven by caseload assumptions. In contrast, if caseloads rise, there will be fewer federal dollars per recipient when compared to the previous budget period, and states will need to raise additional resources on their own or adjust their programs to make their resources go further. In many states, caseloads began to decline even before the enactment of PRWORA and continued to do so after passage of the law, as shown in table II.1. While there remains controversy over some of the reasons for the caseload declines, research indicates that important factors include the strong economy and changes in federal and state welfare policies. Overall, states’ caseloads have declined by about a third since 1994. However, this national average masks the differences among the states in the magnitude and timing of their caseload declines. For example, in North Carolina, the caseload has dropped by about a third since 1994, with a decline of 20 percent before federal reforms had been enacted and an additional 13 percent decline in the last year. In contrast, in New Mexico, the overall decline is also near the national average; however, virtually all of the change occurred in the last year—after PRWORA passed. 
There is also a large disparity among the states in total caseload change since 1994, ranging from a decrease of 70 percent in Wyoming to an increase of 22 percent in Hawaii. Tables II.2 and II.3 present estimates of the change in federal resources available to implement welfare reform. Table II.2 presents our estimates of the differences between available nominal federal resources for family assistance programs under AFDC and under TANF. Our analysis shows that 45 states will receive more federal resources under TANF than they received in the last year before reform. TANF provides about $1.4 billion more federal dollars to the states than they received under the consolidated programs in 1996, when caseloads were on average much higher. These differences ranged from 70 percent more for Indiana to 7 percent less for Pennsylvania; the median increase was about 9 percent. Table II.3 presents these additional federal resources on a per recipient basis to take into account the significant declines in caseload that have occurred since passage of PRWORA. These estimates of states’ additional federal resources considered on a per recipient basis present a different picture not only because the estimates take post-PRWORA declines in caseloads into account but also because of differences among the states in their expenditures for emergency assistance and administration, which TANF also replaced. Adjusting for smaller caseloads, on average, the new financing mechanisms in TANF provide states with about $614 more federal dollars per recipient than the consolidated programs provided in 1996. This table shows an increase in federal resources for all states but one, with a median increase of 47 percent over pre-reform levels. The change in federal resources per recipient ranged from an increase of 334 percent (Wyoming) to a decrease of 10 percent (Hawaii). Our analysis shows that 38 states received an increase of 25 percent or more. Given a declining national caseload, the state MOE requirement further augments the budgetary resources available on a per recipient basis to finance states’ low-income family assistance programs. The state MOE is based on spending for a larger set of programs than the TANF block grant and was pegged to state spending in those programs during a period of high caseloads and high spending. In the absence of an MOE requirement, states could draw down all of their federal TANF grants, and then reduce their own financial commitment to the program to whatever level would maintain a current service budget baseline. In all states but one—Indiana—the 80 percent TANF-MOE requirement is less than what the state spent on those programs in 1996 (see table II.4) and would allow states to reduce their own financial commitment to the program. However, the minimum MOE requirements, taken together with the further decrease in caseloads, had the effect of increasing the level of state resources spent on a per recipient basis for a number of states. In table II.5, we estimated that 22 states must spend more per recipient than they spent under AFDC in 1996, assuming state spending at 80 percent MOE. 
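The per recipient comparisons in tables II.3 and II.5 rest on simple division of fixed funding amounts by changing caseloads. The following sketch illustrates that arithmetic for a single hypothetical state; the variable names and dollar figures are placeholders for illustration only and are not drawn from the actual state data underlying the tables.

# Illustrative sketch (hypothetical figures, not GAO's actual state data):
# how a per recipient comparison like the one in table II.3 can be constructed.

federal_spending_1996 = 100_000_000   # federal spending on the consolidated programs, FY 1996
recipients_1996       = 60_000        # average monthly AFDC-related recipients, 1996
tanf_block_grant      = 100_000_000   # annual TANF block grant (fixed regardless of caseload)
recipients_1997       = 45_000        # average monthly TANF recipients, 1997 (smaller caseload)

per_recipient_1996 = federal_spending_1996 / recipients_1996
per_recipient_1997 = tanf_block_grant / recipients_1997

change_dollars = per_recipient_1997 - per_recipient_1996
change_percent = 100 * change_dollars / per_recipient_1996

print(f"Federal dollars per recipient, 1996: ${per_recipient_1996:,.0f}")
print(f"Federal dollars per recipient, 1997: ${per_recipient_1997:,.0f}")
print(f"Change: ${change_dollars:,.0f} ({change_percent:.0f} percent)")

Because the block grant in this sketch is fixed while the caseload falls by a quarter, federal dollars per recipient rise by about a third even though nominal federal funding is unchanged; the same mechanism, in reverse, would reduce dollars per recipient if caseloads were to grow.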
Another way to estimate the total resources available for welfare programs is to compare total federal and state resources available under the block grant with what comparable federal-state spending would have been for 1997 caseloads under AFDC. To estimate changes in total available budgetary resources, we began by constructing a current-services baseline for pre-reform spending. We constructed our baseline by adding actual state and federal expenditures in federal fiscal year 1996 for the programs TANF replaced. We calculated total spending per recipient and then adjusted all baseline components for inflation except cash assistance. Finally, to take recent caseload declines into account, we applied these per recipient costs to 1997 caseloads. Using a state’s total annual TANF grant, we calculated the federal contribution to the total resources available. Since the federal contribution is now a block grant, these funds are available irrespective of the needs in a state. Once again, we assumed that states would budget at 80 percent MOE. Since the MOE requirement establishes a minimum, or floor, on state spending, a state can spend more than minimally required if it chooses—raising the total levels of budgetary resources available. Table II.6 presents our estimates of the total additional budgetary resources available to states to design, finance, and implement their family assistance programs due to TANF. These estimates represent the difference between states’ post-reform total budgetary resources (TANF plus MOE) and what they would have budgeted for their 1997 caseloads if they were still using the pre-reform 1996 cost structure. That is, table II.6 shows “additional resources” as the difference between states’ new total budgetary resources and our construction of the current services baseline. The analysis, which takes caseload declines into account, suggests an even greater change in resources than nominal changes in federal and state resources alone would suggest. Combining the effects of the increased federal resources and the act’s mandated floor on state spending, our analysis indicates that 46 states will have more total—federal TANF and state MOE—resources available than they would have had without reform. Our estimates of these additional budgetary resources totaled about $4.7 billion—or, on average, states will have 25 percent more in total budgetary resources available for their welfare reform programs. As with the other analyses, there is wide variation among states—ranging from 102 percent in additional resources for Wyoming to fewer total resources in Delaware, Hawaii, Nebraska, and Pennsylvania. In table II.6, which presents additional budgetary resources as a percent of the constructed current services baseline, the “total” represents the percent difference in the nationwide totals, and the “average” is a simple average of the percentage differences across states, with each state having equal weight. The following are GAO’s comments on the Department of Health and Human Services’ letter dated July 23, 1998. 1. See “Agency Comments” section of the report. 2. Text (now on page 3) amended. 3. Text of Table 1 of page 18 changed to reflect that figures represent state reserves and are distinct from the Federal Contingency Fund for State Welfare Programs. 4. 
HHS refers to §409(a)(7)(B)(i) to suggest that state MOE funds may be used only on four designated activities: “(aa) cash assistance, (bb) child care assistance, (cc) educational activities designed to increase self-sufficiency, job training, and work . . . , and (dd) administrative costs in connection with the matters described in items (aa), (bb), (cc), and (ee) . . .” HHS omits (ee) from its list of qualified state expenditures. This part allows states to spend their own funds in any manner that is reasonably calculated to accomplish the purpose of TANF. In its proposed rule (see §273.2), HHS interprets §409(a)(7)(B)(i) to mean that a state may count as MOE its expenditures under all state programs, i.e., the state’s TANF program as well as any separate state program that assists “eligible families” and provides appropriate services or benefits. Thus, while MOE funds must be used on eligible families (as defined by the state) and on activities that can reasonably be calculated to accomplish the goals of TANF, they can be used to provide support to certain categories of clients that are prohibited from receiving federal TANF assistance. If states choose to operate separate state programs, they have more flexibility in the use of state funds than they have in the use of federal funds. We continue to believe that these differences will have an impact on the choices states make with regard to their programs, specifically the mix of services they can offer and the people they can serve. 5. Text (now on page 24) changed to reflect that HHS concurs with CBO that allowing states to count the same expenditures toward both the TANF MOE and the Contingency MOE would increase the costs of the Contingency Fund in budget scoring terms and could be subject to a challenge under the budget rules unless offsets were found. 6. Text (now on page 25) amended to reflect new information. Raymond G. Hendren, Senior Auditor
Pursuant to a congressional request, GAO reviewed the states' fiscal decisions for the Temporary Assistance for Needy Families (TANF) block grant and whether states are taking steps to prepare for the effects of future economic downturns on their welfare programs, focusing on: (1) how state budgetary resources, including federal aid, have been allocated since states have had access to TANF funds; (2) what plans states are making to ensure programmatic stability in times of fiscal and economic stress; and (3) the extent to which states have used, or plan to use, the program's federal Contingency Fund for State Welfare Programs and the Federal Loans for State Welfare Programs (Loan Fund), which are available for downturns or other emergencies affecting states. GAO noted that: (1) more federal and state resources are available for states' low-income family assistance programs since welfare reform passed in 1996 than would have been available under the previous system of financing welfare programs consolidated in the TANF block grant; (2) GAO's estimates showed that, taking caseload declines into account, 46 states would have more total resources--both state and federal--for their low-income family assistance programs than they would have had under the previous welfare programs; (3) states are transforming the nation's welfare system into a work-focused, temporary assistance program for needy families and have generally chosen to spend these resources to expand programs and benefits by shifting the emphasis from entitlement to self-sufficiency, enhancing support services, and increasing work participation rates; (4) states also have achieved budgetary savings by reducing state funds to the statutory maintenance-of-effort level of 75 or 80 percent of previous state spending levels; (5) while states have gained greater resources under the block grant, they also assume greater responsibility for fiscal risks should program costs increase in the future; (6) most states, including 7 of the 10 GAO visited, also have general fund budget stabilization or rainy day funds that could be used to augment program spending during an economic downturn, but welfare programs would have to compete for these resources with other state funding priorities; (7) the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) includes features that could provide federal funding to cover future increases in program needs; (8) states can carry forward unused TANF funds without fiscal year limitation; (9) as of September 30, 1997, states had left about $1.2 billion in unspent balances in their accounts with the U.S. Treasury, or about 9 percent of the total grant; (10) it is unclear whether these balances will remain, shrink, or increase as states gain experience with the program; (11) PRWORA also creates two federal safety-net mechanisms--the Contingency Fund and the Loan Fund--that were designed to provide states with access to additional funds during times of economic downturn or fiscal stress; (12) as of February 1998, neither the Contingency Fund nor the Loan Fund had been used by any state; (13) officials in states GAO visited said they did not view the Contingency Fund as a viable source of additional resources; and (14) officials in many states GAO visited indicated they did not believe their states would borrow from the Loan Fund during an economic downturn.
DOD noted in its recommendation to the 2005 BRAC Commission that all military installations employ personnel to perform common functions in support of installation facilities and personnel and that all installations execute these functions using similar or nearly similar processes. DOD’s justification for the recommendation stated that this, along with the proximity of the bases in question, allowed for significant opportunity to reduce duplication and costs by consolidating the installations. Specifically, DOD stated that savings in personnel and facilities costs could be realized by, among other things, paring unnecessary management personnel, achieving greater efficiencies through economies of scale, reducing duplication of effort, consolidating and optimizing existing and future service contract requirements, establishing a single space management authority that could achieve greater utilization of facilities, and reducing the number of base support vehicles and equipment consistent with the size of the combined facilities. As a result, the BRAC Commission approved a modified version of DOD’s recommendation, and recommended combining 26 installations that were close to one another into 12 joint bases. In its January 2008 joint basing implementation guidance, OSD established a schedule dividing the joint bases into two implementation phases and required that the installations complete a memorandum of agreement that would describe how the military components would work together at each joint base. Each agreement was required to outline, among other things, how the installations were to fully implement the 2005 BRAC joint basing recommendation and how the supporting component was to deliver installation support services to the other military components at the base—called supported components—in accordance with the joint base common standards. Table 1 identifies the location, implementation phase, and supporting military service at each of the joint bases. The 2008 joint basing implementation guidance designated the Under Secretary of Defense for Acquisition, Technology, and Logistics as the official within OSD responsible for establishing overarching guidance, procedures, and policy and for providing oversight for implementation of the joint basing guidance. Within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, the lead office for DOD’s installations and facilities is the Office of the Deputy Under Secretary of Defense (Installations and Environment), which conducts oversight of and provides guidance to the joint bases. OSD’s 2008 guidance on implementing joint basing established a set of installation support functional areas and provided for the creation of a set of joint base common standards to define the level of service expected to be provided at each joint base and to ensure consistent delivery of installation support services. As of April 2012, there were 280 joint base common standards grouped into 48 functional areas, such as the standard that 90 percent of law enforcement investigations be completed within 30 days, which falls under the security services functional area (see app. III for a complete list of these functional areas). Each joint base can seek approval to have deviations from the common standards, which would be outlined in its memorandum of agreement. One-third of the joint bases told us they had approved deviations from certain common standards. 
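To make the structure of these output-level standards concrete, the sketch below shows one way a common standard with a measurable target, such as the law enforcement investigations example above, could be represented and checked against quarterly results. This is a hypothetical illustration only; the class, field names, and evaluation rule are assumptions for this sketch and do not depict DOD's actual data model or reporting system.

# Hypothetical illustration of a joint base common standard with a measurable
# target; not DOD's actual schema or business rules.

from dataclasses import dataclass

@dataclass
class CommonStandard:
    functional_area: str    # e.g., "Security Services"
    description: str        # what is being measured
    target_percent: float   # share of cases that must meet the criterion

    def is_met(self, cases_meeting_criterion: int, total_cases: int) -> bool:
        # Report the standard as met when actual performance reaches the target.
        if total_cases == 0:
            return True  # assumption for the sketch: no reportable cases counts as met
        actual = 100 * cases_meeting_criterion / total_cases
        return actual >= self.target_percent

investigations = CommonStandard(
    functional_area="Security Services",
    description="Law enforcement investigations completed within 30 days",
    target_percent=90.0,
)

# Example quarterly result: 47 of 50 investigations closed within 30 days (94 percent).
print(investigations.is_met(cases_meeting_criterion=47, total_cases=50))  # True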
OSD officials stated that they have changed the joint base common standards over time to clarify or better align them with how the services are providing installation support services. The Joint Management Oversight Structure was established as a mechanism to provide for six levels of performance review and dispute resolution as part of managing implementation of the joint bases. Issues raised at the joint bases are first addressed at the lowest level of the structure, the local Joint Base Partnership Council, which includes officials from the supported and supporting services on each joint base. If issues are not resolved there, they are raised to higher levels of command, such as the Senior Installation Management Group, which includes the service installation commands, such as Commander, Navy Installations Command, and the Army Chief of Staff Installation Management Command. If the issues remain unresolved, they can go up through the service Vice Chiefs of Staff and finally on to OSD. See figure 1 for the oversight structure and decision chain. DOD’s recommendation to the 2005 BRAC Commission noted anticipated cost savings and efficiencies to be gained from joint basing, but OSD has not developed an implementation plan to guide joint bases in their efforts to achieve these cost savings and efficiencies. Furthermore, DOD does not have a reliable method of collecting information on the net costs, estimated savings, and efficiencies specifically resulting from joint basing, excluding other influences on the bases’ budgets. Without a plan to guide and encourage joint bases to pursue cost savings and efficiencies and without a method to track joint basing-specific costs, savings, and efficiencies, DOD will likely miss opportunities for cost savings and continue to be unaware of the extent to which joint bases have been able to meet the objectives laid out in the 2005 BRAC recommendation on joint basing. Officials in the Office of the Deputy Under Secretary of Defense (Installations and Environment) said they did not have a plan in place to guide the efforts to achieve cost savings and efficiencies at the joint bases because joint basing is a relatively new initiative and they are still resolving implementation issues. DOD’s 2005 joint basing recommendation estimated a 20-year savings of $2.3 billion, with $601 million in savings by the end of the implementation period in fiscal year 2011. However, the 20-year savings estimate has now decreased by nearly 90 percent, to $249 million. We have previously reported that successful organizational transformations—such as merging components and transforming organizational cultures—in both the public and private sectors involve several key practices, including ensuring that top leadership drives the transformation, setting implementation goals and a timeline to show progress from day one, and establishing a communication strategy to create shared expectations and report related progress. Ensuring top leadership drives the transformation. DOD leadership has not provided clear direction to joint basing officials on achieving the cost savings and efficiency goals of joint basing. Some joint basing officials told us they perceived a lack of direction from OSD about the joint basing initiative and more specifically about whether the purpose of joint basing is to meet the joint base common standards for installation support or to achieve cost savings and efficiencies. 
These two goals may not always be in harmony since meeting some joint standards requires a higher level of service, which can increase costs rather than save money. Setting implementation goals and a timeline to show progress. One of DOD’s stated objectives for joint basing was to save money; however, it did not establish quantifiable and measurable objectives for how to achieve cost savings or efficiencies through joint basing, nor did it establish a timeline to achieve such goals. Such methods for achieving cost savings or efficiencies could include, for example, reducing duplication of effort, paring unnecessary management personnel, consolidating and optimizing service contract requirements, and reducing the number of base support vehicles and equipment, among other things noted in DOD’s recommendation to the 2005 BRAC Commission. Establishing a communication strategy. DOD has not established a communication strategy that provides information to meet the needs of joint basing officials on how to achieve the joint basing goals of cost savings and efficiencies. Some joint base officials told us that they desire additional guidance about how to achieve cost savings and efficiencies. In addition to not having an implementation plan, DOD does not yet have a fully developed method for accurately gathering information on costs, estimated savings, and efficiencies achieved specifically as a consequence of joint basing, and as a result it does not have an estimate of the extent to which joint basing has realized actual cost savings. OSD has developed a data collection tool, called the Cost and Performance Visibility Framework, through which the joint bases report installation support performance data, including annually reporting on funds obligated to provide base support services, and officials involved in management and oversight of the joint bases can use this information to improve joint base management. In addition, OSD can measure these data against the level of funding the military services expect they would have had to obligate for installation support on the joint bases if no savings resulted from joint basing—what DOD refers to as the Cost and Performance Visibility Framework baseline. However, because of inconsistencies in the way the joint bases reported data through the framework to date, and because the data reported through the framework do not exclude costs and savings that are not specific to joint basing, OSD is not yet able to accurately isolate the effects of joint basing on the cost of providing support services. In addition, comparing support service obligations to the Cost and Performance Visibility Framework baselines does not show whether overall savings were achieved as a result of joint basing since the new support service standards themselves are a part of the joint basing initiative. Measuring against these baselines therefore does not provide a true picture of savings resulting from joint basing. The Cost and Performance Visibility Framework is a web-based application managed by OSD that allows joint bases to report on their performance against the joint base common standards quarterly and to report on the funds obligated and manpower employed to meet the common standards annually. 
Various levels of the joint basing Joint Management Oversight Structure use the framework as a management tool to review and assess performance of the joint base common standards by category, service, and base, including comparing performance of the standards to the funds obligated and manpower employed to meet particular categories of standards. For example, officials can compare the funds obligated on housing on a particular joint base with the extent to which that joint base met the common standards related to housing, as well as the baseline, or anticipated cost of meeting those common standards. OSD officials told us that they use these data to identify categories of joint base common standards where the bases are performing especially well or poorly, and can compare this performance to the funds obligated relative to achievement of the standards, as well as to the baseline—the level of funding the military services anticipated they would need to obligate to meet the standards. This information provides initial insight and a basis for further discussion at the working level with officials involved in joint base management and oversight. Through further discussion, the officials said they were able to identify the reasons why joint bases may be performing well or underperforming in particular areas relative to the funds obligated and the baseline. In turn, this allows the officials to make adjustments in funding, learn from the experiences of particular joint bases in providing support services, and improve joint base management going forward. For fiscal year 2011, the first year all of the joint bases had completed implementation, the joint bases reported through the Cost and Performance Visibility Framework obligating a total of about $4.3 billion on support services. The military services also created baselines against which to measure these funding levels. According to these service-developed baselines, the 12 joint bases’ installation services were expected to cost $5.1 billion in fiscal year 2011, as compared with the framework-reported actual cost of about $4.3 billion, for reported savings of $800 million relative to the baseline. However, this difference between the reported baselines and the installation support funding levels on the joint bases does not accurately reflect savings arising from joint basing for several reasons. First, these baselines were calculated using actual obligations in fiscal year 2008, when the joint bases were standalone bases, and were adjusted to include increases in personnel needed to meet the new joint base common standards and other expected changes, such as utility rate changes. This effectively inflated the baselines beyond what was actually obligated prior to joint basing. Therefore, while the adjusted baselines are meant to represent the projected costs to operate the newly established joint bases, they overstate the actual cost to operate the bases as compared to when they were standalone bases. As a result, these are not true baselines against which a valid comparison can be made of the cost to operate joint bases compared with standalone bases. Moreover, DOD officials noted that the adjusted baselines and the reported obligations did not always exclude one-time expenditures unrelated to the cost of providing support services, such as military construction projects, which impairs the reliability of comparisons using the obligations data. 
Finally, the framework does not identify when costs, savings, or efficiencies occurred specifically as a result of joint basing, as opposed to other actions such as military service-wide budget cuts. Therefore, the absence of a comparison with the funds obligated for support services on the installations prior to becoming joint bases, reliability problems in the data, and the inability to isolate joint basing-specific costs, savings, and efficiencies limit the use of the framework as a definitive tool to identify the overall effects of the joint basing initiative on costs. OSD officials said that they expect to correct the data reliability problems by the end of fiscal year 2012, and as joint basing continues these officials believe it will be possible to compare each year’s obligations at the joint bases against prior years’ obligations and therefore gain insight into the extent to which savings and efficiencies are achieved. However, DOD officials also acknowledged that other factors have affected and will continue to affect funding levels at the joint bases, including budget-driven reductions by the military services that do not necessarily represent savings or efficiencies specifically from joint basing, and as a result, OSD may not be able to determine joint basing-specific costs and estimated savings even with its improved data collection. We found that the individual joint bases do not systematically track cost savings and efficiencies achieved as a result of joint basing. However, some joint bases have achieved efficiencies through consolidating service contracts, combining departments, and reducing administrative overhead, and identified anecdotal examples of such efficiencies, including the following. Joint Base McGuire-Dix-Lakehurst. Base officials told us that by combining telephone services under the existing Air Force contract, call rates were substantially reduced, and that they have saved about $100,000 annually as a result. Additionally, the officials said that consolidating nine maintenance support contracts into one has produced $1.3 million in annual savings. Joint Base Charleston. Base officials stated that information technology network upgrades resulted in improved high-speed access and annual savings of $747,000. Additionally, these officials told us that they consolidated multiple contracts for chaplains, resulting in $55,000 in annual savings. Joint Base Pearl Harbor-Hickam. Base officials told us that they have realized efficiencies and cost savings through consolidating some offices in their Morale, Welfare & Recreation Departments. Through this effort, they saved about $400,000 in fiscal year 2011 and expect those savings to increase in subsequent years. Conversely, some joint basing officials have told us that the joint basing initiative may be increasing rather than cutting costs because in some cases the new joint base common standards require a higher level of support than was previously provided by service-specific standards. As previously noted, we reported in 2009 that the new joint base common standards required the services to fund installation support at higher-than-previous levels. Even with the achievement of some efficiencies, the joint bases lack clear direction and impetus to identify and execute cost-saving measures because OSD has not established an implementation plan with measurable goals to track progress toward meeting the cost savings and efficiencies goals that it recommended to the 2005 BRAC Commission. 
In the absence of such a plan, opportunities for savings and efficiencies are likely to be missed. In addition, without a reliable method to collect data on costs or estimated savings resulting specifically from joint basing, DOD cannot identify the net savings, if any, associated with joint basing. As a result, DOD will likely remain unable to quantify the effects of the joint basing initiative and unable to evaluate whether to continue or expand joint basing. In fiscal years 2010 and 2011, the joint bases reported meeting the common standards more than 70 percent of the time. However, the lack of clarity in some standards, the fact that unclear standards are not always reviewed and changed in a timely manner, and the fact that the data collection and reporting on the standards in some cases adhere to individual service standards rather than the common standard hinder the effectiveness of the standards as a common framework for managing installation support services. Without a consistent interpretation and reported use of the standards, the joint bases will not have reliable and comparable data with which to assess their service support levels, and OSD cannot be assured of receiving reliable and comparable data on the level of support services the joint bases are providing. According to OSD guidance, DOD developed the standards to provide common output or performance-level standards for installation support, and to establish a common language for each base support function on the joint bases. These common standards provide a common framework to manage and plan for installation support services. In quarterly reporting from 2010 and 2011 using the joint basing Cost and Performance Visibility Framework, the joint bases and various offices within the joint bases reported on whether they met the established common standards or whether the standard was either not applicable to them or not reported by them. In eight quarters of reporting, the 12 joint bases and various offices within the joint bases submitted over 53,000 reports on standards. Our analysis showed that 74 percent of these reports stated that the joint base or office met the standard, and 10 percent of the time the joint base or office did not meet the standard. The other 16 percent of the time, the joint base or office reported that the standard was either not applicable to the particular joint base or office, or that the joint base or office did not report on the standard. The functional areas of standards the joint bases most frequently reported not meeting, according to our analysis of the joint base performance reporting data, included the following. Information technology services and management. This includes such areas as telephone services and video teleconferencing. Facilities sustainment. This includes certain building restoration, modernization, and maintenance. Command management. This includes such areas as postal services and records administration services. Emergency management. This includes such areas as emergency notification and emergency training. Base support vehicles and equipment. This includes shuttle bus services, and vehicle and equipment maintenance. Based on our analysis of the reasons joint base officials reported to OSD for not meeting standards, we found that the joint bases cited a range of reasons, such as a lack of personnel or resources, as well as the inability to meet a standard because of contract-related resourcing issues. 
For example, the joint base may have a contract in place for providing multimedia services, but the contract does not provide for video production, and therefore the base chooses not to meet the common standard because it would be too costly to modify the contract or let an additional contract. The most common reasons joint bases reported for not meeting a standard, as determined by our analysis, are shown in figure 2. Beyond whether the joint bases meet the standards, joint base officials and our analysis of the comments in the common standard reporting system identified three main issues affecting the joint bases’ ability to interpret and report on base support services, regardless of whether the standards are met. These are (1) the standards are in some cases unclear, (2) the standards are not reviewed and changed in a timely manner when clarity issues arise, and (3) data in some cases are still collected in a service-specific manner that does not correspond to the common standard, or the bases are reporting according to a service-specific rather than a joint standard. According to joint base officials, the joint base common standards in some cases are not measurable or clear. We have previously reported that key attributes of successful performance measures include a measurable target and clarity. Having a measurable target in a performance measure ensures the ability to determine if performance is meeting expectations. Clarity of a performance measure means that the measure is clearly stated and the name and definition are consistent with the methodology used to calculate it, so that data are not confusing or misleading to the users of the data. Joint basing officials provided many examples of standards that lack clarity and therefore cause uncertainty in how the standards should be reported, including the following: One common standard requires that 100 percent of installations meet a DOD requirement for at least annual exercise testing of mass warning and notification systems. However, according to officials at Joint Base Andrews-Naval Air Facility Washington (in Maryland), there are many modes of emergency management notification and many ways to test these modes. As a result, they are unsure how to adequately report on this common standard and therefore report it as not met. One common standard relating to awards and decorations to recognize individual and unit achievements states that 90 percent of awards should be posted to personnel records in accordance with service-specific timeliness standards. However, the standard is not clear because, according to joint base officials, not all of the services have applicable timeliness standards. According to comments accompanying common standard reporting from officials at Joint Base San Antonio and Joint Region Marianas, no service standard defines when a posting is late, and therefore they consider this standard to always be met, regardless of when awards are posted. One common standard requires that 60 percent of certain service vehicles be repaired within 24 hours. However, officials at Joint Base McGuire-Dix-Lakehurst said the standard was unclear because it does not take into account the priority of the vehicle. Therefore, for the purposes of the standard, a vehicle that is essential to accomplishing the base’s mission would need to be fixed within the same time frame as a non-mission-essential shuttle bus that transports personnel around the base. 
One common standard related to investigations and crime prevention requires joint bases to maintain 7 days’ processing time for law enforcement information to meet legal and command requirements for adjudication and action. However, according to officials at Joint Base McGuire-Dix-Lakehurst, this standard does not specify whether the timeline is in calendar or business days. In the absence of clarification, the joint base has marked the standard as met. According to GAO’s Standards for Internal Control in the Federal Government, information should be recorded and communicated to management and others within a time frame that enables them to carry out their responsibilities. However, according to officials at several joint bases, the OSD process to review and clarify standards does not update standards in a time frame that allows joint bases to accurately report each quarter on those standards that are unclear. OSD conducts a review of selected functional areas each year. As an example, for its most recent review for fiscal year 2012, conducted in February 2012, OSD selected facility operations, facility investment, and information technology services management as the functional areas for review. Changes made to these standards took effect in April 2012. Joint base officials stated that because OSD selects certain functional areas to review each year and does not review standards outside those areas, standards in functional areas that are not selected are not reviewed or clarified even though clarification may be necessary. OSD officials told us that in their most recent review, they used input from the joint bases, military services, and functional area experts within OSD to determine which functional areas of standards to review, among other inputs, such as which of the standards bases were most frequently not meeting. However, since OSD does not necessarily select all of the standards for which joint bases have requested clarification and only reviews standards for possible updating once a year, changes to the standards are not implemented in time for the next quarterly reporting cycle and joint base officials in some cases are required to continue collecting data on and reporting on standards that they have difficulty interpreting. The joint bases do not always report on the common standards in ways that produce similar results because in some cases they are using service-specific data collection methods that are unable to provide information on whether the joint standard is being met, and in some cases they are reporting on service-specific performance measures rather than the joint standard. We have previously reported that to achieve reliability in performance reporting, measurements must apply standard procedures for collecting data or calculating results so that they are likely to produce the same results if applied repeatedly to the same situation. The following are instances in which joint bases may rely on data that do not support reporting on the joint base common standard or in which joint bases are adhering to an individual service standard rather than the common standard. One common standard states that joint bases should maintain a clean and healthy environment by cleaning certain restrooms three times a week; sweeping and mopping floors, vacuuming carpets, removing trash, and cleaning walk-off mats once a week; buffing floors monthly; and maintaining/stripping floors and shampooing carpets annually. 
Officials at Joint Base McGuire-Dix-Lakehurst reported not meeting the common standard because the Air Mobility Command’s data collection method does not provide the information needed to report on the common standard. Therefore, the joint base could be meeting the standard, but officials do not know because they are not collecting the data required to identify whether they are doing so. One common standard related to technical drawings requires that 98 percent of requests for location data result in no incidents of misidentified data. Officials at Joint Base Pearl Harbor-Hickam reported not meeting the common standard, stating that they were not tracking this metric because the Air Force did not independently require it and they were therefore unable to know whether they met the metric. One common standard requires that 100 percent of joint bases hold emergency management working group meetings quarterly. Joint Base San Antonio officials reported not meeting the common standard because the base is instead holding semiannual emergency management working group meetings, which officials said is in accordance with Air Force policy. Because some of the standards are not clear and are not reviewed and changed in a timely fashion, and because in some cases the joint bases use service-specific data and standards rather than the joint standard, the common standards do not provide OSD and the joint bases with a common tool to ensure that the joint bases are interpreting and reporting on the standards consistently. As a result, it is not clear to what extent the joint bases are achieving the intent of the common standards, even though the joint bases report meeting the standards the majority of the time. Without a consistent interpretation and reported use of the standards, the joint bases will not have reliable and comparable data with which to assess their service support levels, and OSD cannot be assured of receiving reliable and comparable data on the level of support services the joint bases are providing. OSD and the joint bases have various mechanisms in place to address challenges in achieving joint basing goals, but these mechanisms do not routinely facilitate the identification of common challenges among the joint bases or the development of common solutions to these challenges. Specifically, we found that the joint bases do not have a formal method of routinely sharing information with one another on identified challenges and potential solutions, nor do they have guidance on developing and providing training for new joint base personnel on how the joint bases provide installation support services. Without processes to identify common challenges and share information across the joint bases, and guidance on delivering consistent training to new personnel, DOD will likely miss opportunities to efficiently develop common solutions to common challenges and to reduce duplication of effort in providing training to new personnel. OSD and the joint bases have several mechanisms in place to address challenges in consolidating installation support services at the joint bases. These include a multi-level management structure for the joint bases, annual review meetings, performance reporting, newsletters, and informal communications, as follows. The Joint Management Oversight Structure. According to DOD guidance, challenges at the joint bases in consolidating installation support services should be addressed at the lowest possible level of the Joint Management Oversight Structure—the local joint base partnership council. 
Most problems are addressed between command components at an individual joint base, or by intermediate service commands, such as the Army’s Installation Management Command, according to joint base officials. Annual management review meetings between OSD and the joint bases. As part of its management of the joint bases, OSD holds an annual meeting each February in which joint base commanders brief OSD on the status of the bases’ consolidation and any challenges that the bases may or may not have been able to address. Joint base common standards performance reporting. The joint bases report on a quarterly basis on whether they met the common standards. As part of this reporting, the bases can provide comments identifying challenges they faced in meeting particular standards. Joint base newsletters. OSD publishes a monthly newsletter about and for the joint bases. This newsletter highlights changes to joint basing processes, common challenges, lessons learned, and policy issues affecting joint bases. For example, the March 2011 newsletter noted that Joint Base San Antonio had combined the best practices of the various military services in consolidating motorcycle safety training. Informal communications. Joint base officials told us that they sometimes communicate implementation challenges directly to OSD officials by e-mail or telephone in order to request assistance or guidance. In meetings and written responses, joint base officials reported facing a variety of challenges in implementing joint basing as well as in implementing the specific common standards. These challenges cover a wide range of issues, from differing expectations among the military services about how particular base support services should be provided to the incompatibility of information technology systems. The following examples illustrate the range of problems joint bases have faced. Differences in how the military services conduct snow removal have led to unexpected effort or cost for some supported components. Joint Base McGuire-Dix-Lakehurst officials told us that when the Air Force took over providing the support for the joint base, Army and Navy personnel were surprised when they had to shovel the sidewalks around their buildings because previously this service was provided by the base. By contrast, the officials said that the Air Force removes snow from roads and parking lots on base but not from sidewalks and building paths. The officials told us they had to spend additional money to contract for snow removal on sidewalks or use their own personnel to remove the snow, which diminished the productivity of mission functions. While there is no joint base common standard specifically on snow removal, there is one on pavement clearance, including snow and ice removal, which states that joint bases should have an installation pavement clearance plan developed in accordance with best practices of the military components to meet safety and mission needs. Notes accompanying the standard state that each joint base defines its own best practices. Services had different expectations for maintenance of building components such as alarm systems and fire extinguishers. For example, Navy officials on Joint Base McGuire-Dix-Lakehurst told us that previously, the Navy installed security systems and replaced fire extinguishers as part of base support services. 
However, following joint basing and the installation becoming part of an Air Force-supported base, the Air Force did not provide these services and expected building occupants to fund these services themselves. Some supported components and tenant organizations are experiencing changed expectations and increased costs under the joint base structure, in part because of differences in the way the military services budget and pay for installation support. For example, officials of a Joint Base Pearl Harbor-Hickam tenant told us that their costs rose significantly following the transition to the joint base because they had to cover expenses, such as telephone service, that were not previously required under the tenant’s own budget. In addition, the tenant officials stated that the different service standards under the Navy had raised their expenses. The variety of incompatible information technology networks and other systems among the services inhibits communication and requires additional effort. For example, the absence of common information technology and communications networks hampered communications and information sharing among joint base occupants, and the bases expended significant effort transitioning data from one service system to another. Officials at a number of joint bases stated that they believe the individual efforts and relationships developed between the components and commands at the joint bases have facilitated consolidation of installation support services and resolution of implementation challenges. However, a number of joint base officials noted that there was no systematic process in place to identify and resolve common challenges and share information with new base personnel. OSD and the joint bases have some methods to address challenges in consolidating support services, but the absence of a method for routinely communicating among the joint bases limits opportunities to jointly identify common challenges to joint basing implementation and share best practices and lessons learned in order to develop common solutions to those challenges. Because problems are first identified and addressed at the lowest level of the Joint Management Oversight Structure, which only includes officials from a given joint base, other joint bases do not become aware of these problems or the associated solutions. If joint bases are not informed of problems at other joint bases, then they cannot work together and collectively elevate issues to OSD for the purpose of identifying best practices and disseminating them to the joint bases. One joint base official noted that the information contained in the newsletters does not represent formal guidance. In addition, some joint base officials said that the annual program management reviews conducted by OSD are not sufficient to respond to day-to-day challenges faced at the joint bases. Joint base officials told us that in some cases they have obtained needed guidance through informal contacts with OSD. However, they noted that a formal, routine method of sharing information received from these sources would help to ensure consistent performance across the joint bases. Without such guidance and a mechanism to routinely share lessons learned across the joint bases, opportunities will be missed to work together to resolve common challenges and reduce duplication of effort, and the potential that joint bases may be implementing policies inconsistently will increase. 
In addition, OSD has not provided guidance to the joint bases on developing training materials to be used to inform incoming personnel about the specifics of how installation services are provided on joint bases. Such guidance is needed since joint base standards may differ from standards and approaches used on standalone bases. Some components, such as the Air Force Wing Command at Joint Base Pearl Harbor-Hickam, developed their own briefings or training courses to provide information on the process of requesting and receiving installation support services and how the process is different from that of other Air Force bases. Some joint base officials stated that educating personnel about joint base-specific processes requires a great deal of effort. Because of the lack of OSD guidance on providing common training materials, the joint bases have in some cases developed their own materials, which can result in duplication of effort and inconsistencies across the joint bases. DOD recommended consolidation of installations into joint bases to the 2005 BRAC Commission to, among other things, reduce duplication of management and installation support services, resulting in potential efficiencies and cost savings. GAO’s Standards for Internal Control in the Federal Government states that the policies, procedures, techniques, and mechanisms that enforce management’s directives are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. It also states that for an entity to run and control its operations, the entity must have relevant, reliable, and timely communications relating to internal events, and that information is needed throughout the agency to achieve all of its objectives. Without a means of identifying common challenges and sharing best practices and lessons learned in order to identify common solutions, DOD is likely to miss opportunities to efficiently resolve joint base challenges using common methods. In addition, without sharing guidance for new personnel, some joint bases will duplicate efforts to solve problems previously encountered elsewhere and be unable to provide uniform policies across joint bases. Since 2008, OSD has consolidated installations in proximity into joint bases and established common standards for delivering installation support services at these bases. As DOD stated in its recommendation to the 2005 BRAC Commission, DOD anticipated that this effort represented a significant opportunity to reduce duplication of effort and achieve efficiencies and cost savings across the 12 joint bases. However, to date OSD has not developed and implemented a plan to guide the joint bases in achieving cost savings and efficiencies. OSD has developed and implemented a framework for collecting and reporting data on performance of joint base common standards and the funds spent and manpower used to meet those standards. However, OSD has not yet developed this framework to the point where it can isolate the costs, savings, and efficiencies resulting specifically from joint basing, excluding non-joint basing actions and using reliable data. Without this information, OSD is not in a position to know to what extent DOD has made progress toward achieving the joint basing objectives, and will be unable to evaluate whether to continue or expand joint basing. 
Additionally, a lack of specificity and clarity within the joint base common standards, the long process to review and adjust the standards, and the absence of consistently reported data hinder the standards' effectiveness as a common framework or tool for managing support services. Without a consistent interpretation and reported use of the standards, OSD and the joint bases cannot ensure that they are receiving reliable and comparable data on the level of support services provided, and as a result will not have the information necessary to make informed resource allocation decisions so that joint base services are delivered consistently. While OSD and the joint bases can identify challenges in implementing the joint bases, OSD has no common strategy to ensure that the joint bases routinely share information with each other on best practices and lessons learned in order to resolve common challenges. Finally, OSD has not provided guidance to ensure that bases provide consistent information to new joint base personnel to better inform them as to procedures for obtaining support services on joint bases. Without taking further steps to address these issues, DOD will likely miss opportunities to achieve cost savings and efficiencies, to provide consistent levels of support services, and to work together to resolve common challenges and reduce duplication of effort across the joint bases.

To enable DOD to achieve cost savings and efficiencies and to track its progress toward achieving these goals, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Installations and Environment) to take the following two actions:

• Develop and implement a plan that provides measurable goals linked to achieving savings and efficiencies at the joint bases and provide guidance to the joint bases that directs them to identify opportunities for cost savings and efficiencies. DOD should at a minimum consider the items identified in its recommendation to the 2005 BRAC Commission as areas for possible savings and efficiencies, including paring unnecessary management personnel, consolidating and optimizing contract requirements, establishing a single space management authority to achieve greater utilization of facilities, and reducing the number of base support vehicles and equipment.

• Continue to develop and refine the Cost Performance and Visibility Framework in order to eliminate data reliability problems, facilitate comparisons of joint basing costs with the cost of operating the separate installations prior to implementing joint basing, and identify and isolate the costs and savings resulting from actions and initiatives specifically resulting from joint basing and excluding DOD- or service-wide actions and initiatives.

To improve DOD's ability to provide a common framework for the management and planning of support services at the joint bases, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Installations and Environment) to take the following two actions:

• Direct the joint bases to compile a list of those common standards in all functional areas needing clarification and the reasons why they need to be clarified, including those standards still being provided or reported on according to service-specific standards rather than the common standard.

• Amend the OSD joint standards review process to prioritize review and revision of those standards most in need of clarification within this list. 
To increase opportunities for the joint bases to obtain greater efficiencies in developing common solutions to common challenges and reduce duplication of efforts, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Installations and Environment) to take the following two actions:

• Develop a common strategy to expand routine communication between the joint bases, and between the joint bases and OSD, to encourage joint resolution of common challenges and sharing of best practices and lessons learned.

• Develop guidance to ensure all the joint bases develop and provide training materials to incoming personnel on how installation services are provided on joint bases.

In its comments on a draft of this report, DOD stated that it does not agree that, at this point in the joint bases' development, the department should establish savings targets, because such targets would be premature and arbitrary. DOD partially concurred with the remainder of our recommendations; however, in most instances, DOD did not identify what, if any, actions the department plans to take to implement the recommendations. DOD's comments are reprinted in their entirety in appendix IV. DOD did not concur with our first recommendation, to develop and implement a plan to provide measurable goals linked to achieving savings and efficiencies at the joint bases and provide guidance to the joint bases directing them to identify the savings and efficiencies. In its comments, DOD said such targets would restrict the authority of local commanders to manage the merger of the formerly standalone bases into joint bases. DOD also stated that while savings targets may be appropriate in the future, imposing savings goals would restrict the authority of the joint base commanders and burden them while implementing new organizational structures, which would unnecessarily risk negative impacts to mission support when operational effectiveness of the bases is paramount. Moreover, DOD stated that the department should continue its approach of being patient with obtaining savings and efficiencies at joint bases because this approach is working. DOD cited two examples of cost savings achieved through personnel cuts in fiscal years 2012 and 2013: the Air Force reduced civilian positions for all the joint bases for which it is the lead, and the Navy chose not to fill all of its civilian vacancies. Finally, DOD stated that the creation of the joint bases from separate installations is equivalent to the merger of corporations with very different financial systems, management structures, operating procedures, and cultures. DOD has decided it is important to empower each joint base commander to design, implement, and adapt cost-efficient and effective approaches to their unique situations while adopting new and cross-cutting business practices, thereby making them incubators of innovation. Therefore, DOD has decided to allow for an extended transition period and defer near-term savings. We acknowledge that establishing joint basing is a complex undertaking, but DOD's current position of taking a patient approach and deliberately deferring near-term savings contradicts the position it took when requesting the BRAC Commission to approve its joint basing recommendation. Specifically, in its justification to the Commission (published in our report as appendix II), DOD stated that joint basing would produce savings exceeding the cost of implementation immediately. 
Moreover, as our report clearly points out, DOD projected 20-year net present value savings of over $2.3 billion, although the current 20-year net present value savings estimate is now about $249 million—a decrease of about 90 percent. DOD also asserted that it is achieving savings, as shown by the Air Force and Navy manpower reductions at the joint bases. However, these cuts were not the result of a purposeful effort to pare unnecessary management personnel due to the implementation of joint basing. Air Force and Navy documents and interviews with officials from these services indicate that the joint bases' memoranda of agreement show increases in budget and civilian manpower required as a result of joint basing. Any reductions in civilian positions at the joint bases through attrition or leaving unfilled positions open are attributable to general service-wide initiatives and reductions and not joint basing efficiencies. The Secretary of Defense's justification to the BRAC Commission requesting approval of the joint basing recommendation stated that "there is a significant opportunity to reduce duplication of efforts with resulting reductions of overall manpower and facilities requirements capable of generating savings." We continue to believe that DOD's justification for joint basing—the realization of savings—is attainable by developing guidance and encouraging appropriate practices, goals, and time frames. Therefore, we continue to believe our recommendation is warranted. DOD partially concurred with our second recommendation, to continue to develop and refine the Cost Performance and Visibility Framework in order to (1) eliminate data reliability problems, (2) facilitate comparisons of joint basing costs with the cost of operating the separate installations prior to implementing joint basing, and (3) identify and isolate the costs and savings resulting from actions and initiatives specifically resulting from joint basing and excluding DOD- or service-wide actions and initiatives. DOD stated that its Cost Performance and Visibility Framework already provides a method to collect quarterly data on performance towards the Common Output Level Standards, annual data on personnel assigned, and funds obligated for each joint base. However, DOD also acknowledged that there were inconsistencies in the current data captured in the Framework and that DOD is working through and improving its data reliability. DOD stated that it invested considerable effort to clarify this data and expected to have sufficient data to begin assessing joint base efficiencies by the end of fiscal year 2012. It stated that it would then be able to compare the current fiscal year financial and performance data to the baseline and previous year's obligations. DOD also stated that it could perform an additional analysis to compare the joint bases' baseline data with the costs of operating the separate installations prior to implementing joint basing because this information is included in annex U of each joint base's memorandum of agreement. However, DOD also acknowledged that this comparison still would not be able to identify cost savings resulting solely from joint basing and asserted that it is impractical to isolate and distinguish joint basing cost savings from the savings that result from DOD- or service-wide actions using the data contained in its Framework. 
Furthermore, DOD pointed out that it did not believe that accounting systems are designed to track savings; rather, they are designed to track expenses and disbursements, which DOD stated in its comments is what we concluded in a 1997 report. We also recognize that the Cost Performance and Visibility Framework represents a good start on the development of a system to measure joint basing performance. However, as it was being used at the time of our review, and as we clearly state in the report, it was not adequate to reliably identify any savings. First, DOD's proposed analysis of comparing current operating costs to the baseline would not result in an accurate assessment of savings from the joint bases because DOD has included in the baseline the higher costs of implementing the higher joint basing standards, such as expected increases in personnel and higher utility rates. The baseline would not accurately reflect the cost of the standalone bases prior to the joint basing initiative. Therefore, while this analysis might show some bases spending less than the inflated baseline, it would not show whether they are spending less than what they spent as standalone bases. Second, DOD's proposed analysis to compare the current cost of joint basing documented in its framework to the cost of standalone bases as captured in annex U of the memoranda of agreement as currently planned would also produce inaccurate results. As DOD stated, this analysis would not be able to isolate any savings specific to joint basing since some savings have been made that are not directly attributable to joint basing, such as the general service personnel reductions. Third, the annexes U of the memoranda of agreement do not consistently and clearly show the costs of operations of each base prior to joint basing and the respective transfers of funds between the services, rendering them unreliable for this analysis. Finally, we agree with DOD's statement that our 1997 report concluded that the department's accounting systems are not designed to track savings. However, it is for this reason that we also concluded in our 1997 report that "the absence of efforts to update projected savings indicates the need for additional guidance and emphasis from DOD on accumulating and updating savings data on a comprehensive and consistent basis," and we recommended as much at that time. As we believed in 1997 and continue to believe, DOD needs to improve its ability to update savings from BRAC recommendations. Refinements to the Cost Performance and Visibility Framework would position the department to effectively measure savings from joint basing, and therefore the need for our recommendation remains. DOD partially concurred with our third and fourth recommendations—to compile a comprehensive list of common standards needing clarification and to prioritize the review and potentially revise those standards within that list, respectively—and stated that there is already a quarterly feedback process on the joint base common standards and an annual review process that incorporates input from the joint bases. Specifically, DOD stated that standards may need changing as priorities change and missions evolve, but that the current process strikes an appropriate balance between the analytical burden of repeated reviews and the need for clarity and refinement. DOD also stated that it believes that reviewing all the standards simultaneously does not allow for the depth of analysis required to make sound decisions. 
DOD suggested that GAO should conduct a qualitative assessment of the standards because our findings on the need to revise its process for reviewing and clarifying its standards appear to be based on an anecdotal assessment. While we agree with DOD that the standards need to be continually reviewed and adjusted as priorities and missions change, we found ample evidence that the individuals who report on the joint bases' ability to meet the current standards believe some of the standards need clarification now, and that in many instances, these officials believe it is unclear what some of the standards are measuring. It is important to note that nothing in our recommendation requires DOD to review all the standards simultaneously. To the contrary, our recommendation specifically states that DOD should compile a list of standards needing clarification. In fact, because DOD has not issued any guidance to prioritize the standards, joint bases continue to report on and provide resources toward reporting on all the standards, whether they are problematic or not. Lastly, DOD stated that it believed our evidence was based on an anecdotal assessment. We disagree. We conducted a comprehensive qualitative review of over 59,359 comments entered into the Cost Performance and Visibility Framework from fiscal years 2009 through 2011 and categorized them into broad themes of issues raised by the bases in reference to the Common Output Level Standards. As shown in figure 2 of our report, the need for clarity of the Common Output Level Standards was raised over 200 times by the joint bases during this time frame. However, because DOD's data are not adequate to permit us to specifically identify what types of clarification problems were being encountered by the bases, we supplemented our analyses with follow-up interviews to provide anecdotal examples that added some context to our analyses and described a few of the types of problems encountered. Moreover, our data suggested that DOD's quarterly process had proven ineffective at addressing the need for clarification and review of problematic standards, since some standards continue to be problematic despite the quarterly reviews, which DOD asserts are working. For these reasons, we continue to believe that improvements are needed in DOD's current process for reviewing and clarifying the common standards to address the bases' reported concerns. DOD partially concurred with our fifth recommendation, to develop a common strategy that expands routine communication between the joint bases, and between the joint bases and OSD, to encourage joint resolution of common challenges and sharing of best practices and lessons learned. DOD stated that it believed there are already mechanisms in place to facilitate routine communication between the joint bases, as well as between OSD and the joint bases, and that it is increasing those opportunities. DOD listed the various opportunities it has for sharing joint basing information, all of which we are aware of:

• The military services have routine communication with the joint bases and are the lead to encourage joint resolution of common challenges and sharing of best practices.

• DOD chairs a working group twice a month where headquarters service representatives offer information and ideas generated during internal service meetings with joint bases.

• Best practices from the bases are shared in a periodic newsletter. 
• OSD and the military services conduct joint base site visits each year to capture any opportunities for improvement and host an annual management review meeting with the joint base commanders.

While we recognize that DOD has facilitated communication of lessons learned and best practices, as we note in our report, because different services have the lead role in providing support services at different joint bases, best practices are not necessarily shared with all the bases across the services. DOD's joint basing policy states that problems at the joint bases should be identified and addressed at the lowest possible level, which may include only officials at a given joint base. Thus, the majority of these issues may not be elevated to the working group but may still occur at multiple joint bases, leading to duplication of effort in resolving common problems experienced in multiple locations. Moreover, those issues that are not elevated to the working group may never be relayed to other joint bases since there is no explicit policy or process in place to do so. The newsletters, which we discuss in our report, convey only a limited number of best practices and exclude problems and solutions identified in the course of implementing joint basing. Additionally, contributions to the newsletters are not required and are not always comprehensive. Moreover, the contributions tend to highlight best practices, which is helpful, but exclude unsolved challenges that, if shared, could result in the bases jointly resolving problems or elevating them to more senior leadership when needed. As a result, some joint base officials told us that they found the newsletters to be of limited usefulness. For these reasons, we continue to believe that the joint bases could benefit from routine communication that allows them to share identified challenges and possible solutions, rather than having such communication occur only sporadically or be filtered through the higher levels of the oversight structure put in place by OSD. DOD partially concurred with our sixth recommendation, to develop guidance that would ensure all joint bases develop and provide training materials to incoming joint base personnel. DOD stated that the department will ensure each of the services is providing training materials to incoming personnel; however, joint base commanders need flexibility to tailor training to the needs of their installation. We agree that the commander of each joint base needs the flexibility to provide joint base-specific training. The intent of our recommendation is that, in addition to establishing a requirement that joint bases develop training guidance and ensure training occurs at each base, OSD's guidance should encourage the sharing of training materials across bases to reduce duplication of effort, promote commonality where appropriate, and provide a means of potentially sharing best practices. Our recommendation was not intended to require standardized training at each location. Therefore, we continue to believe that OSD-level guidance for joint bases to develop and provide training to incoming personnel is necessary to help the joint bases facilitate the provision of services on the bases and may provide a way to reduce duplication of effort and more effectively share information. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Deputy Under Secretary of Defense (Installations and Environment); the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

In order to assess the extent to which the Department of Defense (DOD) developed and implemented a plan to achieve cost savings and efficiencies at the joint bases and tracked the costs, savings, and efficiencies resulting from joint basing, we analyzed DOD guidance related to joint base implementation, specifically looking for any measures or reporting processes on efficiencies and cost savings. We also reviewed our prior findings on key practices and implementation steps for mergers and organizational transformations. We interviewed DOD officials at the service headquarters and the Office of the Secretary of Defense (OSD) to obtain information about cost savings, joint basing budget data, and guidance related to cost savings and efficiencies. We also interviewed joint basing officials at three joint bases and obtained answers to written questions from the remaining nine joint bases that we did not visit in person to obtain information on actual cost savings and efficiencies achieved and guidance and communication related to cost savings and efficiencies. We selected a nonprobability sample of three site visit locations based on the following factors: (1) we chose to visit one base where each military department (Army, Air Force, and Navy) had the lead responsibility for providing installation support, (2) we considered geographic diversity, (3) we chose to visit at least one base that we did not visit for our 2009 joint basing report, (4) we selected at least one joint base from each of the two phases of joint base implementation, and (5) we chose joint bases where the installations that had been combined into the joint base were directly adjacent to each other. Based on these factors, we chose to visit Joint Base McGuire-Dix-Lakehurst, Joint Base Lewis-McChord, and Joint Base Pearl Harbor-Hickam.

To evaluate the extent to which joint base common standards have provided a common framework for defining and reporting installation support services, we reviewed DOD policy and guidance related to the common standards; the standards themselves, including both functional areas and specific standards; and federal internal control standards and key elements of successful performance measures. We also reviewed the joint bases' reporting on the joint base common standards for fiscal years 2010 and 2011. To determine the degree to which the standards were achieved, we analyzed the data to determine how many standards were met, not met, or determined to be not applicable. We conducted a content analysis of the comments accompanying the standards reporting from fiscal years 2010 to 2011 to identify concerns regarding the various standards. 
In conducting this content analysis, we reviewed comments accompanying all reported standards, including those reported as met, not met, and not applicable. Using this analysis, we identified the most frequent reasons the joint bases provided for not meeting the standards, as well as challenges the joint bases faced in implementing and reporting on various standards. To conduct the content analysis, two analysts individually coded all comments accompanying the standards reporting into one of the 17 categories listed in table 2. After the comments were coded, a third analyst adjudicated any differences between the coding of the first two analysts.

Appendix II: BRAC Commission Recommendation on Joint Basing (Including Elements of DOD's Recommendation to the Commission)

The joint base common standards developed by DOD for use by the joint bases in managing and reporting on installation support services are grouped into 48 functional areas of installation support. Table 3 shows the 48 functional areas.

In addition to the contact named above, Laura Durland, Assistant Director; Jameal Addison; Grace Coleman; Chaneé Gaskin; Simon Hirschfeld; Gina Hoffman; Charles Perdue; Michael Silver; and Michael Willems made key contributions to this report.

Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012. Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011. Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011. GAO's 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011. Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010. Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010. Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009. Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009. Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009. Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009. Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009. 
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008. Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008. Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008. Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008. Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008. Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007. Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007. Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007. Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007. Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007. Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007. Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007. Military Base Closures: Projected Savings from Fleet Readiness Centers Are Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007. Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007. Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007. Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005. Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005. Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005. Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005. Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004. 
Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004. Defense Infrastructure: Long-term Challenges in Managing the Military Construction Program. GAO-04-288. Washington, D.C.: February 24, 2004. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Military Base Closures: Better Planning Needed for Future Reserve Enclaves. GAO-03-723. Washington, D.C.: June 27, 2003. Defense Infrastructure: Changes in Funding Priorities and Management Processes Needed to Improve Condition and Reduce Costs of Guard and Reserve Facilities. GAO-03-516. Washington, D.C.: May 15, 2003. Defense Infrastructure: Changes in Funding Priorities and Strategic Planning Needed to Improve the Condition of Military Facilities. GAO-03-274. Washington, D.C.: February 19, 2003. Defense Infrastructure: Greater Management Emphasis Needed to Increase the Services’ Use of Expanded Leasing Authority. GAO-02-475. Washington, D.C.: June 6, 2002. Military Base Closures: Progress in Completing Actions from Prior Realignments and Closures. GAO-02-433. Washington, D.C.: April 5, 2002. Military Base Closures: Overview of Economic Recovery, Property Transfer, and Environmental Cleanup. GAO-01-1054T. Washington, D.C.: August 28, 2001. Military Base Closures: DOD’s Updated Net Savings Estimate Remains Substantial. GAO-01-971. Washington, D.C.: July 31, 2001. Military Base Closures: Lack of Data Inhibits Cost-Effectiveness of Analyses of Privatization-in Place Initiatives. GAO/NSIAD-00-23. Washington, D.C.: December 20, 1999. Military Bases: Status of Prior Base Realignment and Closure Rounds. GAO/NSIAD-99-36. Washington, D.C.: December 11, 1998. Military Bases: Review of DOD’s 1998 Report on Base Realignment and Closure. GAO/NSIAD-99-17. Washington, D.C.: November 13, 1998. Navy Depot Maintenance: Privatizing Louisville Operations in Place Is Not Cost-Effective. GAO/NSIAD-97-52. Washington, D.C.: July 31, 1997. Military Bases: Lessons Learned From Prior Base Closure Rounds. GAO/NSIAD-97-151. Washington, D.C.: July 25, 1997. Military Base Closures: Reducing High Costs of Environmental Cleanup Requires Difficult Choices. GAO/NSIAD-96-172. Washington, D.C.: September 5, 1996. Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified. GAO/NSIAD-96-67. Washington, D.C.: April 8, 1996. Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment. GAO/NSIAD-95-133. Washington, D.C.: April 14, 1995. Military Bases: Analysis of DOD’s Recommendations and Selection Process for Closures and Realignments. GAO/NSIAD-93-173. Washington, D.C.: April 15, 1993. Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments. GAO/NSIAD-91-224. Washington, D.C.: May 15, 1991. Military Bases: An Analysis of the Commission’s Realignment and Closure Recommendations. GAO/NSIAD-90-42. Washington, D.C.: November 29, 1989.
GAO has designated DOD support infrastructure as an area of high risk and included one key related category, installation support, as an area for potential savings. In 2005, DOD recommended to the Base Realignment and Closure Commission combining 26 installations into 12 joint bases to generate efficiencies and cost savings and, in 2010, completed this consolidation. GAO assessed the extent to which (1) DOD developed and implemented a plan to achieve cost savings and efficiencies at the joint bases, (2) joint base common standards provide a common framework to manage and plan for installation support services, and (3) DOD has a process to consistently identify and address any implementation challenges. GAO reviewed DOD policies and guidance on joint basing, visited 3 joint bases and obtained answers to written questions from the other 9, interviewed OSD and military service officials, and analyzed performance data on joint base support services.

The Office of the Secretary of Defense (OSD) has not developed or implemented a plan to guide joint bases in achieving cost savings and efficiencies. The Department of Defense (DOD) originally estimated saving $2.3 billion from joint basing over 20 years, but in the absence of a plan to drive savings, that estimate has fallen by almost 90 percent. OSD also does not yet have a fully developed method for accurately collecting information on costs, savings, and efficiencies achieved specifically from joint basing. GAO previously reported that organizational transformations such as merging components and transforming organizational cultures should be driven by top leadership, have implementation goals and a timeline to show progress, and include a communication strategy. Although the joint bases anecdotally reported achieving some savings and efficiencies, without an implementation plan to drive savings and a means to collect reliable information on the specific costs, estimated savings, and efficiencies from joint basing, DOD will not be able to facilitate achievement of the goals of cost savings and efficiencies, track the extent to which these goals have been achieved, or evaluate the continuation or expansion of joint basing.

The joint bases implemented common standards for installation support services developed by OSD, and in fiscal years 2010 and 2011 reported meeting the standards more than 70 percent of the time. However, three factors limited the usefulness of the reported standards as a common tool for managing installation support services: the lack of clarity in some standards, unclear standards that were not reviewed and changed in a timely manner, and data collection and reporting on the standards that in some cases adhered to individual service standards rather than the common standard. DOD guidance states that the purpose of the joint base common standards framework was to provide a common language to serve as a basis for planning and management across the joint bases, and GAO previously reported that performance measures should be clear and follow standard procedures. Without a consistent interpretation and reported use of the standards, OSD and the joint bases will not have reliable or comparable data with which to assess their service support levels. 
OSD and the joint bases have various mechanisms in place to address challenges in achieving joint basing goals, such as a joint management oversight structure and annual OSD-joint base review meetings, but none of these routinely facilitates communication among the joint bases to identify solutions to common challenges. The reported challenges cover a wide range of issues, from different expectations among military services as to how base support services should be provided to incompatible information technology networks. However, the absence of a formal method to routinely share information on common challenges and possible solutions, or guidance on developing and providing training for new personnel on how joint bases provide installation support, means DOD is likely to miss opportunities to develop common solutions to common challenges. Federal internal control standards state that for an entity to control its operations, it must have relevant and timely communications, and information is needed throughout the agency to achieve objectives. In addition, without processes to identify common challenges and share information across the joint bases, DOD may miss opportunities for greater efficiencies and be unable to provide uniform policies across the joint bases. GAO recommends that DOD take six actions, such as developing a plan to achieve cost savings, prioritizing review and revision of unclear common standards, and developing a strategy to share solutions to common challenges. DOD partially agreed with five recommendations and did not concur with the recommendation to develop a plan to achieve cost savings, because it stated that such goals are not appropriate at this time. GAO continues to believe that the recommendations are valid as discussed further in the report.
Acquisition of products and services from contractors consumes about a quarter of discretionary spending governmentwide, with services making up roughly 60 percent. These services range from basic functions, such as landscaping and janitorial services, to those that are more complex, such as intelligence analysis, acquisition support, security services, and program office support. The acquisition of services differs from that of products in several key respects and can be particularly challenging in terms of defining requirements and assessing contractor performance. DOD is by far the largest federal purchaser of service contracts—ranging from housing to intelligence to security. Contractors can play an important part in helping agencies accomplish their missions. For example, agencies use service contracts to acquire special knowledge and skills not available in the government, obtain cost-effective services, or obtain temporary or intermittent services. The congressionally mandated Acquisition Advisory Panel has cited a number of developments that have led federal agencies to increase the use of contractors as service providers: limitations on the number of authorized full-time equivalent positions; unavailability of certain capabilities and expertise among federal employees; desire for operational flexibility; and the need for "surge" capacity. According to DOD and service officials, several factors have contributed to the department's increased use of contractors for support services: (1) the increased requirements associated with the Global War on Terrorism and other contingencies; (2) policy to rely on the private sector for needed commercial services that are not inherently governmental in nature; and (3) DOD initiatives, such as competitive sourcing and utility privatization programs. The Office of Management and Budget (OMB), procurement law, and the Federal Acquisition Regulation (FAR) provide guidance on contracting for services. OMB Circular A-76 details a process for federal agencies to obtain from the private sector commercially available services currently performed by government employees when it is cost-effective to do so. The Circular reinforces that government personnel shall perform inherently governmental activities. This process does not apply to private sector performance of a new requirement, expanded activity, or continued performance of a commercial activity. As such, this process effectively applies to a small percentage of the government's contracting activity. Most of the growth in service contracting has occurred outside of the A-76 process. The Federal Activities Inventory Reform (FAIR) Act of 1998 further requires agencies annually to determine and list which government-provided agency activities are not inherently governmental functions. Federal procurement regulation states that functions so intimately related to the public interest as to mandate performance by government personnel are considered inherently governmental. These functions include activities that require either the exercise of discretion in applying government authority or the use of value judgment in making decisions for the government, and they should not be performed by contractors. The FAR and OMB also require agencies to provide greater scrutiny and management oversight when contracting for services that closely support the performance of inherently governmental functions. 
The closer contractor services come to supporting inherently governmental functions, the greater the risk of their influencing the government's control over and accountability for decisions that may be based, in part, on contractor work. This may result in decisions that are not in the best interest of the government, and may increase vulnerability to waste, fraud, and abuse. Before I go into more detail on the issues surrounding the federal government's and DOD's reliance on contractors, I would like to touch on another subject of interest to the Subcommittee—DOD's application of enhanced use leases. DOD's longstanding leasing authority is codified at 10 U.S.C. 2667. The law provides general authority for the Secretary of a military department to enter into a lease upon such terms as he considers will promote the national defense or be in the public interest. The Secretary of a military department is authorized to lease real property for up to five years unless the Secretary determines that a lease for a longer period will promote the national defense or be in the public interest. Over time, Congress has expanded DOD's leasing authority several times, including to provide a lessee the first right to buy the property and to provide for payment in cash or in kind by the lessee of consideration in an amount not less than the fair market value. Most recently, the National Defense Authorization Act for Fiscal Year 2008 amended 10 U.S.C. 2667 in several ways; for example, the authority to accept facilities operation support as in-kind consideration was eliminated, and a requirement that leases meeting certain criteria be competitively awarded was added. The services have leased real property on their bases for years as a means to reduce infrastructure and base operating costs. For example, the military services have leased space for banks, credit unions, ATMs, storage, schools, and agricultural grazing. As you know, Mr. Chairman, we are conducting a review of DOD's land use planning activities, and will have more to say on this issue later. While there are benefits to using contractors to perform services for the government—such as increased flexibility in fulfilling immediate needs—GAO and others have raised concerns about the increasing reliance on contractors to perform agency missions. Our work shows that agencies face challenges with increased reliance on contractors to perform core agency missions, especially in contingency or emergency situations or in cases where sufficient government personnel are not available. As I have previously stated, prior to making decisions to use contractors, agency officials should focus greater attention on which functions and activities should be contracted out and which should not. To guide this approach, agencies need to consider developing a total workforce strategy to meet current and future human capital needs, and address the extent of contractor use and the appropriate mix of contractor and civilian and military personnel. I have also noted that identifying and distinguishing the responsibilities of contractors and civilian and military personnel are critical to ensure contractor roles are appropriate. Finally, once contractors are in place, agencies must ensure appropriate oversight of contractors, including addressing risks, ethics concerns, and surveillance needs. 
In order to determine what functions and activities can be contracted out, the FAIR Act requires agencies annually to identify government-performed agency activities that are not inherently governmental functions. At GAO’s 2006 forum on federal acquisition challenges and opportunities, some participants noted that it might be more appropriate for agencies to develop guiding principles or values to determine which positions could be contracted out and which should be performed in-house. Forum participants further noted that many corporate organizations carefully deliberate up-front and at the highest management levels about what core functions they need to retain and what non-core functions they should buy, and the skill sets needed to procure non-core functions. DOD’s Panel on Contracting Integrity, in its 2007 report to Congress, noted that the practice of using contractors to support the government acquisition function merits further study because it gives rise to questions regarding the appropriate designation of government versus nongovernment functions. A November 2005 report by the Defense Acquisition University warned that the government must be careful when contracting for the acquisition support function to ensure that the government retains thorough control of policy and management decisions and that contracting for the acquisition support function does not inappropriately restrict agency management in its ability to develop and consider options. Additionally, our prior work has found that when federal agencies, including DOD, believe they do not have the in-house capability to design, develop, and manage complex acquisitions, they sometimes turn to a systems integrator to carry out these functions, creating an inherent risk of relying too much on contractors to make program decisions. For example, the Army’s Future Combat System program is managed by a lead systems integrator that assumes the responsibilities of developing requirements; selecting major system and subsystem contractors; and making trade-off decisions among costs, schedules, and capabilities. While this management approach has some advantages for DOD, we found that the extent of contractor responsibility makes DOD vulnerable to decisions being made by the contractor that are not in the government’s best interests. In September 2007, we reported that an increasing reliance on contractors to perform services for core government activities challenges the capacity of federal officials to supervise and evaluate the performance of these activities. I recently noted that this may be a concern in the intelligence community. Specifically, while direction and control of intelligence and counter-intelligence operations are listed as inherently governmental functions, the Director of National Intelligence reported in 2006 that the intelligence community finds itself in competition with its contractors for employees and is left with no choice but to use contractors for work that may be “borderline inherently governmental.” We have also found problems with contractors having too much control at other federal agencies. Unless the federal government pays the needed attention to the types of functions and activities performed by contractors, agencies run the risk of losing accountability and control over mission-related decisions. 
Along with determining the functions and activities to be contracted out, agencies face challenges in developing a total workforce strategy to address the extent of contractor use and the appropriate mix of contractor and civilian and military personnel. We have found that agencies need appropriate workforce planning strategies that include contractor as well as federal personnel and are linked to current and future human capital needs. These strategies should be linked to the knowledge, skills, and abilities needed by agencies and how the workforce will be deployed across the organization. Deployment includes the flexible use of the workforce, such as putting the right employees in the right roles according to their skills, and relying on staff drawn from various organizational components and functions using “just-in-time” or “virtual” teams to focus the right talent on specific tasks. As agencies develop their workforce strategies, they also need to consider the extent to which contractors should be used and the appropriate mix of contractor and federal personnel. Over the past several years, there has been increasing concern about the ability of agencies to ensure sufficient numbers of staff to perform some inherently governmental functions. The Department of Homeland Security’s human capital strategic plan notes the department has identified core mission-critical occupations and plans to reduce skill gaps in core and key competencies. However, it is unclear how this will be achieved and whether it will inform the department’s use of contractors for services that closely support inherently governmental functions. The Department of Homeland Security has agreed with the need to establish strategic-level guidance for determining the appropriate mix of government and contractor employees to meet mission needs. Agencies are challenged to define the roles and responsibilities of contractors vis-à-vis government employees. Defining the relationship between contractors and government employees is particularly important when contracting for professional and management support services since contractors often work closely with government employees to provide these services. This definition begins during the acquisition planning process when contract requirements are determined. We have recommended that agencies define contract requirements to clearly describe roles, responsibilities, and limitations of selected contractor services. Well-defined contract requirements can also help minimize the risk of contractors performing inherently governmental functions. Yet contracts, especially service contracts, often do not have definitive or realistic requirements at the outset. Because the nature of contracted services can vary widely, from building maintenance to intelligence, a tailored approach should be used in defining requirements to help ensure that risks associated with a requirement are fully considered before entering into a contract arrangement. In our recent review of the Department of Homeland Security’s service contracts, we found that some contracts included requirements that were broadly defined and lacked detail about activities that closely support inherently governmental functions. We found instances in which contractors provided services that were integral to the department’s mission or comparable to work performed by government employees, such as a contractor directly supporting the department’s efforts to hire federal employees, including signing offer letters. 
Our work on contractors in acquisition support functions has found that it is now commonplace for agencies to use contractors to perform activities historically performed by federal government contract specialists. Although these contractors are not authorized to obligate government funds, they provide acquisition support to contracting officers, the federal decision makers who have the authority to bind the government contractually. Contract specialists perform tasks that closely support inherently governmental functions, such as assisting in preparing statements of work; developing and managing acquisition plans; and preparing the documents the contracting officer signs, such as contracts, solicitations, and contract modifications. Therefore, it is important to clearly define the roles contractors play in supporting government personnel to ensure they do not perform inherently governmental functions. Our work has also identified a number of practices that are important to effectively managing and overseeing contractors once contractors are in place. These include assessing risks, minimizing potential ethics concerns, and ensuring quality through adequate surveillance. However, agencies face challenges in all these areas. Risk is inherent when contractors closely support inherently governmental functions. Federal procurement policy requires enhanced oversight of services that closely support the performance of inherently governmental functions to ensure that government decisions reflect the independent judgment of agency officials and that agency officials retain control over and remain accountable for policy decisions that may be based on contractor work products. However, our work has shown that agency officials do not always assess these risks to government decision making. For example, in 2007 we reported that while Department of Homeland Security program officials generally acknowledged that their professional and management support services contracts closely supported the performance of inherently governmental functions, they did not assess the risk that government decisions may be influenced by, rather than independent from, contractor judgments. Further, most of the program officials and contracting officers we spoke with were not aware of the requirement to provide enhanced oversight, and did not believe that their professional and management support services needed enhanced oversight. Contractors are generally not subject to the same ethics rules as government employees even when they are co-located and work side by side with federal employees and perform similar functions. Federal ethics rules and standards have been put in place to help safeguard the integrity of the procurement process by mitigating the risk that employees entrusted to act in the best interest of the government will use their positions to influence the outcomes of contract awards for future gain. In addition, as we reported in 2005, contractors we met with indicated that DOD did not monitor their recruiting, hiring, and placement practices for current and former government employees. Consequently, DOD could not be assured that potential conflicts of interest would be identified. A lack of awareness among government employees of procurement integrity rules and conflict-of-interest considerations creates additional risk. 
For example, in 2005 we reported that DOD did not know the content or frequency of ethics training and counseling or which employees received information on conflicts of interest and procurement integrity. DOD also lacked knowledge of reported allegations of potential misconduct. In 2007, the Acquisition Advisory Panel recommended training for contractors and government employees, and the development of standard conflict-of-interest clauses to include in solicitations and contracts. Quality assurance, especially regular surveillance and documentation of its results, is essential to determine whether goods or services provided by the contractor satisfy the contract requirements and to minimize risks that the government will pay the contractor more than the value of the goods and services. However, DOD officials have expressed concerns about the capacity of the current acquisition workforce to support surveillance and mentioned that surveillance remains an "other duty as assigned" and, consequently, is a low-priority task. We have also reported wide discrepancies in the rigor with which officials responsible for surveillance perform their duties, particularly in unstable environments. For example, in the aftermath of Hurricanes Katrina and Rita, the government personnel monitoring contracts were not always sufficient in number or adequately deployed to provide effective oversight. Unfortunately, attention to oversight has not always been evident in a number of instances, including during the Iraq reconstruction effort. We have reported that, particularly in the early phases of the Iraq reconstruction effort, several agencies including the Army lacked an adequate acquisition workforce in Iraq to oversee billions of dollars for which they were responsible. Further, Army personnel who were responsible for overseeing contractor performance of interrogation and other services were not adequately trained to properly exercise their responsibilities. Contractor employees were stationed in various locations around Iraq, with no assigned representative on site to monitor their work. An Army investigative report concluded that the number and training of officials assigned to monitor contractor performance at Abu Ghraib prison were not sufficient and put the Army at risk of being unaware of possible misconduct by contractor personnel. DOD's increasing use of contractors to perform mission-support functions, including contractors who support forces deployed for military operations and contractors who perform maintenance and other logistic support for weapon systems, has highlighted several challenges that DOD faces in managing the increased role of this component of its total force. With regard to contractor support to deployed forces, DOD's primary challenges have been to provide effective management and oversight. With respect to weapon system support, the challenges have been to resolve questions about how much depot maintenance and other logistic work needs to be performed in-house and the extent to which outsourcing for DOD logistics has been cost-effective. Since 1997, we have reported on DOD's management and oversight challenges related to its use of contractor support to deployed forces. In December 2006, we issued a comprehensive review of DOD's management and oversight of contractor support to deployed forces. We reported that despite making progress in some areas, DOD continued to face long-standing problems that hindered its management and oversight of contractors at deployed locations. 
Those problems included issues regarding visibility of contractors, numbers of contract oversight personnel, lessons learned, and training of military commanders and contract oversight personnel. More recently, we testified that DOD’s leadership needs to ensure implementation of and compliance with guidance on the use of contractors to support deployed forces. While DOD has long relied on contractors to support forces deployed for military operations, the large influx of contractors in support of operations in Iraq has exacerbated problems that DOD has had in managing and overseeing their activities. Significantly, the individual services and a wide array of DOD and non-DOD agencies can award contracts to support deployed forces. For example, although DOD estimated that as of the first quarter of fiscal year 2008, 163,590 contractors were supporting deployed forces in Iraq, no one person or organization made a decision to send 163,590 contractors to Iraq. Rather, decisions to send contractors to support forces in Iraq were made by numerous DOD activities both within and outside of Iraq. This decentralized process, combined with the scope and scale of contract support to deployed forces, contributes to the complexity of the problems we have identified in our past work on this topic. DOD has taken a number of actions to implement recommendations that we have made to improve its management of contractors. For example, in response to our 2003 recommendation that DOD develop comprehensive guidance to help the services manage contractors supporting deployed forces, the department issued the first comprehensive guidance dealing with contractors who support deployed forces in October 2005. Additionally, in October 2006, DOD established the office of the Assistant Deputy Under Secretary of Defense for Program Support to serve as the office with primary responsibility for contractor support issues. This office has led the effort to develop and implement a database which, when fully implemented, will allow by-name accountability of contractors who deploy with the force. This database implements recommendations we made in 2003 and 2006 to enhance the department’s visibility over contractors in locations such as Iraq and Afghanistan. Although DOD has taken these and other steps to address these issues, we recently testified that many of these issues remain a concern and additional actions are needed. As we have noted in previous reports and testimonies, DOD has not followed long-standing planning guidance, particularly by not adequately factoring the use and role of contractors into its planning. For example, we noted in 2003 that the operations plan for the war in Iraq contained only limited information on contractor support. However, Joint Publication 4-0, which provides doctrine and guidance for combatant commanders and their components regarding the planning and execution of logistic support of joint operations, stresses the importance of fully integrating into logistics plans and orders the logistics functions performed by contractors along with those performed by military personnel and government civilians. Additionally, we reported in 2004 that the Army did not follow its planning guidance when deciding to use the Army’s Logistics Civil Augmentation Program (LOGCAP) in Iraq. This guidance stresses the need to clearly identify requirements and develop a comprehensive statement of work early in the contingency planning process. 
Because this Army guidance was not followed, the plan to support the troops in Iraq was not comprehensive and was revised seven times in less than 1 year. Our 2003 report also concluded that essential contractor services had not been identified and backup planning was not being done. DOD policy requires DOD and its components to determine which contractor-provided services will be essential during crisis situations, develop and implement plans and procedures to provide a reasonable assurance of the continuation of essential services during crisis situations, and prepare a contingency plan for obtaining the essential service from an alternate source should the contractor be unable to provide it. Without such plans, there is no assurance that the personnel needed to provide the essential services would be available when needed. Moreover, as we reported in 2003 and 2006, senior leaders and military commanders need information about the contractor services they are relying on in order to incorporate contractor support into their planning. For example, senior military commanders in Iraq told us that when they began to develop a base consolidation plan for Iraq, they had no source to draw upon to determine how many contractors were on each installation. Limited visibility can also hinder the ability of commanders to make informed decisions about base operations support (e.g., food and housing) and force protection for all personnel on an installation. DOD has taken some action to address this problem. DOD is developing a database of contractors who deploy with U.S. forces. According to senior DOD officials familiar with this database, as of February 2008, the database had about 80,000 records. DOD is working with the State Department to include additional contractors, including private security contractors, in the database. In addition, Joint Contracting Command Iraq/Afghanistan has created the Theater Business Clearance process that reviews and approves all contracts for work in Iraq or Afghanistan. Joint Contracting Command Iraq/Afghanistan officials stated that this has helped military commanders know ahead of time when contractors are coming to work on their bases and ensure sufficient facilities are available for them. According to senior DOD officials, the department is also developing a cadre of contracting planners to ensure that contractor support is included in combatant commanders’ operational and contingency planning. As we noted in several of our previous reports, having the right people with the right skills to oversee contractor performance is crucial to ensuring that DOD receives the best value for the billions of dollars spent each year on contractor-provided services supporting forces deployed to Iraq and elsewhere. However, since 1992, we have designated DOD contract management as a high-risk area, in part due to concerns over the adequacy of the department’s acquisition workforce, including contract oversight personnel. While this is a DOD-wide problem, having too few contract oversight personnel presents unique difficulties at deployed locations given the more demanding contracting environment as compared to the United States. Having an inadequate number of contract oversight personnel has hindered DOD’s ability to effectively manage and oversee contractors supporting deployed forces and has had monetary impacts as well. 
For example, in 2004 we reported that DOD did not always have enough contract oversight personnel in place to manage and oversee its logistics support contracts such as LOGCAP and the Air Force Contract Augmentation Program (AFCAP). As a result, the Defense Contract Management Agency was unable to account for $2 million worth of tools that had been purchased using the AFCAP contract. During our 2006 review, several contract oversight personnel we met with told us DOD does not have adequate personnel at deployed locations. For example, a contracting officer’s representative for a linguistic support contract told us that although he had a battalion’s worth of people with a battalion’s worth of problems, he lacked the equivalent of a battalion’s staff to deal with those problems. Similarly, an official with the LOGCAP Program Office told us that, had adequate staffing been in place early, the Army could have realized substantial savings through more effective reviews of the increasing volume of LOGCAP requirements. More recently, we reported that the Army did not have adequate staff to oversee an equipment maintenance contract in Kuwait. According to Army officials, vacant and reduced inspector and analyst positions meant that surveillance was not being performed sufficiently in some areas and the Army was less able to perform data analyses, identify trends in contractor performance, and improve quality processes. In addition, the 2007 report of the Commission on Army Acquisition and Program Management in Expeditionary Operations stated that the Army lacks the leadership and military and civilian personnel to provide sufficient contracting support to either expeditionary or peacetime missions. As a result, the commission found that the vital task of post-award contract management is rarely being done. As we noted in our 2006 report, without adequate contract oversight personnel in place to monitor its many contracts in deployed locations such as Iraq, DOD may not be able to obtain reasonable assurance that contractors are meeting their contract requirements efficiently and effectively. DOD has taken some actions to address this problem. In February 2007, the Deputy Assistant Secretary of the Army (Policy and Procurement) issued guidance that required, among other things, contracting officers to appoint certified contracting officer’s representatives in writing before contract performance begins, identify properly trained contracting officer’s representatives for active service contracts, and ensure that a government quality assurance surveillance plan is prepared and implemented for service contracts exceeding $2,500. Joint Contracting Command Iraq/Afghanistan officials stated they are in the process of adding 39 personnel to provide additional contractor oversight. Similarly, the Defense Contract Management Agency has deployed an additional 100 people and plans to deploy approximately 150 more people to provide contract oversight and management to both ongoing and future contracts in Iraq. The agency is providing oversight for DOD’s private security contracts as well as other theaterwide contracts. Additionally, senior DOD officials stated that the department has created a task force to address the recommendations of the October 2007 report by the Commission on Army Acquisition and Program Management in Expeditionary Operations. 
Although DOD and its components have used contractors to support deployed forces in several prior military operations, DOD does not systematically ensure that institutional knowledge on the use of contractors to support deployed forces, including lessons learned and best practices, is shared with military personnel at deployed locations. We previously reported that DOD could benefit from systematically collecting and sharing its institutional knowledge to help ensure that it is factored into planning, work processes, and other activities. Although DOD has policy requiring the collection and distribution of lessons learned to the maximum extent possible, we found in our previous work that no procedures were in place to ensure that lessons learned are collected and shared. Moreover, although the Army regulation that establishes policies, responsibilities, and procedures for the implementation of the LOGCAP program makes customers that receive services under the LOGCAP contract responsible for collecting lessons learned, we have repeatedly found that DOD is not systematically collecting and sharing lessons learned on the use of contractors to support deployed forces. Despite years of experience using contractors to support forces deployed to the Balkans, Southwest Asia, Iraq, and Afghanistan, DOD has made few efforts to leverage this institutional knowledge. As a result, many of the problems we identified in earlier operations have recurred in current operations. During the course of our 2006 work, we found no organization within DOD or its components responsible for developing procedures to capture lessons learned on the use of contractor support at deployed locations. We noted that when lessons learned are not collected and shared, DOD and its components run the risk of repeating past mistakes and being unable to build on the efficiencies and effectiveness others have developed during past operations that involved contractor support. We also found a failure to share best practices and lessons learned between units as one redeploys and another deploys to replace it. As a result, new units essentially start at ground zero, having to resolve a number of difficulties until they understand contractor roles and responsibilities. DOD does not routinely incorporate information about contractor support for deployed forces in its pre-deployment training of military personnel, despite the long-standing recognition of the need to provide such information. We have discussed the need for better pre-deployment training of military commanders and contract oversight personnel since the mid-1990s and have made several recommendations aimed at improving such training. Moreover, according to DOD policy, personnel should receive timely and effective training to ensure they have the knowledge and other tools necessary to accomplish their missions. Nevertheless, we continue to find little evidence of improvement in how DOD and its components train military commanders and contract oversight personnel on the use of contractors to support deployed forces prior to their deployment. Without properly trained personnel, DOD will continue to face risks of fraud, waste, and abuse. Limited or no pre-deployment training on the use of contractor support can cause a variety of problems for military commanders in a deployed location.
As we reported in 2006, with limited or no pre-deployment training on the extent of contractor support to deployed forces, military commanders may not be able to adequately plan for the use of those contractors. Similarly, in its 2007 report, the Commission on Army Acquisition and Program Management in Expeditionary Operations concluded that the Army needs to educate and train commanders on the important operational role of contracting. Several military commanders we met with in 2006 said that their pre-deployment training did not provide them with sufficient information on the extent of contractor support they would be relying on in Iraq and that they were therefore surprised by the substantial number of personnel they had to allocate to provide on-base escorts, convoy security, and other force protection support to contractors. In addition, limited or no pre-deployment training for military commanders can result in confusion over their roles and responsibilities in managing and overseeing contractors. For example, we found some instances in which a lack of training raised concerns over the potential for military commanders to direct contractors to perform work outside the scope of the contract, something commanders lack the authority to do. This can cause the government to incur additional charges because modifications would need to be made to the contract. We also found that contract oversight personnel such as contracting officer's representatives received little or no pre-deployment training on their roles and responsibilities in monitoring contractor performance. Many of the contracting officer's representatives we spoke with in 2003 and 2006 said that training before they assumed these positions would have better prepared them to effectively oversee contractor performance. In most cases, deploying individuals were not informed that they would be performing contracting officer's representative duties until after they had deployed, which hindered their ability to effectively manage and oversee contractors. For example, officials from a corps support group in Iraq told us that until they were able to get a properly trained contracting officer's representative in place, they experienced numerous problems regarding the quality of food service provided by LOGCAP. In addition, the 2007 report of the Commission on Army Acquisition and Program Management in Expeditionary Operations discussed the need to train contracting officer's representatives and warned that the lack of training could lead to fraud, waste, and abuse. DOD has taken some steps to address this problem. In DOD's response to our 2006 report, the Director of Defense Procurement and Acquisition Policy stated that the Army is making changes to its logistics training programs that would incorporate contracting officer's representative training into its basic and advanced training for its ordnance, transportation, and quartermaster corps. In addition, the Defense Acquisition University has updated its contingency contracting course to include a lesson on contractors accompanying the force. Further, the Defense Contract Management Agency is adding personnel to assist in the training and management of contracting officer's representatives. DOD has moved over the years toward greater use of the private sector to perform maintenance and other logistics support for weapon systems.
Factors influencing this increased reliance on contractors include changes in DOD’s guidance and plans that emphasized the privatization of logistics functions, a lack of technical data and modernized facilities needed to perform maintenance on new systems, and reductions in maintenance workers at government-owned depots. The move toward greater reliance on contractors has raised questions regarding how much depot maintenance and other logistics work needs to be performed in-house and about the cost-effectiveness of outsourcing DOD logistics. DOD has increasingly relied on contractors for maintenance and other logistic support of weapon systems. For example, funding for private sector contractors to perform depot maintenance increased in then-year dollars from about $4.0 billion in fiscal year 1987 to about $13.8 billion in fiscal year 2007, or 246 percent. In contrast, during this same time period, the amount of funding for depot maintenance performed at government (public) depots increased from about $8.7 billion to about $16.1 billion, or 85 percent. This trend toward greater reliance on the private sector for depot maintenance was most evident during the period from fiscal years 1987 to 2000, when the amount of funding for public depot maintenance largely stayed flat and private sector funding increased by 89 percent. Since 2001, military operations in support of the Global War on Terrorism have resulted in large funding increases for maintenance performed by both public and private sector activities. One potential future limitation to continued contracting out of depot maintenance activities is the statutory limit on the amount of funding for depot maintenance work that can be performed by private sector contractors. Under 10 U.S.C. 2466(a), not more than 50 percent of funds made available in a fiscal year to a military department or defense agency for depot-level maintenance and repair may be used to contract for the performance by non-government personnel of such workload for the military departments and defense agencies. As the contractors’ share has increased over time, managing within this limitation has become more challenging—particularly for the Air Force and, to a lesser extent, the Army. Another potential limitation to contracting out is a requirement that DOD maintain a core logistics capability within government facilities. However, as I will discuss, our work has revealed problems in DOD’s implementation of this requirement. DOD also has experienced significant growth in the overall use of contractors for long-term logistics support of weapon systems. While the department does not collect and aggregate cost data specifically on these support arrangements, available data illustrate this growth. For example, Air Force data show an increase in funding for these support arrangements from $910 million in fiscal year 1996 to a projected $4.1 billion in fiscal year 2013. Many DOD acquisition program offices have been adopting long-term support strategies for sustaining new and modified systems that rely on contractors. Our ongoing review of core logistics capability indicates that performance-based logistics or some other type of partnership is a frequently used weapon system sustainment approach. The move toward increased use of contractors to perform maintenance and other logistics support for weapon systems has been influenced by multiple factors. 
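Before turning to those factors, the funding trend and the statutory limitation described above can be restated in simple arithmetic terms. The expressions below are our own illustration, using only the then-year dollar amounts already cited in this statement; they paraphrase, and are not, the statutory text, and the small difference from the 246 percent figure reflects rounding of the underlying amounts.

\[
\text{percentage increase} = \frac{\text{fiscal year 2007 funding} - \text{fiscal year 1987 funding}}{\text{fiscal year 1987 funding}} \times 100
\]
\[
\text{private sector: } \frac{13.8 - 4.0}{4.0} \times 100 \approx 245 \text{ percent} \qquad\qquad
\text{public depots: } \frac{16.1 - 8.7}{8.7} \times 100 \approx 85 \text{ percent}
\]

The limitation in 10 U.S.C. 2466(a) can likewise be written as a single constraint on each military department's or defense agency's annual depot-level maintenance and repair funds:

\[
\frac{\text{funds used for performance by non-government personnel}}{\text{total depot-level maintenance and repair funds made available}} \;\le\; 0.50 .
\]

As the contracted share approaches this ceiling, further growth in contract workload is possible only if total depot maintenance funding grows at least as fast, which helps explain why managing within the limitation has become more challenging as the contractors' share has increased.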
A significant factor has been the shift in DOD's guidance and plans toward greater emphasis on privatizing logistics functions. In 1996, for example, DOD issued a report, Plan for Increasing Depot Maintenance Privatization and Outsourcing, which provided a framework for substantially increasing reliance on the private sector for depot maintenance. In addition, both the 1995 report by the Commission on Roles and Missions and a 1996 report by a Defense Science Board task force recommended that DOD outsource almost all depot maintenance and other logistics activities. Both study teams assumed large cost savings would result from increased privatization. DOD guidance now provides that performance-based logistics is the department's preferred approach for providing long-term total system support for weapon systems. DOD describes performance-based logistics as the process of (1) identifying a level of performance required by the warfighter and (2) negotiating a performance-based arrangement to provide long-term total system support for a weapon system at a fixed level of annual funding. Another factor in the move toward greater reliance on contractors has been the lack of technical data and other elements of support, such as modernized facilities, required to establish a maintenance capability for new systems. Technical data for weapon systems include drawings, specifications, standards, and other details necessary to ensure the adequacy of item performance, as well as manuals that contain instructions for installation, operation, maintenance, and other actions needed to support weapon systems. As a result of not having acquired technical data rights from the equipment manufacturers, the military services in some instances have had difficulty establishing a maintenance capability at government depots. For example, the Air Force identified a need to develop a core capability to perform maintenance on the C-17 aircraft at government depots, but lacked the requisite technical data rights. Consequently, the Air Force has sought to form partnerships with C-17 subvendors to develop a depot maintenance capability, but these efforts have had mixed results. Based on our ongoing review of DOD core capability, we found that the Air Force continues to have challenges establishing core capability for C-17 commodities because of technical data issues. A third factor influencing DOD's increasing reliance on contractor support has been reductions in government depot maintenance personnel available to perform the work. Personnel downsizing has greatly reduced the number of depot maintenance workers and has limited the amount of work that could be performed in the depots. The number of depot-level maintenance personnel was reduced by 56 percent, from a high of 163,000 in 1987 to about 72,000 in 2002, after which the depots began to see some personnel increases to support the Global War on Terrorism. In comparison, in the 13 years between 1989 and 2002, DOD's total civilian workforce was reduced by 38 percent. While some downsizing was essential, given reductions in depot maintenance workloads over the same period, mandated reductions in the number of personnel were taken even though the depots may have had funded workload to support an increased number of personnel.
For example, in a review of Army depot personnel reductions in 1998, we found that efforts to implement the reductions at the Corpus Christi Army Depot were poorly managed and more direct labor employees were reduced than intended—adversely affecting the depot’s productivity. We found that while Army regulations on manpower management provide that staffing levels are to be based on the workloads performed, the Army’s reduced staffing plan was developed in response to affordability concerns and a desire to lower the depot’s rates and did not support the depot’s funded workload requirement. Because DOD has not clearly and comprehensively identified what depot maintenance and other logistics activities the department should be performing itself, it is unclear how much of the work that has been contracted out may be work that should be done in-house by government personnel. Additionally, DOD has not identified core logistics capability requirements for other logistics functions, such as supply chain management and engineering. With regard to depot maintenance, we previously reported that DOD lacks assurance that core logistics capabilities were being maintained as needed to ensure timely and effective response to national defense emergencies and contingencies, as required by 10 U.S.C. 2464, noting that several factors precluded this assurance. First, DOD’s existing policy, which establishes a process for identifying core maintenance capability, was not comprehensive in that it did not provide for a forward look at new weapon systems and associated future maintenance capability requirements. Second, the various procedures and practices being used by the services to implement the existing policy were also affecting the establishment of core capability. For example, the Air Force reduced its core requirement as a result of its consideration of maintenance work performed in the private sector, even though core work is supposed to be performed in military facilities and by government personnel. In addition, we have noted that DOD has had other limitations, including a lack of technical data rights and a lack of sufficient investment in facilities, equipment, and human capital to ensure the long-term viability of the military depots. To improve its process for identifying core maintenance capability requirements, in January 2007 DOD issued an instruction on how to identify required core capabilities for depot maintenance, which generally mirrored previous guidance. Also, in March 2007 DOD issued its depot maintenance strategy, which delineated the actions DOD is undertaking to identify and sustain core maintenance capability. We have an ongoing engagement to assess the effectiveness of the current policy and procedures as well as the services’ implementation. To address issues inhibiting the establishment of core capability, Congress has taken recent actions to address problems with technical data and depot facilities. We previously recommended that DOD improve its acquisition policies for assessing technical data needs to support weapon systems. The John Warner National Defense Authorization Act for Fiscal Year 2007 (2007 Defense Authorization Act) mandated that DOD require program managers for major weapon systems to assess long-term technical data needs for weapon systems and to establish corresponding acquisition strategies that provide for technical data rights needed to sustain such systems over their life cycle. DOD subsequently issued a new policy in July 2007 to implement this requirement. 
Potential benefits from this action are long term because of the time frames required for developing and acquiring weapon systems, and it is uncertain what actions may have been taken by program offices as a result of this policy change or the extent to which any actions taken could improve the availability of required data in the future. To address inadequacies in the military's investments in its maintenance depots, the 2007 Defense Authorization Act required military departments to invest each fiscal year in the capital budgets of certain depots a total amount equal to at least 6 percent of the average total combined workload funded at all of the depots over the preceding 3 fiscal years. As part of an ongoing engagement, we are reviewing the military departments' implementation of this mandate. We have also reported that DOD has not established policies or processes for determining core requirements for non-maintenance logistics capabilities for activities such as supply support, engineering, and transportation. Without identifying those core logistics activities that need to be retained in-house, the services may not be retaining critical capabilities as they proceed with contracting initiatives. For example, if DOD implements performance-based logistics (its preferred weapon system support arrangement) at the platform level, this can result in contracting out the program integration function, a core process that the private sector firms we interviewed during a 2004 review considered integral to their successful business operations. Another potential adverse effect of awarding a performance-based contract at the platform level is the loss of management control and expertise over the system that private sector companies told us were essential to retain in-house. In an earlier engagement, Army, Navy, and Air Force operational command officials told us that among their concerns with various types of long-term contractor logistics support arrangements were (1) retaining the ability to maintain and develop critical technical skills and knowledge, (2) limiting operational authority, and (3) reducing the program office's ability to perform essential management functions. Thus, without well-defined policy and procedures for identifying core requirements for critical logistics areas, the department may not be in a position to ensure that it will have the needed capabilities for the logistics system to support essential military weapons and equipment in an emergency. Although DOD justified its logistics outsourcing initiatives based on the assumption that there would be significant cost savings, it is uncertain to what extent cost savings have occurred or will occur. Overall funding for depot maintenance and other logistics support is increasing significantly, both for work performed in military depots and for work performed by contractors. However, sufficient data are not available to determine whether increased contracting has caused DOD's costs to be higher than they would have been had the contracted activities been performed by DOD civilians. As noted earlier, assumptions about savings were a key part of DOD's shift in policy toward the performance of defense logistics by the private sector. While the 1995 Commission on Roles and Missions projected savings of 20 percent from outsourcing, we questioned these projections, noting that the Commission's data did not support its depot privatization savings assumptions.
These savings assumptions were based on reported savings from public-private competitions for commercial activities under Office of Management and Budget Circular A-76. The commercial activities were generally dissimilar to depot maintenance activities because they involved relatively simple, routine, and repetitive tasks that did not generally require large capital investments or highly skilled and trained personnel. Public activities were allowed to compete for these workloads and won about half the competitions. Additionally, many private sector firms made offers for this work because of the highly competitive nature of the private sector market, and estimated savings were generally greater in situations with larger numbers of private sector offerors. In contrast, most depot maintenance work is awarded without competition to the original equipment manufacturer. We noted that in the absence of a highly competitive market, privatizing unique, highly diverse, and complex depot maintenance workloads that require large capital investments, extensive technical data, and highly skilled and trained personnel would not likely achieve expected savings and could increase the costs of depot maintenance operations. We also questioned the Defense Science Board's projections of $30 billion in annual savings from privatizing almost all logistics support activities. We have also reported that although DOD expected to achieve large savings from contracting out more of its depot-level maintenance work, depot maintenance contracting presented a challenge to relying on commercial market forces. Although DOD was attempting to rely on competitive market forces, about 91 percent of the depot maintenance contracts we reviewed were awarded noncompetitively. We also noted that difficulties in precisely defining requirements affected DOD's efforts to rely on competitive market forces. Further, we cautioned that DOD would need to increase the use of competitively awarded depot maintenance contracts and to address how best to assure product quality and reasonable prices when competitive market forces were not present. We have also raised questions about cost savings from DOD's increased use of performance-based logistics. Although DOD guidance recommends that program offices perform a business case analysis before adopting a performance-based logistics approach to support a weapon system, our reviews of the implementation of this approach show that these analyses often are not done and that DOD program offices could not demonstrate that they had achieved cost savings. Of the 15 programs we reviewed, 11 program offices had developed a business case analysis, prior to entering into a performance-based logistics arrangement, that projected achieving significant cost savings. Only one of these program offices had updated its business case analysis with actual cost data, as recommended by DOD guidance. The one program office that did update its business case analysis determined that the contract did not result in the expected cost savings and subsequently restructured the program. Program office officials acknowledged that limitations in their own information systems hindered their ability to obtain reliable data to closely monitor contractor costs. While existing systems are capable of collecting some cost information, they are not capturing sufficiently detailed cost information for monitoring the performance-based logistics contracts.
Our 2005 report on DOD’s implementation of performance-based logistics included a recommendation on the validation of business case decisions to demonstrate whether they are resulting in reduced costs and increased performance. Also, given the stated limitations in cost information, we recommended that program offices be required to improve their monitoring of performance-based logistics arrangements by verifying the reliability of contractor cost and performance data. Although DOD concurred with our recommendations, we are currently evaluating the corrective actions taken. In addition, DOD currently does not require detailed reporting of contractor logistics support costs, including for performance based arrangements. In closing, I believe that we must engage in a fundamental reexamination of when and under what circumstances we should use contractors versus civil servants or military personnel. This is a major and growing concern that needs immediate attention. In general, I believe there is a need to focus greater attention on what type of functions and activities should be contracted out and which ones should not. Inherently governmental functions are required to be performed by government personnel, not private contractors. Government officials, in making decisions about whether to use contractors for services closely supporting inherently governmental functions, should assess risk and consider the need for enhanced management and oversight controls. Once the decision to contract has been made, we must address challenges we have observed in ensuring proper oversight of these arrangements—especially considering the evolving and enlarging role of contractors in federal acquisitions. These concerns, identified in our work at several federal agencies including DOD, are more complex to address and may take on greater significance in contingency or military operations. As we have witnessed with contractors in Iraq, a specific decision made by a contractor can impact U.S. strategic and operational objectives in ways that were not considered in making the initial contracting decision. To address these concerns with regard to contractor support to deployed forces, we believe that in the immediate future, DOD’s leadership needs to ensure implementation of and compliance with relevant existing guidance. In the longer term, we believe a broader examination of the use and role of contractors to support deployed forces is in order. As I stated in April 2007, it may be appropriate to ask if DOD has become too reliant on contractors to provide essential services. What is needed is a comprehensive, forward-looking, and integrated review of contractor support to deployed forces that provides the proper balance between contractor support and the core capabilities of military forces over the next several years. In a November 2007 briefing on DOD transformation, I called on DOD to employ a total force management approach to planning and execution (e.g. military, civilian, and contractors). Many of the problems we have identified regarding the management and oversight of contractor support to deployed forces stem from DOD’s reluctance to plan for contractors as an integral part of the total force. One way DOD could begin to address this issue is by incorporating the use and role of contractors into its readiness reporting. DOD regularly reports on the readiness status, capabilities assessments, and other reviews of the status and capabilities of its forces. 
Given the reality that DOD is dependent on contractors for much of its support in deployed locations, the department should include in this reporting information on the specific missions contractors will be asked to perform, the operational impacts associated with the use of contractors, and the personnel necessary to effectively oversee and manage those contractors. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact William M. Solis at (202) 512-8365 or (solisw@gao.gov) or John Hutton at (202) 512-4841 or (huttonj@gao.gov). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this product. Staff making key contributions to this statement were Julia Denman, Tom Gosling, and Amelia Shachoy, Assistant Directors; and Carleen Bennett, Laura Holliday, Randy Neice, Janine Prybyla, James Reynolds, Bill Russell, Karen Sloan, and Karen Thornton. 1. Service budgets are allocated largely according to top-line historical percentages rather than Defense-wide strategic assessments and current and likely resource limitations. 2. Capabilities and requirements are based primarily on individual service wants versus collective Defense needs (i.e., based on current and expected future threats) that are both affordable and sustainable over time. 3. Defense consistently overpromises and underdelivers in connection with major weapons, information, and other systems (i.e., capabilities, costs, quantities, and schedule). 4. Defense often employs a "plug and pray approach" when costs escalate (i.e., divide total funding dollars by cost per copy, plug in the number that can be purchased, then pray that Congress will provide more funding to buy more quantities). 5. Congress sometimes forces the department to buy items (e.g., weapon systems) and provide services (e.g., additional health care for non-active beneficiaries, such as active duty members' dependents and military retirees and their dependents) that the department does not want and we cannot afford. 6. DOD tries to develop high-risk technologies after programs start instead of setting up funding, organizations, and processes to conduct high-risk technology development activities in low-cost environments (i.e., technology development is not separated from product development). Program decisions to move into design and production are made without adequate standards or knowledge. 7. Program requirements are often set at unrealistic levels, then changed frequently as recognition sets in that they cannot be achieved. As a result, too much time passes, threats may change, or members of the user and acquisition communities may simply change their minds. The resulting program instability causes cost escalation, schedule delays, smaller quantities, and reduced contractor accountability. 8. Contracts, especially service contracts, often do not have definitive or realistic requirements at the outset in order to control costs and facilitate accountability. 9. Contracts typically do not accurately reflect the complexity of projects or appropriately allocate risk between the contractors and the taxpayers (e.g., cost plus, cancellation charges). 10. Key program staff rotate too frequently, thus promoting myopia and reducing accountability (i.e., tours based on time versus key milestones).
Additionally, the revolving door between industry and the department presents potential conflicts of interest. 11. The acquisition workforce faces serious challenges (e.g., size, skills, knowledge, and succession planning). 12. Incentive and award fees are often paid based on contractor attitudes and efforts versus positive results (i.e., cost, quality, and schedule). 13. Inadequate oversight is being conducted by both the department and Congress, which results in little to no accountability for recurring and systemic problems. 14. Some individual program and funding decisions made within the department and by Congress serve to undercut sound policies. 15. Lack of a professional, term-based Chief Management Officer at the department serves to slow progress on defense transformation and reduce the chance of success in the acquisitions/contracting and other key business areas.
Defense Contracting: Additional Personal Conflict of Interest Safeguards Needed for Certain DOD Contractor Employees. GAO-08-169. Washington, D.C.: March 7, 2008.
Intelligence Reform: GAO Can Assist the Congress and the Intelligence Community on Management Reform Initiatives. GAO-08-413T. Washington, D.C.: February 29, 2008.
Federal Acquisition: Oversight Plan Needed to Help Implement Acquisition Advisory Panel Recommendations. GAO-08-160. Washington, D.C.: December 20, 2007.
Department of Homeland Security: Improved Assessment and Oversight Needed to Manage Risk of Contracting for Selected Services. GAO-07-990. Washington, D.C.: September 17, 2007.
Federal Acquisitions and Contracting: Systemic Challenges Need Attention. GAO-07-1098T. Washington, D.C.: July 17, 2007.
Defense Acquisitions: Role of Lead Systems Integrator on Future Combat Systems Program Poses Oversight Challenges. GAO-07-380. Washington, D.C.: June 6, 2007.
Highlights of a GAO Forum: Federal Acquisition Challenges and Opportunities in the 21st Century. GAO-07-45SP. Washington, D.C.: October 6, 2006.
Contract Management: DOD Vulnerabilities to Contracting Fraud, Waste, and Abuse. GAO-06-838R. Washington, D.C.: July 7, 2006.
Agency Management of Contractors Responding to Hurricane Katrina and Rita. GAO-06-461R. Washington, D.C.: March 16, 2006.
Defense Ethics Program: Opportunities Exist to Strengthen Safeguards for Procurement Integrity. GAO-05-341. Washington, D.C.: April 29, 2005.
Interagency Contracting: Problems with DOD's and Interior's Orders to Support Military Operations. GAO-05-201. Washington, D.C.: April 25, 2005.
Rebuilding Iraq: Fiscal Year 2003 Contract Award Procedures and Management Challenges. GAO-04-605. Washington, D.C.: June 1, 2004.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
Human Capital: A Self-Assessment Checklist for Agency Leaders. GAO/GGD-99-179. Washington, D.C.: September 1, 1999.
Government Contractors: Are Service Contractors Performing Inherently Governmental Functions? GAO/GGD-92-11. Washington, D.C.: November 18, 1991.
Energy Management: Using DOE Employees Can Reduce Costs for Some Support Services. GAO/RCED-91-186. Washington, D.C.: August 16, 1991.
Civil Servants and Contractor Employees: Who Should Do What for the Federal Government? FPCD-81-43. Washington, D.C.: June 19, 1981.
Military Operations: Implementation of Existing Guidance and Other Actions Needed to Improve DOD's Oversight and Management of Contractors in Future Operations. GAO-08-436T. Washington, D.C.: January 24, 2008.
Defense Logistics: The Army Needs to Implement an Effective Management and Oversight Plan for the Equipment Maintenance Contract in Kuwait. GAO-08-316R. Washington, D.C.: January 23, 2008.
Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD's Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007.
Military Operations: High-Level DOD Action Needed to Address Long-standing Problems with Management and Oversight of Contractors Supporting Deployed Forces. GAO-07-145. Washington, D.C.: December 18, 2006.
Rebuilding Iraq: Continued Progress Requires Overcoming Contract Management Challenges. GAO-06-1130T. Washington, D.C.: September 28, 2006.
Military Operations: Background Screenings of Contractor Employees Supporting Deployed Forces May Lack Critical Information, but U.S. Forces Take Steps to Mitigate the Risks Contractors May Pose. GAO-06-999R. Washington, D.C.: September 22, 2006.
Rebuilding Iraq: Actions Still Needed to Improve the Use of Private Security Providers. GAO-06-865T. Washington, D.C.: June 13, 2006.
Rebuilding Iraq: Actions Needed to Improve Use of Private Security Providers. GAO-05-737. Washington, D.C.: July 28, 2005.
Interagency Contracting: Problems with DOD's and Interior's Orders to Support Military Operations. GAO-05-201. Washington, D.C.: April 29, 2005.
Defense Logistics: High-Level DOD Coordination Is Needed to Further Improve the Management of the Army's LOGCAP Contract. GAO-05-328. Washington, D.C.: March 21, 2005.
Contract Management: Opportunities to Improve Surveillance on Department of Defense Service Contracts. GAO-05-274. Washington, D.C.: March 17, 2005.
Military Operations: DOD's Extensive Use of Logistics Support Contracts Requires Strengthened Oversight. GAO-04-854. Washington, D.C.: July 19, 2004.
Military Operations: Contractors Provide Vital Services to Deployed Forces but Are Not Adequately Addressed in DOD Plans. GAO-03-695. Washington, D.C.: June 24, 2003.
Contingency Operations: Army Should Do More to Control Contract Cost in the Balkans. GAO/NSIAD-00-225. Washington, D.C.: September 29, 2000.
Contingency Operations: Opportunities to Improve the Logistics Civil Augmentation Program. GAO/NSIAD-97-63. Washington, D.C.: February 11, 1997.
Defense Management: DOD Needs to Demonstrate That Performance-Based Logistics Contracts Are Achieving Expected Benefits. GAO-05-966. Washington, D.C.: September 9, 2005.
Defense Management: Opportunities to Enhance the Implementation of Performance-Based Logistics. GAO-04-715. Washington, D.C.: August 16, 2004.
Depot Maintenance: Key Unresolved Issues Affect the Army Depot System's Viability. GAO-03-682. Washington, D.C.: July 7, 2003.
Depot Maintenance: Public-Private Partnerships Have Increased, but Long-Term Growth and Results Are Uncertain. GAO-03-423. Washington, D.C.: April 10, 2003.
Defense Logistics: Opportunities to Improve the Army's and the Navy's Decision-making Process for Weapons Systems Support. GAO-02-306. Washington, D.C.: February 28, 2002.
Defense Logistics: Actions Needed to Overcome Capability Gaps in the Public Depot System. GAO-02-105. Washington, D.C.: October 12, 2001.
Defense Logistics: Air Force Lacks Data to Assess Contractor Logistics Support Approaches. GAO-01-618. Washington, D.C.: September 7, 2001.
Army Industrial Facilities: Workforce Requirements and Related Issues Affecting Depots and Arsenals. GAO/NSIAD-99-31. Washington, D.C.: November 30, 1998.
Defense Depot Maintenance: DOD Shifting More Workload for New Weapon Systems to the Private Sector. GAO/NSIAD-98-8. Washington, D.C.: March 31, 1998.
Defense Depot Maintenance: Commission on Roles and Missions' Privatization Assumptions Are Questionable. GAO/NSIAD-96-161. Washington, D.C.: July 15, 1996.
Defense Depot Maintenance: DOD's Policy Report Leaves Future Role of Depot System Uncertain. GAO/NSIAD-96-165. Washington, D.C.: May 21, 1996.
Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix. GAO/T-NSIAD-96-146. Washington, D.C.: April 16, 1996.
Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors. GAO/T-NSIAD-94-161. Washington, D.C.: April 12, 1994.
Depot Maintenance: Issues in Management and Restructuring to Support a Downsized Military. GAO/T-NSIAD-93-13. Washington, D.C.: May 6, 1993.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government, including the Department of Defense (DOD), is increasingly relying on contractors to carry out its missions. Governmentwide spending on contractor services has more than doubled in the last 10 years. DOD has used contractors extensively to support troops deployed abroad. The department recently estimated the number of contractors in Iraq and Afghanistan to be about 196,000. DOD also relies heavily on contractors for various aspects of weapon system logistics support. While contractors, when properly used, can play an important role in helping agencies accomplish their missions, GAO has identified long-standing problems regarding the appropriate role and management of contractors, particularly at DOD. This testimony highlights the challenges federal agencies face related to the increased reliance on contractors and the specific challenges DOD has had in managing its increased reliance on contractors who support deployed troops and who provide logistics support for weapon systems. This testimony also highlights some of the recommendations GAO has made over the past several years to improve DOD's management and oversight of contractors, as well as DOD's actions in response to those recommendations. While there are benefits to using contractors to perform services for the government, such as increased flexibility in fulfilling immediate needs, GAO and others have raised concerns about the increasing reliance on contractors to perform agency missions. GAO's body of work shows that agencies face challenges with increased reliance on contractors to perform core agency missions, and these challenges are accentuated in contingency operations such as Iraq, in emergency situations such as Hurricane Katrina, or in cases where sufficient government personnel are not available. In making the decision to use contractors, agencies have experienced challenges such as: determining which functions and activities should be contracted out and which should not, in order to ensure institutional capacity; developing a total workforce strategy to address the extent of contractor use and the appropriate mix of contractor and government personnel; identifying and distinguishing the roles and responsibilities of contractors and civilian and military personnel; and ensuring appropriate oversight, including addressing risks, ethics concerns, and surveillance needs. DOD's increased reliance on contractors to support forces deployed for military operations and to perform maintenance and other logistic support for weapon systems has highlighted challenges that DOD faces in managing this component of its total force. With regard to contractor support for deployed forces, DOD's primary challenges have been in providing effective management and oversight, including failure to follow planning guidance, an inadequate number of contract oversight personnel, failure to systematically capture and distribute lessons learned, and a lack of comprehensive training for military commanders and contract oversight personnel. These challenges have led to negative operational and monetary impacts at deployed locations. For example, several military commanders GAO met with in 2006 said that their pre-deployment training did not provide them with sufficient information on the extent of contractor support they would be relying on in Iraq and that they were therefore surprised by the substantial number of personnel they had to allocate to provide on-base escorts, convoy security, and other force protection support to contractors.
Although DOD has taken some steps to address these issues, many remain a concern and additional actions are needed. With respect to weapon system support, the challenges have been to resolve questions about how much depot maintenance and other logistics work needs to be performed in-house and to what extent outsourcing for DOD logistics has been cost-effective. While DOD has a process for defining core maintenance capability, GAO has identified shortcomings in this process and found that core maintenance capability has not always been developed. Finally, although DOD justified its increased reliance on contractors for maintenance and other logistics activities on the assumption that significant cost savings would result, it is uncertain to what extent cost savings have occurred or will occur.
Medicare and Medicaid have consistently been targets for fraudulent conduct because of their size and complexity. Private health care insurance carriers are also vulnerable to fraud due to the immense volume of claims they receive and process. Those who commit fraud against public health insurers are also likely to engage in similar conduct against private insurers. The Coalition Against Insurance Fraud estimates that in 1997 fraud in the health care industry totaled about $54 billion nationwide, with $20 billion attributable to private insurers and $34 billion to Medicare and Medicaid. In addition to losses due to fraud, the Department of Health and Human Services' OIG has reported that billing errors, or mistakes, made by health care providers were significant contributors to improperly paid health care insurance claims. The OIG defined billing errors as (1) providing insufficient or no documentation, (2) reporting incorrect codes for medical services and procedures performed, and (3) billing for services that are not medically necessary or that are not covered. For fiscal year 2000, the OIG reported that an estimated $11.9 billion in improper payments were made for Medicare claims. In a March 1997 letter to health care providers, the Department of Health and Human Services' IG suggested that providers work cooperatively with the OIG to show that compliance can become a part of the provider culture. The letter emphasized that such cooperation would ensure the success of initiatives to identify and penalize dishonest providers. One cooperative effort between the IG and health care groups resulted in the publication of model compliance programs for health care providers. The OIG encourages providers to adopt compliance principles in their practice and has published specific guidance for individual and small group physician practices as well as other types of providers to help them design voluntary compliance programs. A voluntary compliance program can help providers recognize when their practice has submitted erroneous claims and ensure that the claims they submit are true and accurate. In addition, the OIG has incorporated its voluntary self-disclosure protocol into the compliance program, under which sanctions may be mitigated if provider-detected violations are reported voluntarily. Evaluation and management services refer to work that does not involve a medical procedure, that is, the thinking part of medicine. The key elements involved in evaluation and management services are (1) obtaining the patient's medical history, (2) performing a physical examination, and (3) making medical decisions. Medical decisions include determining which diagnostic tests are needed, interpreting the results of the diagnostic tests, making the diagnosis, and choosing a course of treatment after discussing the risks and benefits of various treatment options with the patient. These decisions might involve work of low, medium, or high complexity. Each of the key elements of evaluation and management services contains components that indicate the amount of work done.
For example, a comprehensive medical history would involve (1) determining a patient’s chief complaint, (2) tracing the complete history of the patient’s present illness, (3) questioning other observable characteristics of the patient’s present condition and overall state of health (review of systems), (4) obtaining a complete medical history for the patient, (5) developing complete information on the patient’s social history, and (6) recording a complete family history. A more focused medical history would involve obtaining only specific information relating directly to the patient’s symptoms at the time of the visit. Providers and their staffs use identifying codes defined in an American Medical Association publication, titled Current Procedural Terminology (CPT), to bill for outpatient evaluation and management services performed during office visits. The CPT is a list of descriptive terms and identifying codes for reporting all standard medical services and procedures performed by physicians. Updated annually, it is the most widely accepted nomenclature for reporting physician procedures and services under both government and private health insurance programs. The CPT codes reported to insurers are used in claims processing, and they form the basis for compensating providers commensurate with the level of work involved in treating a patient. Accordingly, the higher codes, which correspond to higher payments, are used when a patient’s problems are numerous or complex or pose greater risk to the patient, or when there are more diagnostic decisions to be made or more treatment options to be evaluated. The CPT has two series of evaluation and management codes for outpatient office visits, one series for new patient visits and another for established patient visits. Each series of CPT codes has five levels that correspond to the difficulty and complexity of the work required to address a patient’s needs. The code selected by the provider to describe the services performed in turn determines the amount the provider will be paid for the visit. For example, under the current Medicare fee schedule for the District of Columbia and surrounding suburbs, a provider would be paid $39.30 for a new patient who is determined to have received services commensurate with a level 1 visit and $182.52 for a level 5 visit. Similarly, payments for level 1 and level 5 visits by an established patient are $22.34 and $128.03, respectively. The two workshops we attended provided certain advice that is inconsistent with the OIG guidance and that, if followed, could result in violations of criminal and civil statutes. Specifically, at one workshop the consultant suggested that when providers identify an overpayment from an insurance carrier, they should not report or refund the overpayment. Furthermore, consultants at both workshops suggested that providers attempt to receive a higher-than-earned level of compensation by making it appear, through documentation, that a patient presented more complex problems than he or she actually did. Additionally, one consultant suggested that providers limit the services offered to patients with low-paying insurance plans, such as Medicaid, and that they discourage such patients from using the provider’s services by offering appointments to them only in inconvenient, hard-to-fill time slots. One workshop focused on the merits of implementing voluntary compliance programs.
The consultant who presented this particular discussion explained that a baseline self-audit to determine the level of compliance with applicable laws, rules, and regulations is a required step in creating a voluntary compliance program. Focusing on “how to audit-proof your practice” and avoiding sending out “red flags,” the consultant advised providers not to report or refund overpayments they identify as a result of the self-audit. The consultant claimed that reporting or refunding the overpayment would raise a red flag that could result in an audit or investigation. When asked the proper course of action to take when an overpayment is identified, the consultant responded that providers are required to report and refund overpayments. He said, however, that instead of refunding overpayments, physician practices generally fix problems in their billing systems that cause overpayments while “keeping their mouths shut” and “getting on with life.” Such conduct, however, could result in violations of criminal statutes. According to the most recent OIG Medicare audit report, the practice of billing for services that are not medically necessary or that lack sufficient diagnostic justification is a serious problem in the health insurance system. The OIG estimated that during fiscal year 2000, $5.1 billion was billed to insurance plans for unnecessary services. Intentionally billing for services that are not medically necessary may result in violations of law. Moreover, based on advice given at workshops that we attended during this investigation, we are concerned that insurers may be paying for tests and procedures that are not medically necessary because physicians may be intentionally using such services to justify billing for evaluation and management services at higher code levels than actual circumstances warrant. Specifically, two consultants advised that documentation of evaluation and management services performed can be used to create, for purposes of an audit, the appearance that medical issues confronted at the time of a patient’s office visit were of a higher level of difficulty than they actually were. For example, a consultant at one workshop urged practitioners to enhance revenues by finding creative ways to justify bills for patient evaluation and management services at high code levels. He advised that one means of justifying bills at high code levels is to have nonphysician health professionals perform numerous procedures and tests. To illustrate his point, the consultant discussed the hypothetical case of a cardiologist who examines a patient in an emergency room where tests are performed and the patient is discharged after the cardiologist determines that the patient has a minor problem or no problem at all. To generate additional revenue, the consultant suggested that the cardiologist tell the patient to come to his office for a complete work-up, even when the cardiologist knows that the patient does not have a problem. He advised that the work-up be performed during two separate office visits and that the cardiologist not be involved in the first visit. Instead, a nurse is to perform tests, draw blood, and take a medical history. During the second visit, the cardiologist is to consult with the patient to discuss the results of the tests and issues such as lifestyle. The consultant indicated that the cardiologist could bill for a level 4 visit, indicating that a relatively complex medical problem was encountered at the time of the visit.
The consultant made clear that the cardiologist did not actually confront a complex problem during the visit because the cardiologist already knew, based on the emergency room tests and examination, that the patient did not have such a problem. Another consultant focused on how to develop the highest code level for health care services and create documentation to avoid having an insurer change the code to a lower one. The consultant engaged in “exercises” with participants designed to suggest that coding results are “arbitrary” determinations. His emphasis was not that the code selection be correct or even that the services be performed, but rather that it is important to create a documentary basis for the codes billed in the event of an audit. He explained that in the event of an audit, the documentation created is the support for billing for services at higher code levels than warranted. During the exercises, program participants—all were physicians except for our criminal investigator—were provided a case study of an encounter with a generally healthy 14-year-old patient with a sore throat. Participants were asked to develop the evaluation and management service code for the visit that diagnosed and treated the patient’s laryngitis. The consultant suggested billing the visit as a level 4 encounter, supporting the code selection by documenting every aspect of the medical history and physical examination, and mechanically counting up the work documented to make the services performed appear more complicated than they actually were. All of the participants indicated that they would have coded the visit at a lower level than that suggested by the consultant, who stated that “documentation has its rewards.” The consultant explained that in the event of an audit, the documentation created would be the basis for making it appear that a bill at a high code level was appropriate. One workshop consultant encouraged practices to differentiate between patients based on the level of benefits paid by their insurance plans. He identified the Medicaid program in particular as being the lowest and slowest payer, and urged the audience to stop accepting new Medicaid patients altogether. The consultant also suggested that the audience members limit the services they provide to established Medicaid patients and offer appointments to them only in hard-to-fill time slots. Workshop participants were advised to offer better-insured patients follow-up services that are intended to affiliate a patient permanently with the practice. However, the consultant suggested that physicians may decide not to offer such services to Medicaid patients. He sent a clear message to his audience that a patient’s level of care should be commensurate with the level of insurance benefits available to the patient. This advice raises two questions: First, are medically necessary services not being made available to Medicaid patients? Second, are better-paying insurance plans being billed for services that are not medically necessary but performed for the purpose of affiliating patients from such plans to a medical practice? Program participants were further urged to see at least one new patient with a better-paying insurance plan each day. The consultant pointed out that, by seeing one new patient per day, a provider can increase revenue by $6,000 per year because the fee for a new patient visit is about $30 more than the fee for an established patient visit. 
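The consultant’s revenue figure is simple arithmetic. As a rough check, the minimal sketch below works through the numbers; the assumption of roughly 200 clinic days per year is ours for illustration and was not stated by the consultant or in our analysis.

```python
# Rough check of the consultant's revenue claim: one extra new-patient visit per
# day at a fee roughly $30 higher than an established-patient visit.
# The ~200 clinic days per year is our illustrative assumption, not a figure
# the consultant or the report provided.
extra_fee_per_new_patient = 30        # dollars more per new-patient visit
clinic_days_per_year = 200            # assumed appointment days per year

added_revenue = extra_fee_per_new_patient * clinic_days_per_year
print(f"Estimated added revenue per year: ${added_revenue:,}")   # $6,000
```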
He said that over time such measures would result in reducing the percentage of Medicaid patients seen regularly in the practice and increase the number of established patients with better-paying insurance. The consultant also recommended that providers limit the number of scheduled appointment slots available to Medicaid patients on any given day and that Medicaid patients be offered appointments only in hard-to-fill time slots rather than in the “best,” or convenient, time slots. He suggested that insurance information and new patient status be used to allocate the best time slots to the best payers. He identified this approach as “rationing,” which he described as “not real discrimination,” but “somewhat discrimination.” While neither the Social Security Act nor Medicaid regulations require physicians to accept Medicaid patients, title VI of the Civil Rights Act of 1964 prohibits discrimination based upon race, color, or national origin in programs that receive federal financial assistance. The Department of Health and Human Services, which administers the Medicare and Medicaid programs, takes the position that the nondiscrimination requirement of title VI applies to doctors in private offices who treat and bill for Medicaid patients. While the conduct promoted by the consultant is not overt discrimination on the basis of race, color, or national origin, under certain circumstances, such conduct might disproportionately harm members of protected groups and raise questions about title VI compliance. Moreover, even if the conduct promoted is not unlawful, it raises serious concerns about whether it would result in depriving Medicaid patients of medically necessary services, and whether better-paying insurance plans are billed for services that are not medically necessary but performed for the purpose of affiliating patients to a particular medical practice. Advice offered to providers at workshops and seminars has the potential for easing program integrity problems in the Medicare and Medicaid programs by providing guidance on billing codes for evaluation and management services. However, if followed, the advice provided at two workshops we attended would exacerbate integrity problems and result in unlawful conduct. Moreover, the advice raises concerns that some payments classified by the OIG as improperly paid health care insurance claims may stem from conscious decisions to submit inflated claims in an attempt to increase revenue. We have discussed with the Department of Health and Human Services’ OIG the need to monitor workshops and seminars similar to the ones we attended. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will make copies of the report available to interested congressional committees and the Secretary of the Department of Health and Human Services. This report will also be available at www.gao.gov. If you have any questions about this investigation, please call me at (202) 512-7455 or Assistant Director William Hamel at (202) 512-6722. Senior Analyst Shelia James, Assistant General Counsel Robert Cramer, and Senior Attorney Margaret Armen made key contributions to this report.
This report presents the results of GAO’s investigation of health care consultants who conduct seminars or workshops offering advice to health care providers on ways to enhance revenue and avoid audits or investigations. GAO attended several seminars and workshops offered by these consultants and sought to determine whether the consultants were providing advice that could result in improper or excessive claims to Medicare, Medicaid, other federally funded health plans, and private health insurance carriers. GAO found that some of the advice was inconsistent with guidance issued by the Department of Health and Human Services' Office of Inspector General (OIG) and, if followed, could result in violations of both civil and criminal statutes.
In September 1993, the National Performance Review recommended an overhaul of DOD’s temporary duty (TDY) travel system. In response, DOD created the DOD Task Force to Reengineer Travel to examine the travel process. The task force found that the existing process was expensive to administer and was neither customer nor mission oriented; the net result was a travel process that was costly, inefficient, and fragmented and that did not support DOD’s needs. On December 13, 1995, the Under Secretary of Defense for Acquisition and Technology and the Under Secretary of Defense (Comptroller)/Chief Financial Officer issued a memorandum, “Reengineering Travel Initiative,” establishing the PMO-DTS, the program management office for the Defense Travel System (DTS), to acquire travel services that would be used DOD-wide. Additionally, in a 1997 report to Congress, the DOD Comptroller pointed out that the existing DOD TDY travel system was never designed to be an integrated system. The report stated that because there was no centralized focus on the department’s travel practices, the travel policies were issued by different offices and the process had become fragmented and “stovepiped.” The report further noted that there was no vehicle in the existing structure to overcome these deficiencies, as no one individual within the department had specific responsibility for management control of DOD TDY travel. DOD management and oversight of the DTS program have varied over the years. DTS was designated a “Special Interest” program in 1995. It retained this status until May 2002, when it was designated a major automated information system, with the Defense Finance and Accounting Service (DFAS) designated as the lead component for the program. This meant that DFAS was responsible for the management oversight of DTS program acquisition, including DTS compliance with the required DOD acquisition guidance. In September 2003, DOD finalized its economic analysis for DTS in preparation for a milestone decision review. The highlights of the economic analysis are shown in table 1. In December 2003, the DOD Chief Information Officer granted approval for DTS to proceed with full implementation throughout the department. In October 2005, DOD established the Business Transformation Agency (BTA) to advance DOD-wide business transformation efforts, particularly with regard to business systems modernization. DOD believes it can better manage defensewide business transformation, which includes the planning, management, organizational structures, and processes related to all key business areas, by first transforming business operations to support the warfighter while also enabling financial accountability across DOD. BTA operates under the authority, direction, and control of the Under Secretary of Defense for Acquisition, Technology, and Logistics, who is the vice chair of the Defense Business Systems Management Committee, which serves as the highest-ranking governing body for business systems modernization activities. Among other things, BTA includes a Defense Business Systems Acquisition Executive who is responsible for centrally managing 28 DOD-wide business projects, programs, systems, and initiatives—one of which is DTS. In October 2004, responsibility for the policies and procedures related to the management of commercial travel throughout DOD was transferred to the Office of the Under Secretary of Defense (Personnel and Readiness).
Our analysis of the September 2003 DTS economic analysis found that two key assumptions used to estimate cost savings were not based on reliable information. Consequently, the economic analysis did not serve to help ensure that the funds invested in DTS were used in an efficient and effective manner. Two primary areas represented the majority of the over $56 million in estimated annual net savings DTS was expected to realize—personnel savings and reduced commercial travel office (CTO) fees. However, the estimates used to generate these savings were unreliable. Further, DOD did not effectively implement the policies relating to developing economic analyses for programs such as DTS. Effective implementation of these policies should have highlighted the problems that we found and allowed for appropriate adjustments so that the economic analysis could have served as a useful management tool in making funding decisions related to DTS—which is the primary purpose of such an analysis. While the department’s system acquisition criteria do not require that a new economic analysis be prepared, the department’s business system investment management structure provides an opportunity for DOD management to assess whether DTS is meeting its planned cost, schedule, and functionality goals. The economic analysis estimated that the annual personnel savings would be over $54 million, as shown in table 2. As shown in table 2, approximately 45 percent of the estimated savings, or $24.2 million, was attributable to the Air Force and Navy. The assumption behind the personnel savings computation was that there would be less manual intervention in the processing of travel vouchers for payment, and therefore fewer staff would be needed. However, based on our discussions with Air Force and Navy DTS program officials, it is questionable whether the estimated savings will be achieved. Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel with the full implementation of DTS, but rather the shifting of staff to other functions. According to DOD officials responsible for reviewing economic analyses, while shifting personnel to other functions is considered a benefit, it should be considered an intangible benefit rather than tangible dollar savings since the shifting of personnel does not result in a reduction of DOD expenditures. Also, as part of the Navy’s overall evaluation of the economic analysis, program officials stated that “the Navy has not identified, and conceivably will not recommend, any personnel billets for reduction.” Finally, the Naval Cost Analysis Division (NCAD) October 2003 report on the economic analysis noted that it could not validate approximately 40 percent of the Navy’s total costs, including personnel costs, in the DTS life-cycle cost estimates because credible supporting documentation was lacking. The report also noted that the PMO-DTS used unsound methodologies in preparing the DTS economic analysis. The extent of personnel savings for the Army and defense agencies, which are reported as $16 million and $6.3 million, respectively, is also unclear. The Army and many defense agencies use DFAS to process their travel vouchers, so the personnel savings for the Army and the defense agencies were primarily related to reductions in DFAS’s costs.
In discussions with us, DFAS officials were unable to estimate the actual personnel savings that would result because they did not know (1) the number of personnel, like those at the Air Force and Navy, who would simply be transferred to other DFAS functions or (2) the number of personnel who could be used to avoid additional hiring. For example, DFAS expects that some of the individuals assigned to support the travel function could be moved to support its ePayroll program. Since these positions would need to be filled regardless of whether the travel function is reduced, transferring personnel from travel to ePayroll would reduce DOD’s overall costs because DFAS would not have to hire additional individuals. According to the September 2003 economic analysis, DOD expected to realize annual net savings of $31 million through reduced fees paid to the CTOs because the successful implementation of DTS would enable the majority of airline tickets to be acquired with either no or minimal intervention by the CTOs. These are commonly referred to as “no touch” transactions. However, DOD did not have a sufficient basis to estimate the number of transactions that would be considered “no touch” since (1) the estimated percentage of transactions that could be processed as “no touch” was not supported and (2) the analysis did not properly consider the effects of components that use management fees, rather than transaction fees, to compensate the CTOs for services provided. The weaknesses we identified with the estimating process raise serious questions as to whether DOD will realize substantial portions of the estimated annual net savings of $31 million. DOD arrived at the $31 million of annual savings in CTO fees by estimating that 70 percent of all DTS airline tickets would be considered “no touch” and then multiplying these tickets by the savings per ticket in CTO fees. However, a fundamental flaw in this analysis was that the 70 percent assumption had no solid basis. We requested, but the PMO-DTS could not provide, any analysis of travel data to support the assertion. Rather, the sole support provided by the PMO-DTS was an article in a travel industry trade publication. The article was not based on information related to DTS, but rather on the experience of one private sector company. The economic analysis assumed that DOD could save about $13.50 per “no touch” ticket. Since that analysis, DOD has awarded one contract that specifically prices transactions using the same model as that envisioned by the economic analysis. This contract applies to the Defense Travel Region 6 travel area. During calendar year 2005, the difference in fees for “no touch” transactions and the transactions supported by the current process averaged between $10 and $12, depending on when the fees were incurred, because the contract rates changed during 2005. In analyzing travel voucher data for Region 6 for calendar year 2005, we found that the reported “no touch” rate was, at best, 47 percent—far less than the 70 percent envisioned in the economic analysis. PMO-DTS program officials stated they are uncertain as to why the anticipated 70 percent “no touch” rate was not being achieved. According to PMO-DTS program officials, this could be attributed, in part, to DOD travelers being uncomfortable with the system and making reservations without using a CTO. Although this may be one reason, other factors may also affect the extent to which the expected “no touch” fee savings are achieved.
For example, we were informed that determining the airline availability and making the associated reservation can be accomplished, in most cases, rather easily. However, obtaining information related to hotels and rental cars and making the associated reservation can be more problematic because of the limitations in the data that DTS is able to obtain from its commercial sources. Accordingly, while a traveler may be able to make a “no touch” reservation for the airline portion of the trip, the individual may need to contact the CTO in order to make hotel or rental car reservations. When this occurs, rather than paying a “no touch” fee to the CTO, DOD ends up paying a higher fee, which eliminates the savings estimated in the economic analysis. The economic analysis assumed that (1) DOD would be able to modify the existing CTO contracts to achieve a substantial reduction in fees paid to a CTO when DTS was fully implemented across the department and (2) all services would use the fee structure called for in the new CTO contracts. The first part of the assumption is supported by results of the CTO contract for DOD Region 6 travel. The fees for the DTS “no touch” transactions were at least $10 less than if a CTO was involved in the transactions. However, to date, the department has experienced difficulty in awarding new contracts with the lower fee structure. On May 10, 2006, the department announced the cancellation of the solicitation for a new contract. According to the department, it decided that the solicitation needed to be rewritten based on feedback from travel industry representatives at a March 28, 2006, conference. The department acknowledged that the “DTS office realized its solicitation didn’t reflect what travel agency services it actually needed.” The department would not say how the solicitation would be refined, citing the sensitivity of the procurement process. The department also noted that the new solicitation would be released soon, but provided no specific date. The economic analysis assumed that the Navy would save about $7.5 million, almost 25 percent, of the total savings related to CTO fees once DTS is fully deployed. The economic analysis averaged the CTO fees paid by the Army, the Air Force, and the Marine Corps—which amounted to about $18.71 per transaction—to compute the savings in Navy CTO fees. Using these data, the assumption was made in the economic analysis that a fee of $5.25 would be assessed for each ticket, resulting in an average savings of $13.46 per ticket for the Navy ($18.71 minus $5.25). While this approach may be valid for the organizations that pay individual CTO fees, it may not be representative for organizations such as the Navy that pay a management fee. The management fee charged the Navy is the same regardless of the involvement of the CTO—therefore, the reduced “no touch” fee would not apply. We were informed by Navy DTS program officials that they were considering continuing the use of management fees after DTS is fully implemented. According to Navy DTS program officials, they paid about $14.5 million during fiscal year 2005 for CTO management fees, almost $19 per ticket for approximately 762,700 tickets issued. Accordingly, even if the department arrives at a new CTO contract containing the new fee structure or fees similar to those of Region 6, the estimated savings related to CTO fees for the Navy will not be realized if the Navy continues to use the management fee concept. 
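To make the arithmetic behind these CTO fee figures explicit, the sketch below recomputes the savings estimate from the figures cited above. It is a minimal illustration, not DOD’s model: the department-wide ticket volume is not given in the report, so it is backed out of the $31 million estimate, and the observed Region 6 per-ticket difference is taken as roughly $11, the midpoint of the $10 to $12 range.

```python
# Back-of-envelope recomputation of the CTO fee savings estimate.
# From the report: the $31 million estimate, the 70 percent "no touch"
# assumption, the $13.50 per-ticket savings, the observed 47 percent rate, the
# $10-$12 observed per-ticket difference, and the Navy fee figures.
# Our assumptions: the implied ticket volume (backed out of the estimate) and
# the $11 midpoint of the observed range.

estimated_savings = 31_000_000           # annual CTO fee savings in the analysis
assumed_no_touch_rate = 0.70             # economic analysis assumption
assumed_savings_per_ticket = 13.50       # economic analysis assumption

implied_tickets = estimated_savings / (assumed_no_touch_rate * assumed_savings_per_ticket)
print(f"Implied annual ticket volume: {implied_tickets:,.0f}")   # about 3.3 million

# Same volume, but using the Region 6 experience (at best 47 percent "no touch",
# roughly $11 saved per "no touch" ticket).
observed_savings = implied_tickets * 0.47 * 11.0
print(f"Savings at observed rates: ${observed_savings:,.0f}")    # about $17 million

# Navy management fee per ticket in fiscal year 2005.
navy_fee_per_ticket = 14_500_000 / 762_700
print(f"Navy management fee per ticket: ${navy_fee_per_ticket:.2f}")   # about $19
```

Holding the implied ticket volume constant, the Region 6 experience would yield roughly $17 million, a little more than half of the $31 million assumed in the economic analysis, which is consistent with the concerns described above.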
Effective implementation of DOD guidance would have detected the types of problems discussed above and resulted in an economic analysis that would have accomplished the stated objective of the process—to help ensure that the funds invested in DTS were used efficiently and effectively. DOD policy and OMB guidance require that an economic analysis be based on facts and data and be explicit about the underlying assumptions used to arrive at estimates of future benefits and costs. Since an economic analysis deals with costs and benefits occurring in the future, assumptions must be made to account for uncertainties. DOD policy recognizes this and provides a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. A sound economic analysis recognizes that there are alternative ways to meet a given objective and that each alternative requires certain resources and produces certain results. The purpose of the economic analysis is to give the decision maker insight into economic factors bearing on accomplishing the objectives. Therefore, it is important to identify factors, such as cost and performance risks and drivers, which can be used to establish and defend priorities and resource allocations. The DTS economic analysis did not comply with the DOD policy, and the weaknesses we found should have been detected had the DOD policy been effectively implemented. The PMO-DTS had adequate warning signs of the potential problems associated with not following the OMB and DOD guidance for developing an effective economic analysis. For example, as noted earlier, the Air Force and Navy provided comments when the economic analysis was being developed that the expected benefits being claimed were unrealistic. Just removing the benefits associated with personnel savings from the Air Force and Navy would have reduced the overall estimated program cost savings by almost 45 percent. This would have put increased pressure on the credibility of using a 70 percent “no touch” utilization rate. The following are examples of failures to effectively implement the DOD policy on conducting economic analyses and the adverse effects on the DTS economic analysis. The DTS life-cycle cost estimates portion of the economic analysis was not independently validated as specified in DOD’s guidance. PMO-DTS officials acknowledged that there was not an independent assessment of the DTS life-cycle cost estimates. However, they noted that the department’s Office of Program Analysis and Evaluation had provided comments on the economic analysis. Program Analysis and Evaluation officials informed us that they did not perform an independent assessment of the DTS economic analysis because the data were not available to validate the reliability of that analysis. Program Analysis and Evaluation officials also noted that they had raised similar concerns about the July 2003 economic analysis, but those issues had not been resolved when the September 2003 economic analysis was provided for their review. Because the September 2003 DTS life-cycle cost estimates were not independently assessed, the department did not have reasonable assurance that the reported estimates were realistic, that the assumptions on which the analysis was based were valid, or that the estimated rate of return on the investment could reasonably be expected to be realized. 
The September 2003 DTS economic analysis did not undertake an assessment of the effects of the uncertainty inherent in the estimates of benefits and costs, as required by DOD and OMB guidance. Because an economic analysis uses estimates and assumptions, it is critical that a sensitivity analysis be performed to understand the effects of the imprecision in both underlying data and modeling assumptions. This analysis is required since the estimates of future benefits and costs are subject to varying degrees of uncertainty. For example, according to DOD officials, the number of travel transactions has remained relatively stable over the years. On the other hand, as discussed previously, the number of transactions that can be processed as “no touch” is unknown. Sensitivity analysis refers to changing the value of a given variable in a model to gauge the effect of the change on model results. More importantly, it identifies key elements (the data and assumptions discussed above) and varies a single element while holding the others constant to determine what amount of change in that element is required to raise or lower the resulting dominant benefit and cost elements by a set amount. In this way, data and assumptions can be risk-ranked for decisionmaking and auditing. In the case of DTS, we requested that the PMO-DTS determine the effects of a change in the “no touch” transaction percentage. With all other factors remaining the same, DTS would have to achieve a 35 percent “no touch” transaction rate just to break even—where tangible costs and benefits are equal. Had DOD performed such an analysis, it would have understood that depending solely on an industry trade publication as its support for the “no touch” transaction percentage had major implications for the potential savings. Although the September 2003 economic analysis was not based on supportable data, the department’s criteria do not require that a new economic analysis be prepared. DTS has already completed all of the major milestones related to a major automated system, which require that an economic analysis be prepared or at least updated to reflect the current assumptions and the related costs and benefits. However, the fiscal year 2005 defense authorization act requires the periodic review, not less often than annually, of every defense business system investment. Further, the department’s April 2006 guidance notes that the annual review process “provides follow-up assurance that information technology investments, which have been previously approved and certified, are managed properly, and that promised capabilities are delivered on time and within budget.” If effectively implemented, this annual review process provides an excellent opportunity for DOD management to assess whether DTS is meeting its planned cost, schedule, and functionality goals. Going forward, such a review could serve as a useful management tool in making funding and other management decisions related to DTS. Our September 2005 testimony and January 2006 report noted the challenge facing the department in attaining the anticipated utilization of DTS. While DOD has acknowledged the underutilization, we found that across DOD, the department does not have reasonable quantitative metrics to measure the extent to which DTS is actually being used. Presently, the reported DTS utilization is based on a DTS Voucher Analysis Model that was developed in calendar year 2003 using estimated data, but over the years the model has not been completely updated with actual data.
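A defensible utilization metric of the kind the department lacks is straightforward to compute once the universe of DTS-eligible vouchers is known. The following minimal sketch illustrates the calculation; the voucher counts are hypothetical placeholders, not figures from our work, and the only report-based element is the principle that vouchers DTS cannot process, such as permanent change of station travel, must be excluded from the denominator.

```python
# Minimal sketch of a site-level DTS utilization metric of the kind the report
# says the department lacks. All counts are hypothetical placeholders.

def dts_utilization(dts_vouchers: int, legacy_vouchers: int,
                    legacy_ineligible: int) -> float:
    """Share of DTS-eligible vouchers actually processed through DTS.

    dts_vouchers      -- vouchers processed through DTS
    legacy_vouchers   -- vouchers processed through legacy systems
    legacy_ineligible -- legacy vouchers DTS cannot process (for example,
                         permanent change of station travel), excluded from
                         the eligible universe
    """
    eligible_universe = dts_vouchers + (legacy_vouchers - legacy_ineligible)
    return dts_vouchers / eligible_universe

# Hypothetical month at a single site.
rate = dts_utilization(dts_vouchers=2_500, legacy_vouchers=1_000, legacy_ineligible=700)
print(f"DTS utilization: {rate:.0%}")   # about 89%
```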
While the military services have initiated actions to help increase the utilization of DTS, they pointed out that ineffective DTS training is a contributing factor to the lower than expected usage rate by the military services. The DTS Voucher Analysis Model was prepared in calendar year 2003 and based on airline ticket and voucher count data that were reported by the military services and defense agencies, but the data were not verified or validated. Furthermore, PMO-DTS officials acknowledged that the model has not been completely updated with actual data as DTS continues to be implemented at the 11,000 sites. We found that the Air Force is the only military service that submits monthly metrics to the PMO-DTS officials for their use in updating the DTS Voucher Analysis Model. Rather than reporting utilization based on individual site system utilization data, the PMO-DTS continues to rely on outdated information in the reporting of DTS utilization to DOD management and Congress. We have previously reported that best business practices indicate that a key factor of project management and oversight is the ability to effectively monitor and evaluate a project’s actual performance against what was planned. In order to perform this critical task, best business practices require the adoption of quantitative metrics to help measure the effectiveness of a business system implementation and to continually measure and monitor results, such as system utilization. This lack of accurate and pertinent utilization data hinders management’s ability to monitor its progress toward the DOD vision of DTS as the standard travel system, as well as to provide consistent and accurate data to Congress. With the shift of the DTS program to BTA, which now makes DTS an enterprisewide endeavor, improved metrics and training are essential if DTS is to be DOD’s standard, integrated, end-to-end travel system for business travel. Table 3 presents DTS’s reported percentage of utilization during the period October 2005 through April 2006. PMO-DTS officials calculated these utilization percentages by comparing the actual number of travel vouchers processed through DTS to the outdated universe of travel transaction data per the model, as described previously. Because the PMO-DTS was not able to identify the total number of travel vouchers that should have been processed through DTS (total universe of travel vouchers), the utilization percentages shown in table 3 may be over- or understated. PMO-DTS program officials confirmed that the reported utilization data were not based on complete data because the department did not have comprehensive information to identify the universe or the total number of travel vouchers that should be processed through DTS. PMO-DTS program and DTS military service officials agreed that the actual DTS utilization rate should be calculated by comparing actual vouchers being processed in DTS to the total universe of vouchers that should be processed in DTS. The universe would exclude those travel vouchers that cannot be processed through DTS, such as those related to permanent change of station travel. The Air Force was the only military service that attempted to obtain data on (1) the actual travel vouchers processed through DTS and (2) those travel vouchers eligible to be processed through DTS, but were not. These data were site specific. 
For example, during the month of December 2005, the PMO-DTS reported that at Wright-Patterson Air Force Base, 2,880 travel vouchers were processed by DTS, and the Air Force reported that another 2,307 vouchers were processed through the legacy system—the Reserve Travel System (RTS). Of those processed through RTS, Air Force DTS program officials stated that 338 travel vouchers should have been processed through DTS. DTS Air Force program officials further stated that they submitted to the PMO-DTS the number of travel vouchers processed through RTS each month. These data are used by the PMO-DTS to update the DTS Voucher Analysis Model. However, neither the Air Force nor the PMO-DTS have verified the accuracy and reliability of the data. Therefore, the accuracy of the utilization rates reported for the Air Force by the PMO-DTS is not known. As shown in table 3, PMO-DTS officials reported utilization data for the Air Force from a low of 29 percent (January 2006) to a high of 48 percent (November 2005) during the 7-month period ending April 2006. Because Army and Navy DTS program officials did not have the information to identify the travel transactions that should have been processed through DTS, the Army and Navy did not have a basis for evaluating DTS utilization at their respective military locations and activities. Furthermore, Navy DTS program officials indicated that the utilization data that the PMO-DTS program officials reported for the Navy were not accurate. According to Navy DTS program officials, the Navy’s primary source of utilization data was the monthly metrics reports provided by the PMO-DTS, but Navy DTS program officials questioned the accuracy of the Navy utilization reports provided by the PMO-DTS. For example, the Navy PMO-DTS utilization site report has a site name of Ballston, Va.; however, Ballston, Va. is not listed on the map site names on the DTS contractor’s database. As a result, the PMO-DTS Navy utilization report for this location indicates no usage every month. Our analysis indicated that this was 1 of at least 33 similar instances where no usage was reported for a nonexistent location. Navy DTS program officials stated that an effort is underway to “re-map” all Navy organizations to the correct site name, but as of June 2006 this effort had not been completed. Another example indicates the inconsistencies that exist in the different information used by the Navy and the PMO-DTS program officials to report utilization rates for the Navy. The PMO-DTS program officials reported that the Navy had a total of 9,400 signed, original vouchers processed through DTS during December 2005; however, this is less than the 10,523 reported by the DTS contractor for the same month. According to Navy DTS program officials, they have not been able to confirm whether either figure is correct. Since the number of DTS vouchers is required to calculate utilization, the Navy is unable to determine the accuracy of the utilization metrics reported by the PMO-DTS officials, as shown in table 3. While the military services have issued various memorandums that direct or mandate the use of DTS to the fullest extent possible at those sites where DTS has been deployed, resistance still exists. As highlighted below, deployed sites are still using non-DTS systems, or legacy systems, to process TDY travel. 
The Army issued a memorandum in September 2004 directing each Army installation to fully disseminate DTS to all travelers within 90 to 180 days after Initial Operating Capability (IOC) at each installation. Subsequently in September 2005, DFAS officials reported that 390,388 travel vouchers were processed through the Army’s legacy system—the Windows Integrated Automated Travel System, but DFAS officials could not provide a breakout of how many of the 390,388 travel vouchers should have been processed through DTS. The Air Force issued a memorandum in November 2004 that stressed the importance of using DTS once it was implemented at an installation. The Air Force memorandum specifically stated that business, local, and group travel vouchers should be electronically processed through DTS and that travel claims should not be submitted to the local finance office for processing. However, we found that Air Force travelers continued to process travel claims through legacy systems, such as RTS. For example, during the month of November 2005, the Air Force reported that 3,277 business vouchers, 1,875 local vouchers, and 1,815 group vouchers were processed through RTS that should have been processed through DTS. Additionally, a DFAS internal review analyzed Air Force vouchers during the period January 2005 through June 2005, at locations where DTS was deployed, and found that Air Force travelers used legacy systems to process 79 percent of all routine TDY transactions. The Navy issued a memorandum in May 2005 that directed the use of DTS to generate travel orders throughout all Navy locations. Navy DTS program officials reported in an April 2006 briefing that 18,300 travel vouchers were processed in DTS during the month of March 2006, but that over 90,000 travel vouchers were still being processed monthly through the Integrated Automated Travel System—a legacy system. Thus, despite memoranda issued by the military services, it appears that DTS continues to be underutilized by the military services. As discussed in our September 2005 testimony and January 2006 report, the unnecessary continued use of the legacy travel systems results in the inefficient use of funds because the department is paying to operate and maintain duplicative systems that perform the same function—travel. Besides the memorandums, DOD is taking other actions to increase DTS utilization as the following examples illustrate. The Assistant Secretary of the Army for Financial Management (Financial & Accounting Oversight Directorate) holds monthly Senior Focus Group meetings with the installation leadership of major commands to discuss DTS utilization issues and possible corrective actions. The Navy conducts quarterly video and telephone conferences with major commands and contacts commands with low usage to determine the causes for low DTS usage. The PMO-DTS conducts monthly working group meetings with the military service and defense agency DTS program officials to discuss DTS functionality issues and concerns, DTS usage, and other related DTS issues. Although the military services have issued various memorandums aimed at increasing the utilization of DTS, the military service DTS program officials all pointed to ineffective training as a primary cause of DTS not being utilized to a far greater extent. The following examples highlight the concerns raised by the military service officials. 
Army DTS program officials emphasized that the DTS system is complex and the design presents usability challenges for users—especially for first-time or infrequent users. They added that a major concern is that there is no PMO-DTS training for existing DTS users as new functionality is added to DTS. These officials stated that the PMO-DTS does not do a good job of informing users about functionality changes made to the system. We inquired if the Help Desk was able to resolve the users’ problems, and the Army DTS officials simply stated “no.” The Army officials further pointed out that it would be beneficial if the PMO-DTS improved the electronic training on the DTS Web site and made the training documentation easier to understand. Also, improved training would help infrequent users adapt to system changes. The Army officials noted that without some of these improvements to resolve usability concerns, DTS will continue to be extremely frustrating and cumbersome for travelers. Navy DTS program officials stated that DTS lacks adequate user/traveler training. The train-the-trainer concept of training system administrators who could then effectively train all their travelers has been largely unsuccessful. According to Navy officials, this has resulted in many travelers and users attempting to use DTS with no or insufficient training. The effect has frustrated users at each step of the travel process and has discouraged use of DTS. Air Force officials stated that new DTS system releases are implemented with known problems, but the sites are not informed of the problems. Workarounds are not provided until after the sites begin encountering problems. Air Force DTS program officials stated that DTS releases did not appear to be well tested prior to implementation. Air Force officials also stated that there was insufficient training on new functionality. PMO-DTS and DTS contractor program officials believed that conference calls to discuss new functionality with the sites were acceptable training, but Air Force officials did not agree. The Air Force finance office was expected to fully comprehend the information received from those conference calls and provide training on the new functionality to users/approvers, but these officials stated that this was an unrealistic expectation. Our September 2005 testimony and January 2006 report noted problems with DTS’s ability to properly display flight information and traced those problems to inadequate requirements management and testing. DOD stated that it had addressed those deficiencies and in February 2006, we again tested the system to determine whether the stated weaknesses had been addressed. We found that similar problems continue to exist. We also identified additional deficiencies in DTS’s ability to display flights that comply with the Fly America Act. DTS’s inability to display flights that comply with the Fly America Act places the traveler who purchases a ticket or the individual authorizing, certifying, or disbursing a payment made when a ticket is paid for directly by DOD through a centrally billed account at unnecessary risk of personal liability. Once again, these problems can be traced to ineffective requirements management and testing processes.
Properly defined requirements are a key element in systems that meet their cost, schedule, and performance goals since they define (1) the functionality that is expected to be provided by the system and (2) the quantitative measures by which to determine through testing whether that functionality is operating as expected. We briefed PMO-DTS officials on the results of our tests and in May 2006 the officials agreed that our continued concerns about the proper display of flight information and compliance with the Fly America Act were valid. PMO-DTS officials stated that the DTS technology refresh, which is to be completed in September 2006, should address some of our concerns. While these actions are a positive step forward, they do not address the fundamental problem that DTS’s requirements are still ambiguous and conflicting—a primary cause of the previous problems. Until a viable requirements management process is developed and effectively implemented, the department (1) cannot develop an effective testing process and (2) will not have reasonable assurance the project risks have been reduced to acceptable levels. In our earlier testimony and report, we noted that DOD did not have reasonable assurance that the flights displayed met the stated DOD requirements. Although DOD stated in each case that our concerns had been addressed, subsequent tests found that the problems had not been corrected. Requirements represent the blueprint that system developers and program managers use to design, develop, and acquire a system. Requirements should be consistent with one another, verifiable, and directly traceable to higher-level business or functional requirements. It is critical that requirements be carefully defined and that they flow directly from the organization’s concept of operations (how the organization’s day-to-day operations are or will be carried out to meet mission needs). Improperly defined or incomplete requirements have been commonly identified as a cause of system failure and systems that do not meet their cost, schedule, or performance goals. Requirements represent the foundation on which the system should be developed and implemented. As we have noted in previous reports, because requirements provide the foundation for system testing, significant defects in the requirements management process preclude an entity from implementing a disciplined testing process. That is, requirements must be complete, clear, and well documented to design and implement an effective testing program. Absent this, an organization is taking a significant risk that its testing efforts will not detect significant defects until after the system is placed into production. Our February 2006 analysis of selected flight information disclosed that DOD still did not have reasonable assurance that DTS displayed flights in accordance with its stated requirements. We analyzed 15 U.S. General Services Administration (GSA) city pairs, which should have translated into 246 GSA city pair flights for the departure times selected. However, we identified 87 flights that did not appear on one or more of the required listings based on the DTS requirements. For instance, our analysis identified 44 flights appearing on other DTS listings or airline sites that did not appear on the 9:00 a.m. DTS listing even though those flights (1) met the 12-hour flight window and (2) were considered GSA city pair flights—two of the key DTS requirements the system was expected to meet.
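The kind of requirement check we performed can be expressed as a simple filter over candidate flights. The sketch below is illustrative only; it is not DTS code or the contractor’s test logic, and the record layout, field names, and sample data are assumptions. It encodes the two requirements cited above: a flight should appear on a listing if it is a GSA city pair flight and departs within 12 hours of the requested departure time.

```python
# Illustrative check of the two display requirements cited above: a flight
# should appear on a listing if it is a GSA city pair flight and departs within
# 12 hours of the requested departure time. This is not DTS or contractor code;
# the record layout and sample data are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class Flight:
    carrier: str
    departs: datetime
    gsa_city_pair: bool

def should_be_listed(flight: Flight, requested_departure: datetime) -> bool:
    within_window = abs(flight.departs - requested_departure) <= timedelta(hours=12)
    return flight.gsa_city_pair and within_window

requested = datetime(2006, 2, 15, 9, 0)   # hypothetical 9:00 a.m. request
candidates = [
    Flight("Carrier A", datetime(2006, 2, 15, 7, 30), gsa_city_pair=True),
    Flight("Carrier B", datetime(2006, 2, 15, 18, 45), gsa_city_pair=True),
]
displayed = {Flight("Carrier A", datetime(2006, 2, 15, 7, 30), gsa_city_pair=True)}

# Flights that meet both requirements but are missing from the listing are
# display defects, analogous to the omitted flights discussed above.
missing = [f for f in candidates
           if should_be_listed(f, requested) and f not in displayed]
print(f"Qualifying flights not displayed: {len(missing)}")   # 1
```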
After briefing PMO officials on the results of our analysis in February 2006, the PMO-DTS employed the services of a contractor to review DTS to determine the specific cause of the problems and recommend solutions. In a March 2006 briefing, the PMO-DTS acknowledged the existence of the problems and identified two primary causes. First, part of the problem was attributed to the methodology used by DTS to obtain flights from the Global Distribution System (GDS). The PMO-DTS stated that DTS was programmed to obtain a “limited” amount of data from GDS in order to reduce the costs associated with accessing GDS. This helps to explain why flight queries we reviewed did not produce the expected results. To resolve this particular problem, the PMO-DTS proposed increasing the amount of data obtained from GDS. Second, the PMO-DTS acknowledged that the system testing performed by the contractor responsible for developing and operating DTS was inadequate and, therefore, there was no assurance that DTS would provide the data in conformance with the stated requirements. This weakness was not new, but rather reconfirms the concerns discussed in our September 2005 testimony and January 2006 report related to the testing of DTS. Our analysis also found that DOD did not have reasonable assurance that the system displayed flights in compliance with the requirements of the Fly America Act. In 1996, Congress assigned the Administrator, GSA, the responsibility to determine the situations for which appropriated funds could be used consistent with the Fly America Act, and GSA has published its rules in the Federal Travel Regulation (FTR). Within the basic guidelines that GSA publishes, agencies must establish “internal procedures” to ensure that agency reimbursements with federal funds for travelers’ air carrier expenses are made only in compliance with the Fly America Act and the FTR rules. As a result, when DTS displays flights that do not comply with these requirements, it places the traveler who purchases a ticket, or the individual authorizing, certifying, or disbursing a payment made when a ticket is paid for directly by DOD—such as those tickets purchased using a centrally billed account—at unnecessary risk of personal liability. DOD guidance expressly states that for code-sharing airline tickets related to foreign travel (1) the entire airline ticket must be issued by and on the U.S.-flag carrier (not necessarily the carrier operating the aircraft) and (2) the flight must be between the continental United States and a foreign destination. If these conditions are not met, DOD requires a determination that a U.S.-flag carrier is not available or use of a non-U.S.-flag carrier is necessary. These requirements are commonly referred to as the Fly America Act requirements. According to PMO-DTS officials, DTS’s requirements are intended to comply with the Fly America Act. However, our analysis of March 2006 flight display data identified several instances in which flights were displayed to the DOD traveler that did not meet the requirements of the Fly America Act. For example, six of the first seven flights displayed between Santiago, Chile, and San Antonio, Texas, did not appear to comply with the Fly America Act requirements since they did not involve a U.S.-flag carrier. More importantly, several flights that appeared later in the listing and involved U.S.-flag carriers were more advantageous to the traveler because they required less actual travel time. Figure 1 shows the DTS display of flights.
According to DTS program officials, after our discussions relating to the flight displays and compliance with the Fly America Act, they did a “requirements scrub” to define the requirements that should be used to display flights, including those requirements relating to displaying flights that comply with the Fly America Act. The previous requirement stated that “DTS shall examine international trip records for compliance with DOD policy on the use of non-U.S.-flag carriers.” The revised requirement relating to international flights stated that the system should display flights that are (1) part of the GSA city pair program or (2) offered by U.S. carriers. If the system cannot find flights that meet these criteria, then the system is expected to instruct the user to contact their CTO to arrange the flight. According to PMO-DTS officials, this change has been incorporated into the production system. We conducted a limited nonstatistical test to determine if the examples of flights not complying with the Fly America Act identified in our earlier tests had been eliminated and found that these flights no longer appeared in the DTS flight displays. However, as we noted, DOD policy complies with the Fly America Act requirements, and compliance with that policy was already a DTS requirement in effect when we identified the examples of flight displays that did not comply with the act. In effect, this is another example of (1) inadequate testing by the DTS contractor and (2) DOD’s inability to ensure the system is meeting its requirements. Until DOD effectively analyzes and properly documents the functionality it desires, it has little assurance that the proper requirements have been defined. While DOD’s planned actions, if effectively implemented, should address several of the specific weaknesses we identified related to flight displays and the Fly America Act, they fall short of addressing the fundamental problems that caused those weaknesses—inadequate requirements management. DTS’s requirements continue to be ambiguous. For example, a system requirement was changed to “display,” that is, show the fares relating to the full GSA city pair fare only if the GSA city pair fare with capacity limits was not available. Based upon information provided by PMO-DTS officials, after the requirement was supposed to have been implemented, both fare types were shown on the DTS display screen. PMO-DTS officials stated that although both fares were shown, DTS was still expected to book the lower fare and that the requirement was really designed to ensure that the lower fare was booked. This requirement is ambiguous because it is not clear what the word “display” means in this context. Based upon the stated requirement, the most common interpretation would be that the word display implies information that is provided (or shown) to the DOD traveler. However, based on the PMO-DTS official’s explanation, the word display, in fact, means the fare that is booked. This type of ambiguity was one cause of problems we noted in the past where testing did not identify system defects and DTS did not display the proper flight information to the user. Furthermore, DOD is currently undergoing a technology upgrade of DTS that is scheduled for completion by September 30, 2006. This technology upgrade is expected to provide additional functionality; however, DOD still has not adequately defined the requirements that are needed to define flight displays for DOD travelers.
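The revised international display requirement described above reduces to a simple business rule. The following sketch restates that rule in code form; it is our illustration of the stated requirement, not the DTS implementation, and the flight attributes and sample data are assumed.

```python
# Restatement of the revised international display rule described above: show a
# flight only if it is part of the GSA city pair program or offered by a
# U.S.-flag carrier; if nothing qualifies, direct the traveler to the CTO.
# This is our illustration of the stated rule, not the DTS implementation, and
# the flight attributes are assumed.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class IntlFlight:
    carrier: str
    us_flag_carrier: bool
    gsa_city_pair: bool

def displayable(flight: IntlFlight) -> bool:
    return flight.gsa_city_pair or flight.us_flag_carrier

def build_display(flights: List[IntlFlight]) -> Union[List[IntlFlight], str]:
    eligible = [f for f in flights if displayable(f)]
    if not eligible:
        return "No qualifying flights found; contact your CTO to arrange this flight."
    return eligible

candidates = [
    IntlFlight("Foreign Carrier X", us_flag_carrier=False, gsa_city_pair=False),
    IntlFlight("U.S. Carrier Y", us_flag_carrier=True, gsa_city_pair=False),
]
print(build_display(candidates))   # only the U.S.-flag carrier flight is returned
```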
According to DTS program officials and the contractor responsible for the technology upgrade, the upgrade is intended to do the following:

- Replace the current display of up to 25 flights on one page in a predetermined order and separate the 25 flights into three categories—GSA city pair flights, Other Government Fares, and Other Unrestricted Flights—and then sort the flights by additional criteria such as elapsed travel time (rather than the current flight time), time difference from the requested departure time, number of stops, and whether the flight is considered a direct flight. This approach, if effectively implemented, addresses one problem we noted with the current process where flight time rather than elapsed travel time is used as one of the sorting criteria. It will also present flights that have the shortest duration in relation to the requested departure time at the top of the listing.

- Display the prices on all flights returned to the traveler. The current system displays the prices for the GSA city pair flights and allows the traveler to request prices for up to 10 additional flights at a time. This significantly improves the ability of the system to present information to the traveler that can be used to select the best flight for the government and allows the system to help ensure that the lowest cost flights are selected by the user. This is especially true when a GSA city pair fare is not available. According to DOD officials, it is cost prohibitive to obtain the pricing information for non-GSA city pair flights using the current technology.

Although these planned improvements should provide the DOD traveler with better travel information, they still fall short of adequately defining the requirements that should be used for displaying flights. For example, DOD has retained a requirement to display 25 flights for each inquiry. However, it has not determined (1) whether the rationale for that requirement is valid and (2) under what conditions flights that are not part of the GSA city pair program should be displayed. For example, we found that several DTS flights displayed to the user "overlap" other flights. Properly validating the requirements would allow DOD to obtain reasonable assurance that its requirements properly define the functionality needed and the business rules necessary to properly implement that functionality. As previously noted, requirements that are unambiguous and consistent are fundamental to providing reasonable assurance that a system will provide the desired functionality. Until DOD improves DTS requirement management practices, it will not have this assurance.

Overhauling the department's antiquated travel management practices and systems has been a daunting challenge for DOD. While it was widely recognized that this was a task that needed to be accomplished and savings could result, the underlying assumptions in support of those savings are not based on reliable data and therefore it is questionable whether the anticipated savings will materialize. Even though the overall savings are questionable, the successful implementation of DTS is critical to reducing the number of stovepiped, duplicative travel systems throughout the department. We have reported on numerous occasions that reducing the number of business systems within DOD can translate into savings that can be used for other mission needs.
Furthermore, the shift of DTS to BTA, which makes DTS an enterprisewide endeavor, should help in making DTS the standard integrated, end-to-end travel system for business travel. Management oversight is essential for this to become a reality. Equally important, however, will be the department's ability to resolve the long-standing difficulties that DTS has encountered with its requirements management and system testing. Until these issues are resolved, more complete utilization of DTS will be problematic.

To improve the department's management and oversight of DTS, which has been declared a DOD enterprise business system, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Personnel and Readiness) and the Director, Business Transformation Agency, to jointly take the following four actions:

- Evaluate the cost effectiveness of the Navy continuing with the CTO management fee structure versus adopting the revised CTO fee structure, once the new contracts have been awarded.

- Develop a process by which the military services develop and use quantitative data from DTS and their individual legacy systems to clearly identify the total universe of DTS-eligible transactions on a monthly basis. At a minimum, these data should be used to update the DTS Voucher Analysis Model to report DTS actual utilization rates.

- Require the PMO-DTS to provide a periodic report on the utilization of DTS to the Under Secretary of Defense (Personnel and Readiness) and the Director, Business Transformation Agency, once accurate data are available. The report should continue until the department has reasonable assurance that DTS is operating as intended at all 11,000 locations. The report should identify at a minimum (1) the number of defense locations at which DTS has been deployed, (2) the extent of DTS utilization at these sites, (3) steps taken or to be taken by the department to improve DTS utilization, and (4) any continuing problems in the implementation and utilization of DTS.

- Resolve inconsistencies in DTS requirements, such as the 25-flight display, by properly defining the (1) functionality needed and (2) business rules necessary to properly implement the needed functionality.

We received written comments on a draft of this report from the Under Secretary of Defense (Personnel and Readiness), which are reprinted in appendix II. DOD concurred with three and partially concurred with one of the recommendations. In regard to the recommendations with which the department concurred, it briefly outlined the actions it planned to take in addressing two of the three recommendations. For example, the department noted the difficulties in obtaining accurate utilization data from the existing legacy systems, but stated that the Office of the Under Secretary of Defense (Personnel and Readiness) and BTA will evaluate methods for reporting actual DTS utilization. Additionally, DOD noted that the Defense Travel Management Office developed and implemented a requirements change management process on May 1, 2006. In commenting on the report, the department stated that this process is intended to define requirements and track the entire life cycle of the requirements development process. As reiterated in this report, and discussed in our September 2005 testimony and January 2006 report, effective requirements management has been an ongoing concern, and we fully support the department's efforts to improve its management oversight of DTS's requirements.
In this regard, the department needs to have in place a process that provides DOD reasonable assurance that (1) requirements are properly documented and (2) requirements are adequately tested as recommended in our January 2006 report. This process should apply to all existing requirements as well as any new requirements. As discussed in this report, we reviewed some of the requirements in May 2006 that were to have followed the new requirements management process and found problems similar to those noted in our January 2006 report. While we did not specifically review the new process, if it does not include an evaluation of existing requirements, the department may continue to experience problems similar to those we previously identified.

DOD partially concurred with our recommendation to evaluate the cost effectiveness of the Navy continuing with the CTO management fee structure. DOD stated that all military service secretaries should participate in an evaluation to determine the most cost-effective payment method to the CTOs. DOD's response indicated that the Defense Travel Management Office is currently procuring commercial travel services for DOD worldwide in a manner that will ensure evaluation of cost effectiveness for all services. If DOD proceeds with the actions outlined in its comments, it will meet the intent of our recommendation.

Finally, DOD strongly objected to our finding that the personnel savings are unrealistic. In its comments, the department stated that DOD is facing an enormous challenge and the department continues to identify efficiencies and eliminate redundancies to help leverage available funds. We fully recognize that the department is attempting to improve the efficiency and effectiveness of its business operations. In fact, the Comptroller General of the United States testified in August 2006 that increased commitment by the department to address DOD's numerous challenges represents an improvement over past efforts. The fact remains, however, that the results of an economic analysis are intended to help management decide if future investments in a given endeavor are worthwhile. In order to provide management with this information, it is imperative that the underlying assumptions in an economic analysis be valid and supported by reliable data. The September 2003 economic analysis noted that personnel savings of $54.1 million, as shown in table 2 of this report, would be realized by the department annually for fiscal years 2009 through 2016. However, based upon our review and analysis of documentation and discussions with department personnel, we found that the underlying assumptions in support of the $54.1 million were not valid. Furthermore, as noted in the report, Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel with the full implementation of DTS. Further, as discussed in the report, the Naval Cost Analysis Division review of the DTS economic analysis noted that approximately 40 percent of the Navy's total costs, including personnel costs, in the DTS life-cycle cost estimates could not be validated because credible supporting documentation was lacking. The report does note that Air Force and Navy DTS program officials stated that while they did not anticipate a reduction in the number of personnel, there would be a shifting of personnel to other functions.
The report further points out that DOD officials responsible for reviewing economic analyses stated that while shifting personnel to other functions is considered a benefit, it should be considered an intangible benefit rather than tangible dollar savings since the shifting of personnel does not result in a reduction of DOD expenditures. Additionally, in its comments, the department provided no new data to counter our finding.

We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Personnel and Readiness); the Director, Business Transformation Agency; and the Director, Office of Management and Budget. Copies of this report will be made available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact McCoy Williams at (202) 512-9095 or williamsm1@gao.gov or Keith A. Rhodes at (202) 512-6412 or rhodesk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To assess the reasonableness of the key assumptions made by DOD to arrive at the net annual estimated savings of over $56 million shown in the September 2003 economic analysis addendum, we (1) ascertained if the economic analysis was prepared in accordance with the prescribed standards, (2) analyzed two key assumptions that represent the largest dollar savings for the DTS program, and (3) analyzed the supporting documentation related to these two assumptions to determine whether the assumptions were valid. Furthermore, we met with the military services and DFAS officials to ascertain their specific concerns with the estimated savings. We also met with Program Analysis and Evaluation officials to identify any issues they had with the DTS estimated savings. In performing this body of work, we relied heavily upon the expertise of our Applied Research and Methods' Center for Economics.

To determine the actions being taken to enhance the utilization of DTS, we met with military services officials to obtain an understanding of the specific actions that were being taken. In addition, we obtained and reviewed various memorandums related to the utilization of DTS. We also obtained an overview of the method and data used by the PMO-DTS to report the rate of DTS utilization for the various DOD components. We also met with the military services to ascertain how they use the PMO-DTS data to monitor their respective utilization, whether they augment these data with any other data, and, if so, the source of those data.

To ascertain whether DOD has reasonable assurance that the testing of DTS was adequate, and thereby ensure that accurate flight information was displayed, we met with Northrop Grumman and PMO-DTS officials to obtain an explanation of the corrective actions that were to have been implemented. To ascertain if the noted corrective actions had been successfully implemented, we analyzed 246 GSA city pair flights to determine if the information being displayed to the traveler was consistent with DTS's stated requirement. We did not review the accuracy and reliability of the specific dollar amounts shown in the September 2003 economic analysis.
Given the department’s previously reported problems related to financial management, we have no assurance that the underlying data supporting the economic analysis were complete. Furthermore, our emphasis was directed more towards the validity of the assumptions that were used to arrive at the net annual estimated savings of over $56 million. We determined that the data were sufficiently reliable for the purpose of this audit. We performed our audit work from October 2005 through July 2006 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received written comments from the Under Secretary of Defense (Personnel and Readiness), which are reprinted in appendix II. In addition to the above contacts, the following individuals made key contributions to this report: Darby Smith, Assistant Director; J. Christopher Martin, Senior-Level Technologist; F. Abe Dymond, Assistant General Counsel; Beatrice Alff; Harold Brumm, Jr.; Francine DelVecchio; Jason Kelly; and Tarunkant Mithani.
In 1995, the Department of Defense (DOD) began an effort to implement a standard departmentwide travel system. The Defense Travel System (DTS) is envisioned as DOD's standard end-to-end travel system. This report is a follow-up to GAO's January 2006 report, which highlighted DTS implementation problems. Because of continued congressional interest in DTS, GAO initiated this follow-up audit under the Comptroller General's statutory authority. GAO determined whether (1) two key assumptions made in the September 2003 economic analysis were reasonable, (2) DOD is taking action to ensure full utilization of DTS and gathering the data needed to monitor DTS utilization, and (3) DOD has resolved the previously identified problems with DTS flight information. To address the above objectives, GAO (1) reviewed the September 2003 DTS economic analysis, (2) analyzed DTS utilization data, and (3) analyzed DTS flight information.

GAO's analysis of the September 2003 DTS economic analysis found that the two key assumptions used to estimate annual net savings were not based on reliable information. Two cost components represent the majority of the over $56 million in estimated net savings—personnel savings and reduced commercial travel office (CTO) fees. In regard to the personnel savings, GAO's analysis found that the $24.2 million of personnel savings related to the Air Force and the Navy was not supported. Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel, but rather the shifting of staff from the travel function to other functions. The Naval Cost Analysis Division stated that the Navy will not realize any tangible personnel cost savings from the implementation of DTS. In regard to the CTO fees, the economic analysis assumed that 70 percent of all DTS airline tickets would either require no intervention or minimal intervention from the CTOs, resulting in an estimated annual net savings of $31 million. However, the sole support provided by the DTS program office was an article in a trade industry publication. The article was not based on information related to DTS, but rather on the experience of one private sector company. Furthermore, the economic analysis was not prepared in accordance with guidance prescribed by OMB and DOD. DOD guidance stated that the life-cycle cost estimates should be verified by an independent party, but this did not occur. The economic analysis also did not include an assessment of the effects of the uncertainty inherent in the estimates of benefits and costs. Because an economic analysis uses estimates and assumptions, it is critical that the imprecision in both the underlying data and assumptions be understood. Such an assessment is referred to as a sensitivity analysis.

DOD acknowledged that DTS is not being used to the fullest extent possible, but lacks comprehensive data to effectively monitor its utilization. DOD's utilization data are based on a model that was developed in calendar year 2003. However, the model has not been completely updated to reflect actual DTS usage. The lack of accurate utilization data hinders management's ability to monitor progress toward the DOD vision of DTS as the standard travel system. GAO also found that the military services have initiated actions that are aimed at increasing the utilization of DTS. Finally, GAO found that DTS still has not addressed the underlying problems associated with weak requirements management and system testing.
While DOD has acted to address concerns GAO previously raised, GAO found that DTS's requirements are still ambiguous and conflicting. For example, the requirement that DTS display up to 25 flights for each inquiry is questionable because it is unclear whether the rationale for that requirement remains valid. Until DOD improves DTS's requirements management practices, the department will not have reasonable assurance that DTS can provide the intended functionality.
In 2006, the majority—59 percent—of the roughly 4,900 nonfederal, acute care general hospitals in the United States were nonprofit. The rest included government hospitals (25 percent) and for-profit hospitals (17 percent). States varied—generally by region of the country—in their percentages of nonprofit hospitals (see fig. 1). States in the Northeast and Midwest had relatively high concentrations of nonprofit hospitals, whereas the concentration was relatively low in the South. For example, 88 percent of Massachusetts' hospitals were nonprofit, whereas only 32 percent of Texas' hospitals were nonprofit. Among nonprofit hospitals we examined in California, Indiana, Massachusetts, and Texas, the average size of these hospitals, as measured by total operating expenses, varied (see table 1). For example, the average total operating expenses of nonprofit hospitals in Massachusetts were 98 percent higher than the average total operating expenses of nonprofit hospitals in Indiana.

Federal tax exemption for charitable organizations has been in existence since the beginning of federal income tax law. This exemption is based on the principle that the government's loss of tax revenue is offset by its relief from financial burdens that it would otherwise have to meet with appropriations from public funds, and by the benefits resulting from the promotion of general welfare. Nonprofit hospitals have never been expressly categorized as tax-exempt organizations under section 501(c)(3) of the Internal Revenue Code. However, these hospitals are able to qualify for federal tax exemption under section 501(c)(3) of the Internal Revenue Code since IRS and courts have recognized the promotion of health for the benefit of the community—where medical assistance is afforded to the poor or where medical research is promoted—as a charitable purpose. Specifically, nonprofit hospitals must be organized and operated exclusively for the promotion of health, ensuring that no part of their net earnings inures to the benefit of any private individual, and may not participate in political campaigns on behalf of any candidate or conduct substantial lobbying activities.

IRS has also issued revenue rulings specifying how nonprofit hospitals can meet the requirements of federal tax exemption. In a 1956 revenue ruling, IRS required tax-exempt hospitals to provide charity care to the extent of their financial abilities, which was known as the financial ability standard. However, through another revenue ruling in 1969, IRS established the community benefit standard, which modified the charity care-based financial ability standard for determining how hospitals could qualify for tax-exempt status. The community benefit standard specified that nonprofit hospitals were not required to provide charity care to qualify for federal tax exemption, but they must provide a benefit to the community. Therefore, nonprofit hospitals could qualify for tax-exempt status so long as they benefited the community in a way that relieved a governmental burden and promoted general welfare, even if not every member of the community received a direct benefit. In the 1969 revenue ruling that established the community benefit standard, IRS recognized five factors that would support a nonprofit hospital's tax-exempt status.
These five factors were (1) the operation of an emergency room open to all members of the community without regard to ability to pay; (2) a governance board composed of community members; (3) the use of surplus revenue for facilities improvement, patient care, and medical training, education, and research; (4) the provision of inpatient hospital care for all persons in the community able to pay, including those covered by Medicare and Medicaid; and (5) an open medical staff with privileges available to all qualifying physicians. IRS further stated that tax-exempt status would be determined based on the facts and circumstances of each case, and that neither the absence of particular factors set forth in the 1969 revenue ruling nor the presence of other factors would be necessarily conclusive.

Nonprofit hospitals that qualify for tax-exempt status are exempt from federal income taxation, have access to bond financing that generates tax-free interest earnings for the bondholder—allowing these hospitals to borrow funds at a lower cost than nonexempt entities—and are eligible to receive contributions that are tax deductible for the donors. In addition, these hospitals may also be exempt under state law from state and local income, property, and sales taxes, which in some cases are of a greater value than the federal income tax exemption. Once nonprofit hospitals have applied for and are granted tax-exempt status by IRS, they must file Form 990 with IRS on an annual basis. Form 990 collects information such as revenues and expenses, and program service accomplishments. In December 2007, IRS released a revised Form 990 to include a schedule specific to hospitals—Schedule H—that requires nonprofit hospitals to report their provision of activities that benefit the community in specified categories: charity care, bad debt, unreimbursed cost of government health care programs, and other activities that benefit the community. The new hospital schedule will be mandatory starting in filing year 2010 for tax year 2009, and IRS officials have stated that complete data from the schedule may not be available until 2011, at the earliest.

In addition to meeting IRS's community benefit reporting requirements, hospitals that participate in the Medicare program—including nonprofit hospitals—must file hospital cost reports with CMS. The required cost report includes Worksheet S-10, which collects revenue and cost information on Medicaid, state and local indigent care programs, the State Children's Health Insurance Program, and other uncompensated care—defined by CMS as charity care and bad debt—provided by the hospitals. CMS, in consultation with the Medicare Payment Advisory Commission (MedPAC), is revising Worksheet S-10 as part of broader efforts to update the Medicare hospital cost report. Beyond these two federal requirements, some states also require hospitals to report their provision of community benefits using state-specific reporting instruments. In addition, when requested, some hospitals also report their community benefits to the state hospital associations or other trade organizations to which they belong.

IRS's community benefit standard to qualify for tax-exempt status allows nonprofit hospitals broad latitude to determine the services and activities that constitute community benefit. Furthermore, state community benefit requirements that hospitals must meet in order to qualify for state tax-exempt or nonprofit status vary substantially in scope and detail.
IRS’s community benefit standard that hospitals must meet to qualify for federal tax exemption provides broad latitude to the hospitals in determining the nature and amount of the community benefits they provide. Specifically, IRS, in a 1969 revenue ruling that established the current community benefit standard, modified the existing tax-exemption requirement that focused primarily on the level of charity care that a hospital provided. This 1969 revenue ruling also listed the five factors that demonstrated how a nonprofit hospital could benefit the community in a way that relieved governmental burden and promoted general welfare. While IRS recognized these five factors as supportive of a nonprofit hospital’s tax-exempt status, it also stated that a nonprofit hospital seeking exemption need not meet all five factors to qualify for tax-exempt status; instead, the determination is based on all the facts and circumstances, and the absence of a particular factor may not necessarily be conclusive. As stated by the Commissioner of Internal Revenue, some of the five factors are now common practice in the hospital community and are less relevant in distinguishing tax-exempt hospitals from their for-profit counterparts. For example, having an open medical staff, participating in Medicare and Medicaid, and treating all emergency patients without regard to ability to pay are common features of both tax-exempt and for-profit hospitals. Although the focus of IRS policy is no longer the level of charity care that hospitals provide, the 1956 revenue ruling remains relevant, and IRS and various courts have continued to take into account the extent to which a hospital provides charity care when determining an organization’s tax- exempt status. For example, among the factors that the Tax Court and several United States Courts of Appeals have considered in determining whether an organization met IRS’s tax exemption requirements were existence of a charity care policy, provision of free or below-cost services to individuals financially unable to make the required payments, and provision of additional community benefit—other than making hospital services available to all in the community—that either further the function of government-funded institutions or would not likely be provided within the community without a hospital subsidy. State community benefit requirements that hospitals must meet in order to qualify for state tax-exempt or nonprofit status vary substantially in scope and detail. Specifically, 15 of the states have community benefit requirements in statutes or regulations and 36 do not (see fig. 2). Of the 15 states with requirements, 5 states—Alabama, Mississippi, Pennsylvania, Texas, and West Virginia—specify a minimum amount of community benefits required in order for hospitals to be compliant with state requirements. Another 4 of the 15 states—Illinois, Indiana, Maryland, and Texas—have penalties for hospitals that fail to comply with their community benefit requirements. Appendixes III, IV, V, VI, and VII contain more information on state community benefit requirements and other related provisions. In addition to the variation in scope among state community benefit requirements, the level of detail among such requirements also varies substantially. Specifically, of the 15 states with community benefit requirements, 10 states have detailed requirements and 5 states have less- detailed requirements. 
The community benefit requirements of the 10 detailed states typically include some combination of the following factors: a definition of community benefit, requirements for a community benefit plan that sets forth how the hospital will provide community benefits, community benefit reporting requirements, and penalties for noncompliance. For example, California requires its nonprofit hospitals to adopt and annually update a community benefit plan, and annually submit a description of community benefit activities provided and their economic values, among other things. Similarly, Illinois requires its hospitals to develop an organizational mission statement and a community benefits plan for serving the community’s health care needs, and to submit an annual report of its community benefits plan, including a disclosure of the amount and types of community benefits actually provided. These states also typically define community benefit using examples of, and guidance on, the types of activities considered to be community benefit. For example, Illinois defines community benefit using examples of activities that the state considers to be community benefit and Maryland defines community benefit using both examples and guidance. In contrast, the remaining five states with less-detailed requirements either only require the provision of charity care or do not provide guidance on what counts as community benefit. For example, Alabama’s requirement only provides that charity care must constitute at least 15 percent of a hospital’s business in order for the hospital to be exempt from property tax; and Wyoming’s requirement does not specify which activities its nonprofit hospitals must provide, but makes clear that hospitals must provide benefit to the community to obtain or maintain tax-exempt status. Variations in the activities nonprofit hospitals define as community benefit lead to substantial differences in the amount of community benefits they report. Among the government standards and industry guidance used by nonprofit hospitals, consensus exists to define many activities and their associated expenses—charity care, the unreimbursed cost of means-tested government programs, and many other activities that benefit the community—as community benefit. However, consensus does not exist to define bad debt and the unreimbursed cost of Medicare—each of which represents a substantial cost for nonprofit hospitals, according to the state data we analyzed—as community benefit. Activities that benefit the community and their associated expenses, as defined by the community benefit standards and guidance that nonprofit hospitals use, generally fall into one of four categories: charity care, care for patients whose accounts result in bad debt (referred to as bad debt for the rest of the report), care for beneficiaries of government health care programs and their associated unreimbursed costs, and other activities that benefit the community. In these standards and guidance, charity care is generally defined as care provided to patients whom the hospital deems unable to pay all or a portion of their bills. Bad debt is generally defined as the uncollectible payment that patients are expected to, but do not, pay. The unreimbursed cost of government health care programs is generally defined as the shortfall created when a facility receives total payments that are less than the total costs of caring for public program beneficiaries. 
Government health care programs include both means-tested programs for which eligibility is based on financial need, such as Medicaid, and non-means-tested programs for which eligibility is not based on financial need, such as Medicare. Lastly, other activities that benefit the community typically include activities that address a community need, and exclude activities that generate revenue for the hospital or are provided primarily for marketing purposes. These other activities generally fall into one of seven groups that the CHA and VHA guidance has identified, such as health professions education and medical research. Appendix II contains descriptions and examples of all seven groups.

Consensus exists among the standards and guidance that nonprofit hospitals use to define charity care as community benefit. Specifically, among the five government and industry guidance documents we examined, four—IRS, AHA, CHA and VHA, and HFMA—define charity care as community benefit, as did all four state hospital associations we interviewed. While CMS does not have a position on community benefit, its reporting instrument collects information on uncompensated care and defines the term to include charity care. In addition, of the 15 states with community benefit requirements, 14 either explicitly define community benefit to include charity care or, in the absence of a definition, mention charity care as an example of community benefit.

However, consensus does not exist among the standards and guidance that nonprofit hospitals use to define bad debt as community benefit. Among the five government and industry guidance documents we examined, two—CHA and VHA, and HFMA—specify that bad debt should not be defined as community benefit. CHA and VHA state that hospitals have the responsibility to better identify patients eligible for charity care, and thus distinguish charity care from bad debt. HFMA does not define bad debt as community benefit but, citing the difficulty of obtaining appropriate documentation to determine charity care eligibility, has stated that hospital charity care policies should address how to determine eligibility when patients do not provide sufficient information to formally make a determination. In contrast, AHA defines bad debt as community benefit, as do three of the four state hospital associations we interviewed. AHA asserts that it should be defined as community benefit because the majority of bad debt is attributable to low-income patients who would qualify for charity care if hospitals were able to obtain the necessary documentation to formally make this determination. IRS, on the other hand, has not taken a position on whether to define bad debt as community benefit (see table 2). The agency recognizes the divergence of practices and views in this area and, as stated by its officials, would like more information on the amount of bad debt attributable to low-income patients. As a result, IRS's community benefit reporting instrument—Form 990, Schedule H—will collect data on bad debt separately from the list of hospital activities that are traditionally included as community benefit, permit hospitals to explain why certain portions of bad debt should be defined as community benefit, and allow hospitals to estimate how much bad debt is attributable to low-income patients. CMS does not have a position on community benefit; however, its reporting instrument collects information on uncompensated care and defines the term to include bad debt.
State community benefit requirements vary in whether they define bad debt as community benefit. Of the 15 states with community benefit requirements, 3 states explicitly include bad debt as community benefit, 2 states explicitly exclude bad debt, and 10 states do not specify. Whether nonprofit hospitals define bad debt as community benefit has an important effect on the resulting amount of community benefit reported. Specifically, nearly all of the nonprofit hospitals in the four states we examined reported bad debt, and the amounts were typically substantial when compared to charity care (see fig. 3). For example, in 2006 in California, the average percentage of total operating expenses devoted to bad debt was 7.4 percent—almost five times the average percentage devoted to charity care costs. Moreover, the amounts of hospitals’ bad debt varied widely across hospitals. For example, among nonprofit hospitals in Texas, which had the most variation, the middle 50 percent of hospitals reported bad debt ranging from 7.4 to 19.1 percent of total operating expenses in 2006. Among the middle 50 percent of nonprofit hospitals in Massachusetts, which had the least variation, the span was still notable with bad debt ranging from 2.2 to 4.6 percent of total operating expenses in 2006. Consensus exists among the standards and guidance nonprofit hospitals use to define the unreimbursed cost of means-tested government health care programs, such as Medicaid, as community benefit. Among the five government and industry guidance documents we examined, four—IRS, AHA, CHA and VHA, and HFMA—define the unreimbursed cost of such programs as community benefit, as did all four state hospital associations we interviewed. While CMS does not have a position on community benefit, its reporting instrument collects information on uncompensated care and includes the unreimbursed cost of such programs as a type of uncompensated care. In addition, state community benefit requirements generally include the unreimbursed cost of such programs as community benefit. Specifically, of the 15 states with community benefit requirements, 9 states explicitly include the unreimbursed cost of means-tested government health care programs as community benefit, none of the states explicitly exclude this cost, and 6 states do not specify. Consensus does not, however, exist to define the unreimbursed cost of Medicare as community benefit. Among the five government agencies and industry groups we examined, only the CHA and VHA guidance specifies that the unreimbursed cost of Medicare should not be defined as community benefit because, among other reasons, Medicare losses for some hospitals may be associated with inefficiency and not underpayment. CHA and VHA also note that all hospitals compete to attract Medicare beneficiaries, and CHA further stated that serving Medicare beneficiaries is not a differentiating feature of nonprofit hospitals. In contrast, AHA defines the unreimbursed cost of Medicare as community benefit, and HFMA states that hospitals should decide, based on their circumstances, whether the unreimbursed cost of Medicare should be defined as community benefit. AHA asserts that the unreimbursed cost of Medicare should be defined as community benefit because Medicare does not fully compensate hospitals for the cost of providing hospital care to Medicare beneficiaries. AHA also notes that Medicare, like Medicaid, serves a large number of low-income beneficiaries. 
HFMA states that the unreimbursed cost of Medicare can be an important issue for many providers and that such losses can be material to the facility’s financial status; therefore, each hospital should decide, based on its circumstances, whether to report these costs as community benefit. Similarly, all four state hospital associations we interviewed stated that they define the unreimbursed cost of Medicare as community benefit. IRS has not taken a position on whether to define the unreimbursed cost of Medicare as community benefit (see table 3). Its officials have stated that, similar to IRS’s position on bad debt, IRS’s community benefit reporting instrument will collect revenue and cost information related to hospitals’ Medicare beneficiaries separately from the list of hospital activities that are traditionally included as community benefit, and permit hospitals to explain why they believe all or a portion of these costs should be defined as community benefit. CMS, which does not have a position on community benefit, does not collect information on the unreimbursed cost of Medicare. State community benefit requirements vary in whether the unreimbursed cost of Medicare should be included as community benefit. Of the 15 states with community benefit requirements, 6 states explicitly include the unreimbursed cost of Medicare as community benefit, none of the states explicitly exclude this cost, and 9 states do not specify. Whether nonprofit hospitals define the unreimbursed cost of Medicare as community benefit has an important effect on the resulting amount of community benefit reported. Specifically, most of the nonprofit hospitals in the four states we examined—over 90 percent in Texas and over 80 percent in California, Indiana, and Massachusetts—reported having unreimbursed costs of Medicare, and the amounts were typically substantial compared to charity care costs and the unreimbursed cost of Medicaid (see fig. 4). For example, in all four states the unreimbursed cost of Medicare as a percentage of total operating expenses was at least 86 percent more than charity care costs as a percentage of the same expenses. Similarly, the unreimbursed cost of Medicare as a percentage of total operating expenses was at least 54 percent more than the unreimbursed cost of Medicaid as a percentage of the same expenses. Moreover, the amount of hospitals’ unreimbursed cost of Medicare varied widely across hospitals. For example, among nonprofit hospitals in Indiana, which had the most variation, the middle 50 percent of hospitals reported unreimbursed costs of Medicare ranging from 4.9 to 13.4 percent of total operating expenses in 2006. Among the middle 50 percent of nonprofit hospitals in Massachusetts, which had the least variation, the span was still notable with unreimbursed costs of Medicare ranging from 2.4 to 8.0 percent of total operating expenses in 2006. Consensus exists among the standards and guidance nonprofit hospitals use to define six of the seven groups of other activities as community benefit: cash and in-kind contributions, community benefit operations, community health improvement services, health professions education, medical research, and subsidized health services. State community benefit requirements on these activities vary. For example, 13 of the 15 states with community benefit requirements cite additional activities— other than charity care, bad debt, or government health care programs—as community benefit. 
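To show how shortfall figures like those above are computed, the sketch below applies the definition given earlier in this section: the unreimbursed cost of a government health care program is the total cost of caring for program beneficiaries minus the total payments received, often expressed as a share of total operating expenses. The dollar amounts are hypothetical and are not taken from the state data analyzed in this report.

    # Hypothetical figures; a minimal sketch of the shortfall calculation,
    # not actual hospital data from the state files discussed in this report.
    total_cost_of_medicare_patients = 48_000_000   # cost of caring for Medicare beneficiaries
    total_medicare_payments = 41_000_000           # payments received for that care
    total_operating_expenses = 400_000_000         # hospital's total operating expenses

    shortfall = total_cost_of_medicare_patients - total_medicare_payments
    share = 100 * shortfall / total_operating_expenses
    print(f"Unreimbursed cost of Medicare: ${shortfall:,} "
          f"({share:.1f} percent of total operating expenses)")
    # Prints: Unreimbursed cost of Medicare: $7,000,000 (1.8 percent of total operating expenses)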
For these states, the most commonly cited type of activity appears to be subsidized health services, although the exact term used varies among the states. In contrast, consensus does not exist to define the seventh group of activities—community-building activities—as community benefit. AHA, CHA and VHA, and HFMA define community-building activities as community benefit because these activities provide opportunities to address the underlying causes of health problems, such as poverty, homelessness, and environmental problems. IRS, however, has not taken a position on whether to define community-building activities, which include activities such as physical improvements and housing programs, economic development, and environmental improvements, as community benefit. The agency recognizes that there appears to be widespread support for including these activities, and while the agency believes that certain of these activities might constitute community benefit, more data and study are required. CMS also does not comment on what other activities should be defined as community benefit.

While data are not available to evaluate the effect of defining community-building activities as community benefit, data on groups of other activities that benefit the community indicate that they represent a relatively small proportion of total operating expenses for hospitals. Only two of the four states we examined—Indiana and Texas—collect data on other activities that benefit the community, though even these states do not collect any data on two of the seven categories of other activities that benefit the community. For the five groups of other activities with data, fewer hospitals in Indiana and Texas generally reported having unreimbursed costs for these activities when compared with other types of community benefits, such as charity care, and the unreimbursed costs of most activities account for less than 1 percent each of total operating expenses, on average (see fig. 5). For example, more hospitals in these two states reported having unreimbursed costs for community health improvement services than for the other four groups—over two-thirds of Indiana nonprofit hospitals and almost three-quarters of Texas nonprofit hospitals reported having these costs. Among Texas and Indiana nonprofit hospitals, the unreimbursed costs of these services averaged only 0.6 percent of total operating expenses in 2006. In contrast, few hospitals reported having unreimbursed costs for medical research—less than 15 percent of nonprofit hospitals in both states reported these costs. Among Indiana nonprofit hospitals reporting these costs, the unreimbursed costs of medical research averaged only 0.1 percent of total operating expenses in 2006. In Texas, however, these costs averaged 0.8 percent, and the top quarter of hospitals had unreimbursed costs at least twice the average—at 1.7 percent in 2006. In addition to representing a small proportion of total operating expenses, the costs of these other activities are generally smaller than the costs of the other types of community benefits, such as charity care, bad debt, and the unreimbursed costs of Medicaid and Medicare (see fig. 6). For example, among nonprofit hospitals in Texas that incurred costs for providing other community benefits, the average cost of these activities—at 11 percent—is the smallest of the different groups of community benefits.
Nonprofit hospitals may use a variety of practices to measure the costs of community benefit activities, and differences in these practices can affect the amount of community benefits they report. For example, standards and guidance used by nonprofit hospitals specify a variety of levels at which hospitals can report their community benefit. Specifically, IRS requires hospitals to report community benefit on Form 990 by employer identification number (EIN) because tax exemption is determined by EIN. An EIN may cover a single hospital, several hospitals, or other aggregates. In contrast, CMS requires hospitals to submit cost reports, which include Worksheet S-10 with data on uncompensated care, at an individual hospital level. Industry stakeholders, such as AHA and CHA, have stated that hospitals should have the choice to report community benefits on a health care system level or as individual hospitals. CHA has stated that hospitals should have this option because, for example, they may also have established foundations or free health clinics as separate taxable entities through which they provide community benefit; hospitals should therefore have the option to include this community benefit in their reports. HFMA does not specify the level at which hospitals should report community benefit. The percentage of expenses devoted to community benefit could differ for hospitals that belong to a system depending on whether they reported at a system or individual level, because reporting at a system level aggregates the percentages of each hospital. One official from a state hospital association noted that because individual hospital percentages would be aggregated when community benefits are reported at a system level, there is a potential for a health care system as a whole, and not necessarily each individual hospital, to meet a community benefit standard. Data are not available that would allow us to evaluate the impact of differences in the level at which nonprofit hospitals report community benefit. IRS’s forthcoming Form 990, Schedule H, which will collect community benefit data, will be of limited use for comparing individual hospitals’ reported community benefits because, as noted, hospitals may report community benefit as a single hospital or a larger aggregate, such as a health care system. CMS’s Worksheet S-10 collects data on an individual hospital level, but we have found the data to be unreliable. MedPAC has stated that Worksheet S-10 should be improved, calling specifically for differentiating charity care and bad debt. Although Worksheet S-10 could yield reliable data in the future, it does not currently collect data on all the activities IRS includes as community benefit, such as medical research or subsidized health services. Standards and guidance used by nonprofit hospitals also differ in how they instruct hospitals to estimate costs of community benefit activities. Specifically, CHA and VHA and HFMA advocate calculating costs, if possible, using a cost-accounting system. However, one state hospital association we spoke with stated that smaller hospitals may not be able to use this method. In contrast, CMS instructs hospitals to estimate costs on Worksheet S-10 using a cost-to-charge ratio (CCR). CHA and VHA also suggest using a CCR when a cost-accounting system cannot be used. There are, however, many methods of calculating a CCR; CMS and CHA and VHA specify how hospitals should calculate the CCR used to determine charity care costs, but their formulas differ. 
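Because the CMS and CHA/VHA formulas differ, the sketch below shows only the generic idea behind a cost-to-charge ratio: divide total costs by gross charges, then apply that ratio to the charges associated with charity care to estimate its cost. The figures and the use of a single hospital-wide ratio are simplifying assumptions for illustration, not any organization's prescribed method.

    # Generic cost-to-charge ratio (CCR) illustration with hypothetical figures.
    # Actual CMS and CHA/VHA formulas differ in which costs and charges they include.
    total_costs = 300_000_000        # hospital's total costs
    gross_charges = 750_000_000      # total gross patient charges

    ccr = total_costs / gross_charges          # 0.40 in this example

    charity_care_charges = 20_000_000          # gross charges written off as charity care
    estimated_charity_care_cost = ccr * charity_care_charges
    print(f"CCR = {ccr:.2f}; estimated charity care cost = ${estimated_charity_care_cost:,.0f}")
    # Prints: CCR = 0.40; estimated charity care cost = $8,000,000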
AHA does not specify how to estimate costs, but supports the CHA and VHA guidance. IRS instructs hospitals to use a cost-accounting system, a CCR, or another cost-accounting method, whichever is most accurate in estimating costs. Data are not available that would allow us to evaluate the impact of the different practices hospitals use to estimate costs on the amount of reported community benefit. In addition to the different practices on reporting levels and methodologies for estimating costs, which affect every aspect of reported community benefit, standards and guidance used by nonprofit hospitals also specify a variety of practices to measure the costs of charity care, government health care programs, and other activities that benefit the community, which can lead to inconsistent reporting of these activities. Consensus does not exist on whether to add to charity care costs a nonprofit hospital’s contributions to uncompensated care pools or programs, or whether to offset charity care costs by payments to hospitals from uncompensated care pools or programs. AHA and CHA and VHA instruct hospitals to add their contributions and subtract the payments they receive to calculate charity care costs, but CMS and HFMA do not. IRS instructs hospitals to account for revenue from uncompensated care pools or programs as offsetting either charity care costs, the unreimbursed cost of Medicaid, or both, depending on the state’s primary purpose for the revenue. If the state’s primary purpose is unclear, IRS instructs hospitals to allocate portions of the revenue as offsetting either charity care costs or the unreimbursed cost of Medicaid, based on a reasonable estimate of the portions that are intended for charity care and Medicaid. Differences in how nonprofit hospitals calculate charity care costs can have an important effect on the resulting amount of community benefit a hospital reports. For nonprofit hospitals in Massachusetts in 2006, the average percentage of total operating expenses devoted to charity care would increase from 2.9 to 3.9 percent—a 34 percent increase—if hospital contributions to uncompensated care pools were added to charity care costs. If payments Massachusetts hospitals receive from uncompensated care pools are then subtracted from the sum, the average percentage of total operating expenses devoted to charity care would decrease from 3.9 to 1.8 percent, a 54 percent reduction. Consensus does not exist on how nonprofit hospitals are instructed to offset community benefit costs by Medicaid disproportionate share hospital (DSH) payments. CHA and VHA specify that hospitals can account for these payments as offsetting either charity care costs or the unreimbursed cost of Medicaid. IRS instructs hospitals to account for Medicaid DSH payments as offsetting either charity care costs, the unreimbursed cost of Medicaid, or both depending on the state’s primary purpose for the payment. If the state’s primary purpose is unclear, IRS instructs hospitals to allocate portions of the payments as offsetting either charity care costs or the unreimbursed cost of Medicaid based on a reasonable estimate of the portions that are intended for charity care and Medicaid. CMS does not specify whether these payments should offset any specific costs. AHA and HFMA do not specify whether to include these payments, but support the CHA and VHA guidance. 
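The Massachusetts percentages cited above can be reproduced with simple arithmetic, as sketched below. The charity care shares (2.9, 3.9, and 1.8 percent of total operating expenses) come from the text; only the percent-change calculations are added here, and each change is measured against the earlier figure, matching how the report states them.

    # Massachusetts nonprofit hospitals, 2006: charity care as a share of total
    # operating expenses under different treatments of uncompensated care pools.
    # Percentages are taken from the text above; only the percent changes are computed.
    base = 2.9                   # charity care alone
    with_contributions = 3.9     # after adding pool contributions
    net_of_payments = 1.8        # after also subtracting pool payments

    increase = 100 * (with_contributions - base) / base
    decrease = 100 * (with_contributions - net_of_payments) / with_contributions
    print(f"Adding pool contributions raises the share by about {increase:.0f} percent")
    print(f"Subtracting pool payments then lowers it by about {decrease:.0f} percent")
    # Prints roughly 34 percent and 54 percent, matching the figures in the text.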
Differences in how nonprofit hospitals calculate the unreimbursed cost of Medicaid can have an effect on the resulting amount of community benefit a hospital reports (see fig. 7). For example, in Texas, the unreimbursed cost of Medicaid (5.0 percent of total operating expenses) is 32 percent more than the unreimbursed cost of Medicaid net of DSH payments (3.8 percent of total operating expenses). In Massachusetts, however, the unreimbursed cost of Medicaid is the same as the unreimbursed cost of Medicaid net of DSH payments—1.9 percent of total operating expenses.

Moreover, consensus does not exist on whether nonprofit hospitals should add provider taxes, which are used to match federal Medicaid funds, to the unreimbursed cost of Medicaid. CHA and VHA instruct hospitals to include "Medicaid taxes" as a cost of Medicaid, describing these taxes as the provider fees that are used to match federal funds. In contrast, IRS instructs hospitals to account for these taxes as an element of charity care costs, the unreimbursed cost of Medicaid, or both, depending on the state's primary purpose for payments to hospitals from an uncompensated care pool or Medicaid DSH program. HFMA officials stated that provider taxes for Medicaid should be defined as community benefit because they are assessed for a means-tested program. CMS does not specify whether to include these taxes. AHA does not specify whether to include these taxes either, but supports the CHA and VHA guidelines. State data we obtained did not contain information that would allow us to analyze the impact of including these taxes as part of the unreimbursed cost of Medicaid.

While consensus exists to define most other activities as community benefit, the calculation of their costs using differing or nonexistent instructions may foster inconsistency. For example, the unreimbursed costs of subsidized health services may overlap with other reported community benefits. To account for this overlap, IRS, CHA and VHA, and HFMA specify that when reporting subsidized health services costs, hospitals should subtract the portion already counted as part of charity care costs and the unreimbursed costs of Medicaid. AHA does not specify whether these costs should be subtracted, but supports the CHA and VHA guidelines. CMS does not state which other activities it considers community benefit and therefore does not have guidance on measuring their costs. State data we obtained did not contain information that would allow us to analyze the effect of this overlap for measuring the cost of subsidized health services.

Since we last reported on the provision of uncompensated care by hospitals in 2005, both policymakers and the hospital industry have devoted considerable time and effort to the issue of community benefit. In particular, distinguishing between charity care and bad debt—two expenses that have historically been considered together as uncompensated care due to the difficulty of obtaining documentation necessary to distinguish patients unable to pay from those unwilling to pay—has emerged as a key technical issue whose resolution will go far in harmonizing positions in the policy debate. With the added attention to community benefit has come a growing realization of the extent of variability among stakeholders in what should count and how to measure it.
At the national level, in particular, there is substantial divergence of opinion on whether hospitals should be permitted to include bad debt and the unreimbursed cost of Medicare as community benefit. States vary considerably in the extent to which they have community benefit requirements, the nature of the requirements, and instructions on how to measure the components of community benefit. At present, determination and measurement of activities as community benefit for federal purposes are still largely matters of individual hospital discretion. Given the large number of uninsured individuals, and the critical role of hospitals in caring for them, it is important that federal and state policymakers and industry groups continue their discussion addressing the variability in defining and measuring community benefit activities. An encouraging prospect for the future is the potential availability of two national data sources derived from mandatory reporting to IRS and CMS. National data should be helpful in standardizing reporting on community benefit activities and informing public policy on the community benefit standard. However, the data from these two sources will not be available for analysis for several years, and it remains to be seen whether the data will be consistent and reliable. CMS and IRS reviewed a draft of this report. CMS stated that it did not have any comments. The director of the Exempt Organizations Division of IRS provided us with oral comments, which are summarized below. IRS stated that the report in general was accurate, although the agency noted that it did not review GAO’s analysis of state community benefit requirements for accuracy. IRS stated that the phrase “broad latitude to determine community benefit” overstates the looseness of the IRS standard and that such formulation is not supported by case law or published guidance. Specifically, IRS stated that the fact that hospitals may in practice exercise broad latitude does not make that the accepted IRS standard. In addition, IRS stated that the 1969 revenue ruling lists a specific set of factors, and court cases have closely followed the set of factors listed in that ruling. IRS stated that a correct characterization would be “some latitude” or “some flexibility,” citing Geisinger Health Plan v. Comm’r, 985 F.2d 1210, 1217 (3rd Cir. 1993). We believe that because the standard affords considerable discretion to hospitals in both the determination and measurement of activities that demonstrate community benefit for federal purposes, the IRS standard allows nonprofit hospitals broad latitude to determine community benefit. IRS commented that in the concluding observations section, the phrase “at present, determination and measurement of activities as community benefit for federal purposes are still largely matters of individual hospital discretion” was unclear as to whether the statement that follows “at present” refers to the state of things before or after IRS released the new Schedule H. IRS further stated that while in the years prior to IRS’s Form 990, Schedule H, the determination and measurement of community benefit was largely a matter of individual hospital discretion, the new Schedule H provides clear standards. 
Specifically, these clear standards cover (1) the types of activities reportable or not reportable as community benefit; (2) the fact that community benefit must be reported at cost rather than charges or otherwise; (3) the fact that community benefit must be reported by EIN (not by hospital or by system); and (4) the fact that bad debt, the unreimbursed cost of Medicare, and community-building activities cannot be included in the Part I quantifiable community benefit table, although they are reported elsewhere on Schedule H and IRS allows hospitals to explain what they think should count as community benefit. IRS stated that, going forward with Schedule H reporting requirements, there will be very little or no discretion regarding these measurement points. IRS further stated that the area where Schedule H provides individual organizations discretion is in whether the organization estimates the cost of community benefit activities using a CCR, a cost-accounting system, or a blend, so long as it is the most accurate information the organization has available. We believe that while Schedule H provides guidance with respect to the types of activities reportable as community benefit, it does not provide clear guidance on whether these activities do or do not count as community benefit for purposes of complying with IRS’s community benefit standard. Schedule H indicates that bad debt, the unreimbursed cost of Medicare, and community-building activities cannot be included in the Part I quantifiable community benefit table; however, IRS has not clearly indicated whether it considers these items as counting toward meeting the community benefit requirement. IRS noted that because its Form 990, Schedule H, requires reporting of bad debt and the unreimbursed cost of Medicare separately from items identified as community benefit, it is misleading to include these two items in the list along with charity care following the phrase “activities that benefit the community” because the phrase sounds like “community benefit,” and Schedule H does not treat these items on par with Part I community benefit items such as charity care or unreimbursed cost of Medicaid. We agree with IRS’s concern and have modified our text to clarify this distinction. IRS stated that it would be an overstatement of the law to say categorically that a hospital need not meet all five factors to qualify for tax-exempt status. IRS suggested that “the determination is based on all the facts and circumstances, and the absence of a particular factor may not necessarily be determinative,” and cited the 1969 revenue ruling and IHC Health Plans, Inc. v. Comm’r, 325 F.3d 1188 (10th Cir. 2003). We agree with IRS’s concern and have modified our text accordingly. IRS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Acting Administrator of CMS, the Commissioner of Internal Revenue, and interested congressional committees. We will also provide copies to others on request. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7114 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VIII. In conducting this study, we examined codified federal and state statutes and regulations. In addition, we analyzed state data on community benefits from California, Massachusetts, Indiana, and Texas. We selected these four states because they represent diverse areas geographically, and they collect data on nonprofit hospitals’ community benefits, which not all states maintain. We interviewed officials from the Internal Revenue Service (IRS) and the Centers for Medicare & Medicaid Services (CMS). We also interviewed representatives from the American Hospital Association (AHA); Association of American Medical Colleges; Catholic Health Association of the United States (CHA); Federation of American Hospitals; Healthcare Financial Management Association (HFMA); National Association of Children’s Hospitals; VHA, Inc.; and state hospital associations and state health officials from California, Indiana, Massachusetts, and Texas. In addition, we interviewed representatives from seven nonprofit health care systems, including health care systems in each of the four analyzed states that were referred to us by representatives from the state hospital associations. To determine the community benefit standards IRS has established, we examined relevant provisions of the Internal Revenue Code, IRS regulations, revenue rulings, and federal case law. To review states’ community benefit requirements, we defined “community benefit requirement” as a legal standard that expressly obligates a hospital to provide health care services or benefits to the community served by the hospital as a condition of maintaining tax-exempt status or qualifying as a nonprofit hospital. It is generally something that hospitals are required to do beyond their role of providing care for the sick and injured in exchange for remuneration or compensation. We considered the requirement to be one applicable to hospitals only if it either expressly referred to hospitals or expressly referred to care or services of the nature and type that one would reasonably expect to be provided by or performed primarily at acute care hospitals. We also limited our research concerning community benefit requirements to acute care, general hospitals. We looked only for codified state statutes and regulations that impose this type of requirement. If a statute or regulation described an activity that would fall into one of the commonly recognized “community benefit” categories identified by IRS, we considered it to present a community benefit activity. We searched only for state statutes or regulations that require hospitals to perform relevant activities in order to maintain tax exemption or nonprofit status. Thus, we excluded statutes and regulations that require hospitals to perform activities that benefit the community as a condition of obtaining hospital licensure, or that have the indirect effect of benefiting the community, such as state analogues to the Emergency Medical Treatment and Active Labor Act and state vaccination provisions. We excluded standards that are very general, such as Hawaii’s requirement that hospitals be “maintained to serve, and…do serve the public” in order to be exempt from property tax, although we did include requirements that specified that nonprofit hospitals do more than provide health care in exchange for compensation or remuneration. 
An example of the latter is Wyoming, which provides that “[t]he fundamental basis for [exemption from ad valorem taxation] is the benefit conferred upon the public by schools, orphan asylums and hospitals, and the consequent relief, to some extent, of the burden upon the state to educate, care and advance the interests of its citizens.” We limited our search to codified state statutes and regulations. In performing our search of state codes and regulations, we used several search terms, namely “community benefit,” “charity care,” “gift to the community,” and “community service plan,” but we did not limit our list of states with community benefit requirements to states that use only these terms. We then searched selected parts of state codes and administrative codes, limiting our search to the subject areas of hospitals, public health, tax, and corporations, to find community benefit requirements that do not use readily searchable terms. If we found one provision in a state code or regulation that imposed a community benefit requirement, we did not continue searching that state’s authorities for additional or related provisions. Some state codes and regulations provided penalties for failing to comply with community benefit requirements. We noted penalty provisions only if the penalty provision made a direct and express reference to failure to comply with the community benefit requirement as the basis for the penalty. We did not include in our scope state statutes and regulations that address community benefits but do not amount to requirements. These states include those whose statutes explicitly state that having a community benefits program is voluntary (Connecticut) and those that require that hospitals report on the community benefits that they provide but do not actually require that they provide any community benefits (Connecticut, Georgia, Minnesota, Nevada, and Oregon). Although we did not include these states in our count, we noted them in the report. Due to our selection criteria, we included some states that organizations such as CHA, VHA, and Community Catalyst do not list in their compendia of states with community benefit laws, guidelines, and standards, and excluded some states that those organizations do include. We chose to use a broader definition of community benefit requirement, one that encompasses state statutes and regulations that may not use common community benefit terms, but nonetheless encompasses the same goals and types of activities as states that do use those terms. This reasoning led us to include Alabama, Colorado, Mississippi, North Dakota, and Wyoming. We excluded provisions dealing with hospital conversions, mergers, or sales. These provisions often require that hospitals going through one of these processes take steps to ensure that levels of community benefits are maintained or safeguarded. We believe that such provisions should not be included in a general compendium of state community benefit requirements. This means that we excluded some provisions that actually use the term “community benefit” and may even provide a detailed definition. We did this because such provisions apply in a limited context. They apply only to a limited number of hospitals (those that are going through conversion, merger, or sale), and they apply for a limited amount of time. 
We excluded provisions granting tax exemption by merely incorporating by reference the standard contained in section 501(c)(3) of the Internal Revenue Code and provisions that used section 501(c)(3)-like language restricting nonprofit hospital activities. However, we did include provisions that by their language incorporated the 501(c)(3) standard and had a reporting requirement. An example of the latter is Idaho, which grants property tax exemption only to hospitals that have received tax exemption from IRS pursuant to section 501(c)(3). In addition, Idaho hospitals granted tax exemption must annually submit a community benefits report. An example of the former is Arizona, which grants tax exemption to organizations that are exempt from federal income tax. To examine what activities are defined as community benefits among the standards and guidance used by nonprofit hospitals, we reviewed the standards and guidance of federal agencies and industry groups. To examine the effects of these standards and guidance on reported community benefit, we analyzed 2006 state data from California, Indiana, Massachusetts, and Texas. The state data were the most recent available at the time of our analysis. We limited our analysis to nonprofit, nongovernmental, acute care, general hospitals that reported gross patient revenues and total operating expenses. We calculated and compared a variety of hospital expenses, including charity care costs, bad debt, unreimbursed costs of government health care programs, and the costs of other activities that benefit the community, as percentages of total operating expenses. Charity care is generally defined as care provided to patients whom the hospital deems unable to pay all or a portion of their bills. Bad debt is generally defined as the uncollectible payment that the patient is expected to, but does not, pay. The unreimbursed costs of government health care programs are generally defined as the shortfall created when a facility receives payments that are less than the costs of caring for public program beneficiaries. Other activities that benefit the community include health professions education and medical research. Not all of the four states we examined had data on all of these expenses; therefore, we calculated each expense as a percentage of total operating expenses whenever possible. We reduced charges to costs where possible in the data from all four states using cost-to-charge ratios. We did not reduce bad debt expenses because we found that hospitals did not consistently report bad debt as costs or charges. To examine practices nonprofit hospitals use to measure community benefit activities, we reviewed the standards and guidance from IRS, CMS, AHA, CHA and VHA, and HFMA. To examine the effects of these practices on reported community benefit, we analyzed 2006 state data from California, Indiana, Massachusetts, and Texas. We compared the different ways hospitals calculate expenses, including charity care costs and the unreimbursed cost of Medicaid, as percentages of total operating expenses. Not all of the four states had data to compare the different practices to measure all of these expenses; therefore, we calculated each expense as a percentage of total operating expenses whenever possible. We assessed the reliability of the state data from California, Indiana, Massachusetts, and Texas in two ways. First, we performed tests of data elements for all four states. 
For example, we examined the values for total operating expenses and gross patient revenues to determine whether these data were complete and reasonable. Second, we interviewed state officials knowledgeable about the data and reviewed documentation related to the data. We determined that all four states employed various data consistency checks, including outlier and trend analysis and targeted follow-up with hospitals on a case-by-case basis, to assess the quality of the data they collected. We determined that the data we used in our analyses were sufficiently reliable for our purposes. We conducted our work from July 2007 through August 2008 in accordance with generally accepted government auditing standards. 
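As an illustration of the charge-to-cost reduction described in the scope and methodology above, the following minimal sketch (in Python) shows one way such a reduction could be applied. The use of a hospital-wide ratio of total operating expenses to gross patient revenue as the cost-to-charge ratio, the function names, and all dollar figures are assumptions for illustration only; they are not drawn from the states' data or the report's actual calculations.

def cost_to_charge_ratio(total_operating_expenses, gross_patient_revenue):
    # One common approximation of a hospital-wide CCR: total costs divided
    # by total charges (an assumption here, not necessarily the report's method).
    return total_operating_expenses / gross_patient_revenue

def expense_as_pct_of_operating_expenses(charges, ccr, total_operating_expenses):
    # Reduce charges to an estimated cost, then express the result as a
    # percentage of total operating expenses, as in the comparisons above.
    estimated_cost = charges * ccr
    return 100 * estimated_cost / total_operating_expenses

# Hypothetical hospital: $900 million in gross charges, $300 million in
# operating expenses, and $27 million of charity care stated at charges.
ccr = cost_to_charge_ratio(300_000_000, 900_000_000)                     # about 0.33
pct = expense_as_pct_of_operating_expenses(27_000_000, ccr, 300_000_000)
print(f"Charity care cost: {pct:.1f} percent of total operating expenses")  # about 3.0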
[Appendix table: examples of activities included in and not included in the definition of community benefit, covering donations and grants, community benefit operations and volunteer programs, community-building activities, community health improvement services, health professions education, research, and subsidized health services.]
Activities that benefit the community, as defined by the standards and guidance used by nonprofit hospitals, generally fall into one of four categories: charity care, bad debt, unreimbursed costs of government health care programs, and other activities that benefit the community. As of March 2008, 15 states require that hospitals provide community benefits in order to receive tax exemption or achieve nonprofit status. However, state community benefit requirements vary greatly in scope and level of detail (see app. IV). Of the 15 states with community benefit requirements, 10 have detailed community benefit requirements. We considered states to provide a “detailed” definition if they provided some combination of the following: a definition of community benefit, requirements for a community benefits plan that sets forth how the hospital will provide community benefits, reporting requirements, and penalties for noncompliance. These states typically set forth a detailed definition of community benefit, specifying numerous categories of activities that qualify, and are consistent with the level of detail of community benefit definitions used by the Catholic Health Association of the United States and other similar entities (see app. II). Illinois, for example, includes the unreimbursed cost of providing charity care, language assistant services, government-sponsored indigent health care, donations, volunteer services, education, government-sponsored program services, research, subsidized health services, and collecting bad debts. Illinois specifically excludes the cost of paying taxes or other governmental assessments. 
Maryland defines community benefit as “an activity that is intended to address community needs and priorities primarily through disease prevention and improvement of health status, including...[h]ealth services provided to vulnerable or underserved populations such as Medicaid, Medicare, or Maryland Children’s Health Program enrollees...[f]inancial or in-kind support of public health programs...[d]onations of funds, property, or other resources that contribute to a community priority...[h]ealth care cost containment activities; and...[h]ealth education, screening, and prevention services.” These 10 states also tend to have very detailed instructions on how community benefits should be provided and reported. They may include a description of the required elements of and the process by which a hospital should compose its community benefits plan and the required elements to be provided in a hospital’s annual report to the relevant authority. A typical example is California, which requires each of its nonprofit hospitals to have a mission statement that requires the hospital’s policies to integrate and reflect the public interest in meeting its responsibilities as a nonprofit organization; complete a community needs assessment in consultation with community groups and government officials; update its community needs assessment every 3 years; adopt and annually update a community benefits plan for providing community benefits either alone or in conjunction with other entities; and annually submit its community benefits plan, including a description of the activities undertaken and the economic value of community benefits provided. The remaining five states with community benefit requirements have provisions that are less detailed. Alabama requires that charity care constitute at least 15 percent of a hospital’s business in order for it to be exempt from property tax. Wyoming provides that “[t]he fundamental basis for [exemption from ad valorem taxation] is the benefit conferred upon the public by schools, orphan asylums and hospitals, and the consequent relief, to some extent, of the burden upon the state to educate, care and advance the interests of its citizens.” States such as Wyoming do not specify activities that their nonprofit hospitals must provide, but their provisions make clear that, in order to receive tax exemption or achieve nonprofit status, hospitals must provide benefit to the community. In contrast to the 10 detailed states, these 5 states typically either require the provision of a certain amount of charity care without mentioning other categories of community benefit or do not give guidance as to what counts as a community benefit. For the latter states, such as Wyoming, it is not always clear what types of community benefit activities would fulfill a hospital’s obligations. The remaining 36 states do not have community benefit requirements in codified statutes or regulations that hospitals must meet to qualify for tax-exempt or nonprofit status. Among these states are three groups of states that address community benefit in some way but do not have “community benefit requirements” as we define that term. Some states apply their community benefits provisions to all hospitals, such as in the context of hospital licensure, rather than to tax exemption or nonprofit status (see app. V). Examples of states that fall into this category are Massachusetts, New Mexico, and Rhode Island, and they require all hospitals, both for-profit and nonprofit, to provide some form of community benefits. 
A second group requires that hospitals periodically report to the relevant authority the community benefits that they provide but does not require that hospitals actually provide any community benefits (see app. VI). A third group discusses community benefit in sources other than codified statutes or regulations, such as attorney general guidelines or property tax exemption standards (see app. VII). One state, Utah, discusses community benefit in a set of standards of practice for property tax exemptions and through its case law. Although Massachusetts has a statute requiring community benefits for licensure purposes, the bulk of its community benefit discussion is found in a set of attorney general guidelines. We did not include these groups of states in our count of states with community benefit requirements, and we provide information on these states as examples rather than as the product of a comprehensive analysis of state sources. Hospitals may be penalized if they fail to comply with community benefit requirements. Of the 15 states with community benefit requirements, 4 have explicit penalties for failure to comply and 11 states do not specify a penalty. Examples of states with explicit penalties include Indiana, Maryland, and Texas, where civil penalties may be assessed against nonprofit hospitals that fail to submit their annual reports in a timely fashion. In the 11 states that do not specify a penalty, if the requirement is tied to tax exemption, a nonprofit hospital could still be denied tax exemption for a period of time. For states without community benefit requirements but with community benefit provisions tied to hospital licensure requirements, a hospital that has not complied with the community benefit provisions will not be licensed (or its license may be suspended or revoked). In addition, states may include explicit penalties for failure to comply with community benefit provisions tied to hospital licensure requirements. For example, in Rhode Island, a state that applies its community benefits provisions to all hospitals through licensure requirements, failure to comply with statewide standards for community benefits may result in criminal penalties: the Superior Court may, after notice and opportunity for a prompt and fair hearing, impose a prison term of up to 5 years for a person who knowingly violates or fails to comply with the requirements or willingly or knowingly gives false or incorrect information in connection with its licensure requirements. Most states do not specify a minimum quantity of community benefits that must be provided in order to satisfy requirements. Five states require that hospitals provide a specified amount of community benefit. Alabama requires that “[t]o be exempt from ad valorem taxation, the treatment of charity patients must constitute at least 15 percent of the business of the hospital,” while Texas requires that its hospitals comply with one or more of three standards: a level reasonable in relation to community needs; at least 100 percent of its tax-exempt benefits, excluding federal income tax; or at least 5 percent of its net patient revenue (in which case charity care and government-sponsored indigent care must be at least 4 percent of net patient revenue). In other states, the required minimum quantity is not a specified dollar amount or percentage. For example, Mississippi requires that, to be exempt from property tax, hospitals must maintain at least one ward for charity patients. 
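To make the Texas test described above concrete, the following minimal sketch (in Python) encodes the three alternative standards as simple checks. The hospital figures are hypothetical, and treating the "reasonable in relation to community needs" standard as a yes/no input is an assumption for illustration; the sketch does not reproduce the statutory text or any state's actual review process.

def meets_texas_standard(community_benefits, charity_plus_indigent_care,
                         tax_exempt_benefits, net_patient_revenue,
                         reasonable_for_community_needs):
    # Standard 1: a level reasonable in relation to community needs
    # (a qualitative judgment, represented here as a boolean input).
    if reasonable_for_community_needs:
        return True
    # Standard 2: at least 100 percent of tax-exempt benefits,
    # excluding federal income tax.
    if community_benefits >= tax_exempt_benefits:
        return True
    # Standard 3: at least 5 percent of net patient revenue, with charity
    # care and government-sponsored indigent care at least 4 percent.
    return (community_benefits >= 0.05 * net_patient_revenue and
            charity_plus_indigent_care >= 0.04 * net_patient_revenue)

# Hypothetical hospital: $12 million in community benefits (6 percent of
# $200 million net patient revenue), including $9 million of charity and
# government-sponsored indigent care (4.5 percent).
print(meets_texas_standard(12_000_000, 9_000_000, 15_000_000, 200_000_000,
                           reasonable_for_community_needs=False))  # True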
West Virginia requires that charitable hospitals provide free and below-cost necessary medical services in an amount determined by their boards of trustees consistent with their ability to do so. In addition to states that specify a minimum quantity of community benefits that must be provided in order to satisfy community benefit requirements, the remaining states—those without minimum quantity requirements and those without community benefit requirements as we define that term—tend to require the submission of community benefits plans, annual reports, or both to relevant state authorities. Even without an explicit requirement to provide community benefits, these provisions may bring a measure of accountability as to quantity, since relevant authorities have an opportunity to review hospital activities. An example of a state without a minimum quantity requirement is California, which provides that hospitals must annually report on the economic value of community benefits provided in furtherance of their community benefits plans.
[Appendix tables IV through VII: state-by-state summaries listing each state's community benefit provisions, the activities included in its definition of community benefit, any penalties for noncompliance, and the statutory or regulatory citations. Appendix IV covers the 15 states with community benefit requirements (Alabama, California, Colorado, Idaho, Illinois, Indiana, Maryland, Mississippi, New Hampshire, New York, North Dakota, Pennsylvania, Texas, West Virginia, and Wyoming); appendix V covers states that tie community benefit provisions to hospital licensure (Massachusetts, New Mexico, and Rhode Island); appendix VI covers states that require reporting of community benefits without requiring their provision (Connecticut, Georgia, Minnesota, Nevada, and Oregon); and appendix VII covers states that address community benefit in other sources (the Massachusetts attorney general guidelines and the Utah property tax exemption standards of practice).]
In addition to the contact named above, Jenny Grover and Thomas Walke, Assistant Directors; Joanna L. Hiatt; Xiaoyi Huang; Jessica T. Lee; Drew Long; Kevin Milne; and Lisa Motley made major contributions to this report.
Nonprofit hospitals qualify for federal tax exemption from the Internal Revenue Service (IRS) if they meet certain requirements. Since 1969, IRS has not specified that these hospitals have to provide charity care to meet these requirements, so long as they engage in activities that benefit the community. Many of these activities are intended to benefit the approximately 47 million uninsured individuals in the United States who need financial and other help to obtain medical care. Previous studies indicated that nonprofit hospitals may not be defining community benefit in a consistent and transparent manner that would enable policymakers to hold them accountable for providing benefits commensurate with their tax-exempt status. GAO was asked to examine (1) IRS's community benefit standard and the states' requirements, (2) guidelines nonprofit hospitals use to define the components of community benefit, and (3) guidelines nonprofit hospitals use to measure and report the components of community benefit. To address these objectives, GAO analyzed federal and state laws; the standards and guidance from federal agencies and industry groups; and 2006 data from California, Indiana, Massachusetts, and Texas. GAO also interviewed federal and state officials, and industry group representatives. IRS stated that the report in general was accurate, but noted several concerns regarding the description of the community benefit standard. CMS did not have any comments. IRS's community benefit standard allows nonprofit hospitals broad latitude to determine the services and activities that constitute community benefit. Furthermore, state community benefit requirements that hospitals must meet in order to qualify for state tax-exempt or nonprofit status vary substantially in scope and detail. For example, 15 states have community benefit requirements in statutes or regulations, and 10 of these states have detailed requirements. GAO found that among the standards and guidance used by nonprofit hospitals, consensus exists to define charity care, the unreimbursed cost of means-tested government health care programs (programs for which eligibility is based on financial need, such as Medicaid), and many other activities that benefit the community as community benefit. However, consensus does not exist to define bad debt (the amount that the patient is expected to, but does not, pay) and the unreimbursed cost of Medicare (the difference between a hospital's costs and its payment from Medicare) as community benefit. Variations in the activities nonprofit hospitals define as community benefit lead to substantial differences in the amount of community benefits they report. Even if nonprofit hospitals define the same activities as community benefit, they may measure the costs of these activities differently, which can lead to inconsistencies in reported community benefits. For example, standards and guidance vary on the level at which hospitals may report their community benefit (e.g., at an individual hospital level or a health care system level) and the method hospitals may use to estimate costs of community benefit activities. State data demonstrate that differences in how nonprofit hospitals measure charity care costs and the unreimbursed costs of government health care programs can affect the amount of community benefit they report. With the added attention to community benefit has come a growing realization of the extent of variability among stakeholders in what should count and how to measure it. 
At present, determination and measurement of activities as community benefit for federal purposes are still largely a matter of individual hospital discretion. Given the large number of uninsured individuals, and the critical role of hospitals in caring for them, it is important that federal and state policymakers and industry groups continue their discussion addressing the variability in defining and measuring community benefit activities.
The CDBG program has two basic funding streams (see fig. 1). After funds are set aside for purposes such as the Indian CDBG program and allocated to insular areas, the annual appropriation for CDBG formula funding is split so that 70 percent is allocated among eligible metropolitan cities and urban counties (entitlement communities) and 30 percent among the states to serve non-entitlement communities. Entitlement communities are (1) principal cities of metropolitan areas, (2) other metropolitan cities with populations of at least 50,000, and (3) qualified urban counties with populations of at least 200,000 (excluding the population of entitled cities). HUD distributes funds to entitlement communities and states using a dual formula system in which grants are calculated under two different weighted formulas and grant recipients receive the larger of the two amounts. The formulas consider factors such as population, poverty, housing overcrowding, the age of the housing, and the extent to which an area’s growth lags behind that of other areas (growth lag). HUD ensures that the total amount awarded is within the available appropriation by reducing the individual grants on a pro rata basis. Entitlement communities and states can have more than one agency administer parts of the CDBG program, but one agency must be designated the “lead” (typically a Department of Community Development or similar entity) and single point of contact with HUD. Entitlement communities may carry out activities directly or, in the case of urban counties, award funds to other units of local government to carry out activities on their behalf. In addition, entitlement communities may award funds to subrecipients to carry out agreed-upon activities. Entitlement communities are subject to very few requirements relating to distribution of their CDBG funds. As long as entitlement communities fund projects that are eligible and meet a national objective, submit required plans and reports, and follow their stated citizen participation plans, they have broad discretion over what they fund and how they fund it. Unlike entitlement communities, states must distribute funds directly to recipients, which are local units of government (non-entitlement cities and counties). The states’ major responsibilities are to (1) formulate community development objectives, (2) decide how to distribute funds among non-entitlement communities, and (3) ensure that recipient communities comply with applicable state and federal laws and requirements. Grant recipients are limited to 26 eligible activities for CDBG funding. For reporting purposes, HUD classifies the activities into eight broad categories—acquisition, administration and planning, economic development, housing, public improvements, public services, repayments of section 108 loans, and “other” (includes nonprofit organization capacity building and assistance to institutions of higher learning). Recipients may use up to 20 percent of their annual grant plus program income on planning and administrative activities and up to 15 percent of their annual grant plus program income on public service activities. Additionally, the act requires that recipients certify that they will use at least 70 percent of their funds for activities that principally benefit low- and moderate-income people over 1, 2, or 3 years, as specified by the recipient. 
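The spending limits just described can be illustrated with a minimal sketch (in Python). The grantee figures are hypothetical, and the checks mirror only the percentages stated above (20, 15, and 70 percent); they are not the full regulatory tests HUD applies.

def check_cdbg_limits(annual_grant, program_income, planning_admin_spending,
                      public_service_spending, total_spending,
                      low_mod_benefit_spending):
    # The 20 and 15 percent caps apply to the annual grant plus program income.
    base = annual_grant + program_income
    return {
        "planning_admin_within_20_percent": planning_admin_spending <= 0.20 * base,
        "public_services_within_15_percent": public_service_spending <= 0.15 * base,
        # The 70 percent certification may cover a 1-, 2-, or 3-year period;
        # the totals here are assumed to already cover the chosen period.
        "low_mod_benefit_at_least_70_percent":
            low_mod_benefit_spending >= 0.70 * total_spending,
    }

# Hypothetical community: $10 million grant plus $1 million program income.
print(check_cdbg_limits(annual_grant=10_000_000, program_income=1_000_000,
                        planning_admin_spending=2_000_000,    # about 18 percent
                        public_service_spending=1_500_000,    # about 14 percent
                        total_spending=11_000_000,
                        low_mod_benefit_spending=8_000_000))  # about 73 percent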
Generally, an activity is considered to principally benefit low- and moderate-income people if 51 percent or more of those benefiting from the activity are of low- or moderate-income. To receive its annual CDBG entitlement grant, a recipient must submit a 3 to 5-year consolidated plan, a comprehensive planning document and application for funding, to HUD for approval. This document identifies a recipient’s goals, which serve as the criteria against which HUD will evaluate a recipient’s performance annually. The consolidated plan must include a citizen participation plan for obtaining public input on local needs and priorities, informing the public about proposed activities to be funded, and obtaining public comments on performance reports. Annually, recipients must submit an action plan that identifies the activities they will undertake to meet the goals and objectives identified in their consolidated plan as well as an evaluation of past performance and a summary of the citizen participation process. Moreover, on an annual basis, recipients must submit a consolidated annual performance and evaluation report (CAPER) that compares proposed and actual outcomes for each goal and objective in the consolidated plan and, if applicable, explains why the recipient did not make progress in meeting the goals and objectives. Similar to entitlement communities, states must submit their consolidated plans, annual action plans, and performance and evaluation reports (PER). However, the states’ action plans must describe methods for distributing funds to local governments to meet the goals and objectives in their consolidated plans instead of a list of activities as provided by entitlement recipients. HUD’s Office of Community Planning and Development (CPD) administers the CDBG program through program offices at HUD headquarters and 43 field offices located throughout the United States. Staff in the headquarters offices set program policy, while staff in the 43 field offices monitor entitlement recipients directly and monitor the states’ oversight of non-entitlement recipients. A CPD director heads each unit in the field offices. CPD field staff are responsible for grant management activities that include annual review and approval of consolidated plans and action plans, review of annual performance reports, preparation and execution of grant agreements, closeout activities, and technical assistance. Reflecting the flexibility of the CDBG program, entitlement communities used various methods to distribute their funds. For example, most of the medium and large communities in our sample of 20 entitlement communities used competitive processes for a portion of their CDBG funds. These communities aligned award decisions with local priorities and, in some cases, elected officials and the budget process factored strongly in funding decisions. Some local officials told us they could adjust their funding priorities and practices from year to year as needed. To solicit public input and communicate processes and award decisions, all the communities in our sample met the program requirement to hold at least two public hearings, and most took additional steps, including holding multiple community meetings, forming citizen advisory committees, conducting needs assessment surveys, and making information available online. 
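The allocation rules and spending limits described above (the dual formula with pro rata reduction, the 20 and 15 percent caps, and the 51 and 70 percent low- and moderate-income benefit tests) can be illustrated with a short sketch. The following Python example is only a simplified rendering of those rules; the function names, dollar amounts, and the two formula inputs are hypothetical, not HUD's actual formulas.

```python
# Simplified sketch of the CDBG allocation rules described above.
# All figures and formula inputs are hypothetical illustrations, not HUD's actual formulas.

def dual_formula_grant(formula_a_amount, formula_b_amount):
    """Each grantee receives the larger of the two formula amounts."""
    return max(formula_a_amount, formula_b_amount)

def pro_rate(grants, appropriation):
    """Reduce individual grants pro rata so the total fits the available appropriation."""
    total = sum(grants.values())
    if total <= appropriation:
        return grants
    factor = appropriation / total
    return {name: amount * factor for name, amount in grants.items()}

def within_spending_caps(annual_grant, program_income, planning_admin, public_services):
    """Check the 20 percent planning/administration and 15 percent public services caps."""
    base = annual_grant + program_income
    return planning_admin <= 0.20 * base and public_services <= 0.15 * base

def principally_benefits_low_mod(persons_benefiting, low_mod_persons):
    """An activity principally benefits low/mod-income people if they are at least 51 percent of beneficiaries."""
    return low_mod_persons / persons_benefiting >= 0.51

def meets_overall_benefit_test(total_spent, low_mod_spent):
    """At least 70 percent of funds over the recipient's chosen 1-, 2-, or 3-year period
    must go to activities that principally benefit low/mod-income people."""
    return low_mod_spent / total_spent >= 0.70

if __name__ == "__main__":
    # Hypothetical grantees with amounts computed under two different formulas.
    raw = {
        "City A": dual_formula_grant(2_400_000, 2_100_000),
        "County B": dual_formula_grant(900_000, 1_050_000),
    }
    print(pro_rate(raw, appropriation=3_000_000))
    print(within_spending_caps(1_000_000, 50_000, planning_admin=200_000, public_services=150_000))
    print(principally_benefits_low_mod(persons_benefiting=400, low_mod_persons=230))
    print(meets_overall_benefit_test(total_spent=5_000_000, low_mod_spent=3_900_000))
```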
Most of the representatives of community organizations with whom we spoke thought the cities clearly communicated their distribution practices, although some thought that newer or less sophisticated applicants might have more difficulty obtaining funding. Our interviews with 20 entitlement communities demonstrated that the sample grantees took advantage of the CDBG program’s flexibility to distribute their funds in the ways that the grantees felt fit their local needs and circumstances. Most of the medium and large entitlement communities with which we spoke had some level of competitive process for distributing CDBG funds, especially for public services such as childcare, senior services, or employment training. Some communities required government agencies to apply through the same process as nonprofits and other subrecipients. In several communities that distributed funds competitively, the CDBG administrators issued a request for proposal (RFP) describing the local funding priorities, eligible uses for CDBG funds, and, in some cases, specific selection criteria and related points for meeting those criteria. Processes for evaluating applications varied, but commonly involved some level of review by local staff, sometimes followed or accompanied by a review by program experts or a citizen committee. In several communities, the mayor, the local governing board (such as the city council), or both had input and final approval authority for funding recommendations. For example, in Los Angeles, CDBG administrative staff conducted a high-level review of all applications to ensure they were eligible and met a national objective. The departments that administered individual categories of funds then ran their own RFP processes and assembled teams of subject experts and department staff to evaluate proposals. The mayor and city council also had the opportunity to review projects that the departments recommended for funding. The city council issued the final approval following a public hearing at which council members, community organizations, and members of the general public could provide input on the funding recommendations. (See app. II for information on the methods used by all of the entitlement communities in our sample.) A few of the small and medium-sized entitlement communities in our sample used large proportions of their CDBG funds for housing activities or public works projects, and they directly spent those monies through local governmental departments. For instance, Lincoln Park, Michigan, planned to spend more than 70 percent of its 2010 CDBG allocation on infrastructure projects such as streets and utilities, which the public services department would carry out or bid to private contractors. The Lincoln Park official we interviewed stated that capital improvement needs, such as streets and utilities projects, were the city’s highest priority for CDBG funds due to a lack of other funds to carry out such projects. South Gate, California planned to direct about 50 percent of its 2010 allocation to the parks department, based on the results of a series of public and city staff meetings at which citizens and city officials agreed that their parks were the highest-priority need. In South Gate and a few other communities, the government agencies still had to submit letters of request or formal applications. In cases in which smaller communities distributed funds to subrecipients, they often used less formal application and review procedures than some of the larger communities. 
For example, in Bismarck, North Dakota, which primarily distributed its funds to subrecipients, the CDBG administrator sent application packages to organizations on the city's mailing list and reached out directly to organizations that she knew might have a CDBG-eligible funding need. A committee of two city officials and a nonprofit official reviewed the applications but did not use a formal ranking system. Gloucester Township, New Jersey, awarded funds based on a discussion between the council and mayor on how proposals met overall township needs. Some of the smaller communities in our sample did not receive applications from many more organizations than they could afford to fund. For instance, Bismarck funded 16 of 22 applications for 2010 and Dover, Delaware, funded 6 of 9 applications. Officials from Gloucester Township and Deltona, Florida, stated that they rarely rejected applications. In a few entitlement communities, governmental departments that administered some funds in-house used first-come, first-served, or "rolling" application processes to make small awards for certain activities. For example, Cleveland's community development department used such a process to award funds for housing rehabilitation and storefront renovation programs. Gloucester Township's grants office planned to award several small home improvement loans to low- and moderate-income residents for abatement of code violations and emergency repairs. Applicants had to meet certain income thresholds to qualify for the no-interest loans. Two of the entitlement communities in our sample participated in consortia with other local entitlement communities to more effectively target CDBG and other HUD funds, and reduce administrative costs. Specifically, the city of Gresham, Oregon, is a member of the Portland Consortium, which also includes the city of Portland and Multnomah County. While the two cities and the county each receive their own CDBG allocations and distribute the funds through separate processes, Gresham officials stated that the consortium allows them to prioritize needs on a metropolitan or regional level rather than separately for each jurisdiction. Likewise, the city of Sarasota, Florida, joined Sarasota County and the cities of North Port and Venice in a consortium that allows them to administer housing and community development programs and allocate resources based on the entire county's needs and not just individual jurisdictions. According to the Sarasota Consortium's 2005-2010 consolidated plan, this agreement creates a standard set of rules for housing and community development programs for all of the jurisdictions and reduces administrative costs by running programs through one central office. The three counties in our sample of entitlement communities used three different methods to distribute their CDBG funds, based on size and the capacity of their participating localities to administer the funds. For example, Los Angeles County, California, the country's largest urban county with a population of about 10 million people, distributed its funds to participating cities and unincorporated areas by formula. In turn, the participating cities could distribute their funds through competition or other methods, while the county's community development department worked with the county's five district supervisors to identify projects for funding in the unincorporated areas. Dane County, Wisconsin, ran a competitive process for most of its funds, citing the importance of providing standardized treatment across municipal, nonprofit, and for-profit applicants because of the high level of competing needs in the county.
Greenville County, South Carolina, contracted with a countywide redevelopment authority to administer the funds. The redevelopment authority used a formula to determine the award amounts for the five local municipalities and a large unincorporated area. However, it retained control over the distribution method for all the funds, with some awards made competitively to nonprofit subrecipients and housing developers and others bid to contractors. Officials from a majority of the entitlement communities in our sample noted that they based their funding priorities on various assessments of local needs, and these priorities influenced the selection of CDBG projects. They funded projects based on priorities identified in their consolidated plans, other strategy documents, or citizen input. For instance, South Gate, California’s consolidated planning process resulted in the city’s focus on funding its parks. In addition, an official from Greenville County stated that the majority of funding for the county’s unincorporated area went to “target neighborhoods,” which had developed master plans to meet their most pressing needs. In some of the larger entitlement communities we interviewed, elected officials’ priorities also factored strongly in the distribution process. Officials from Chicago, Detroit, and New York reported that their CDBG funding processes were integrated with their local budget processes. Therefore, CDBG spending priorities often were viewed in the context of mayoral and city council priorities, and the city’s current fiscal strength. Officials in Los Angeles stated that city council members sometimes made changes to proposed economic development or infrastructure projects based on their knowledge of what was needed in their communities. They added that council members occasionally tried to fund projects outside of the competitive process. In those cases, the administrative department reviewed the requests to ensure they met federal and city CDBG priorities. In addition, an official from Philadelphia stated that for 36 years, a consensus had existed between the mayor and city council that housing activities were the city’s highest CDBG priority. Some communities drew on the CDBG program’s flexibility and revised their funding priorities or practices to adapt to local circumstances. They told us that severe (or catastrophic) weather events, economic conditions, or internal reviews had caused them to change their funding priorities or distribution methods. For example, the Deltona CDBG administrator stated that the city was considering revising its consolidated plan to focus on repairing the damage from recent hurricanes and other storms. Los Angeles officials stated that the recession had caused them to focus more heavily on family self-sufficiency and foreclosure prevention. In addition, Los Angeles recently changed the way it funded public service activities in response to a 2008 report by the city controller that noted a duplication of efforts, service gaps, a lack of competition in procurement, and other problems with the city’s anti-gang strategy and related human services. 
Rather than giving grants to several individual organizations for separate projects, the city now funds the FamilySource Program, described in the RFP as “…an infrastructure for delivering coordinated, outcome-driven services to the most vulnerable city residents.” Los Angeles officials stated that this shift resulted in a denial of renewal funding for some organizations that were not connected to the city’s new continuum of care. As discussed above, many entitlement communities also awarded funds to subrecipients. In choosing which organizations to fund, some entitlement communities told us that they tended to fund the same subrecipients annually; this was seemingly due in large part to capacity considerations. An official in San Francisco stated that, for several years, the city’s pool of subrecipients had included a group of strong performers that received funding annually. At least half of the officials we interviewed reported that they considered applicants’ capacity to effectively carry out program activities and meet the administrative requirements that accompanied CDBG funding. Several communities considered past performance in evaluating applicants, thus tending to give an advantage to incumbent organizations. Officials in Los Angeles and San Francisco noted that smaller organizations with limited administrative infrastructure might find it challenging to meet CDBG application and administrative requirements. Officials told us that new organizations were able to obtain funding in some communities. For instance, officials in Bismarck, San Francisco, and six other communities specifically mentioned that they encouraged new applicants to apply or that they funded a small number of new organizations in addition to some repeat subrecipients. San Francisco officials stated that they encouraged smaller organizations to coordinate with other providers or to find a fiscal agent who could focus on administration of the funds, allowing the groups to focus on service delivery. All of the entitlement communities in our sample reported that they held at least two public hearings annually and some used citizen advisory committees, surveys, and other outreach methods to involve and inform the public about their distribution of CDBG funds. As previously discussed, all entitlement communities must have a citizen participation plan. As part of the citizen participation requirements, they must hold at least two annual public hearings. These hearings are intended to provide opportunities for citizens to voice concerns about how CDBG funds are distributed and help ensure that the process is transparent, among other purposes. Several of the entitlement community representatives with whom we spoke told us that attendance at public hearings generally was low. For instance, the Detroit administrator reported that only five or six people attended the public hearings. The CDBG administrator in Gloucester Township stated that the public hearings were poorly attended and no one questioned how the township spent its CDBG funds. This official equated a lack of comments or complaints to a lack of interest in the township’s funding process and local priorities and stated that it may be due in part to the small amount of funding available. Officials from Los Angeles and Dane County stated that attendance at hearings varied based on factors such as funding availability and the relevance and timeliness of the agenda items for local residents and community groups. 
In particular, a Dane County official noted that attendance was high for public meetings following recent flooding, when the county had a combination of CDBG disaster assistance funding and state funding available to assist flood victims. Some entitlement community representatives with whom we spoke told us that they took additional measures to involve local residents in the CDBG process and ensure transparency (see table 1). For example, several entitlement communities in our sample reported holding public meetings in addition to the two required public hearings to provide more opportunities for public involvement. For instance, Houston officials stated that they hosted a series of meetings with each major community and council district about proposed capital improvements, during which they addressed CDBG planning and processes for distributing funds. Los Angeles County officials stated that, for the unincorporated areas, they conducted five community meetings annually throughout the county and held them in the evenings so that residents would be more likely to attend. They reported that participation was higher using this method than when they used to hold just the two required public hearings. More than half of the entitlement communities with which we met had some form of citizen advisory committee to solicit and provide citizen input on local priorities. The committees typically comprised local residents who volunteered or were nominated or appointed by CDBG administrative staff, the mayor or other government executive, local council members, or community organizations. Because these committees often comprised representatives from a variety of business sectors and neighborhoods, they could provide a more knowledgeable perspective on the needs and circumstances of different communities and applicants than CDBG administrative staff alone could provide. A few of the committees also included one or more representatives of local government, such as the city council. Some communities also had the committees review CDBG applications and either recommend specific projects for funding or comment on proposed recommendations. For example, the Chicago Community Development Advisory Committee’s three subcommittees reviewed and commented on program criteria, reviewed subrecipient proposals, and made funding recommendations in collaboration with city staff. In San Francisco, the Citizens Committee on Community Development could comment on the CDBG-administering office’s initial recommendations before they went to the mayor and board of supervisors. Similarly, after an initial eligibility screening by city staff, the Citizen Advisory Committee in South Gate, California, reviewed and scored applications (based in part on applicants’ oral presentations to the committee) and passed on recommendations to the city council. Several entitlement communities in our sample also used needs assessment surveys or similar mass communications to gather input on local priorities. For instance, Dane County, Wisconsin, surveyed 1,500 county residents on its most recent consolidated plan and achieved a response rate of more than 30 percent, which allowed the county to obtain meaningful information on residents’ priorities. Cleveland CDBG officials stated that they found it most effective to reach out to community members where they gathered to discuss neighborhood issues. This included over 50 venues such as block club meetings, annual meetings of citizen organizations, and community festivals. 
At these events, staff members obtained citizen comments and used a short survey to capture citizen ideas. Officials in several other entitlement communities also reported using surveys to gain input on CDBG priorities, usually in connection with their consolidated or annual action plans. Similarly, a Chicago official stated that the city emailed more than 500 individuals and community-based organizations to inform them about a public hearing on unmet needs and community priorities. Throughout the application process, the entitlement communities we interviewed used methods such as the Internet, mass mailings, publishing scoring systems, workshops, and letters to communicate funding availability, requirements, and results to applicants and the public. Many entitlement communities published information about CDBG funds on their Web sites, making it publicly available to anyone with an Internet connection. In addition, officials from a few of the entitlement communities reported that they sent mailings to large lists of past and potential applicants to notify them about funding availability. For example, San Francisco officials told us that they sent a mailing to more than 900 community-based organizations about CDBG funding opportunities. The majority of entitlement communities in our sample provided clear information regarding their evaluation processes for applications. In particular, over half of the communities spelled out specific evaluation criteria, often with points assigned to those criteria, and they published this information in their applications (or RFP) or application guidelines, which they frequently made available online. Others used a more informal rating system based on general adherence to stated local priorities, national eligibility, or capacity to carry out the proposed project. Half of the entitlement communities in our sample also reported that they held workshops to explain the application process and answer questions. These workshops helped ensure that all applicants had the opportunity to receive consistent information about the funding process. Others provided technical assistance throughout the application process, or connected applicants with other organizations that could provide help. For example, the Dover administrator told us that the city's public notice of funding availability included a telephone number for technical assistance. The Dane County Web site had a question and answer page about the process, and its CDBG administrator stated that she had connected a municipal applicant with someone from another town that recently had completed a similar project with CDBG funds. Most of the entitlement community officials we interviewed told us that they sent notification letters to unsuccessful applicants at the end of the application process. Three entitlement communities in our sample—Bismarck, Dane County, and Detroit—made public a list of all applicants' funding results. The other communities published a list of funded projects or activities, typically in their action plans, which they made publicly available in hard copy, if not on the Web. In most cases, when entitlement communities published a list of funding recommendations in their draft action plans, the public had an opportunity to comment on or challenge those recommendations before they were finalized for submission to HUD. Officials from Detroit and Los Angeles also noted that they had a formal appeals process for unsuccessful applicants to contest funding results.
We spoke with nine representatives of community groups in Baltimore and Los Angeles, and most told us that, in general, they thought their localities' CDBG funding processes were transparent. However, a representative in Baltimore stated that organizations that had not previously received funding had more difficulty obtaining funding than incumbent organizations. The Baltimore CDBG official with whom we spoke corroborated this point, stating that newer applicants tended to be at a disadvantage compared with those with already successful programs. In addition, a representative in Los Angeles noted that the city's process was more difficult to navigate for less sophisticated or well-connected organizations. Two community representatives said that they understood there were limited funds and that some proposals would not get funded. One nonprofit representative in Baltimore noted that the list of all awarded projects in the action plan illustrated the city's many competing needs and helped him appreciate the difficulty the city faced in choosing which organizations would receive funding. States used program flexibility to distribute CDBG funds through varying combinations of three methods: competitive, open application, and formula. States also are required to describe their methods of distribution in annual plans and to consult with eligible non-entitlement community recipients in developing those methods. Most states employed multiple distribution methods, with a majority using a combination of competitive and open application processes. The five states in our sample also used different processes to implement the three methods of distribution. Non-entitlement community officials we interviewed generally found these processes to be transparent. To communicate methods of distribution and obtain applicant feedback, states in our sample generally used a combination of similar processes, including guidance and other documents, meetings, online resources, and intergovernmental organizations. States' use of these methods went beyond general requirements, such as public hearings, and the applicants we interviewed generally viewed states' efforts favorably. Our review of all 50 states' annual action plans showed that states used program flexibility to implement a variety of methods to distribute funds, but most states used some combination of three methods: competitive, open application, or formula. States used competitive distribution methods to allocate funds to numerous types of CDBG-eligible activities, with awards determined by a variety of application criteria and evaluation methods. States' competitive processes typically included one standard application deadline and ranked all eligible applicants in determining awards. States also used the open application distribution method to fund a variety of eligible activities that met certain threshold criteria as long as funds were available. Generally, with this process, states either did not establish an application deadline or the application submission period extended over several months. States' open application processes sometimes rated projects, but did not necessarily rank them against each other. The formula distribution method used population and other factors to distribute CDBG funds to all eligible non-entitlement communities through a non-competitive process.
From our review of all 50 states’ methods of distribution described in annual plans, we found that most states used a combination of the competitive and open application distribution methods to distribute funds to non-entitlement communities within defined CDBG-eligible categories, while a few states utilized a formula to distribute some funds. Four of the five states in our sample also used more than one method of distribution, but combinations varied (see table 2). Based on our review of action plans and on our interviews with officials from the five states, each allocated the majority of their CDBG funds under one method—two used competitive processes, two formulas, and one an open application process—and four used another method for the remainder. For example, Arizona distributed 85 percent of CDBG funds to non-entitlement communities through a formula based on population and poverty rates, but distributed the remaining 15 percent through a competitive process. Although states used different methods of distribution, all the states in our sample employed an application process by which units of local government applied for funding for each project. State officials reviewed applications to ensure projects were CDBG-eligible and met one of the program’s three national objectives. States also conducted additional monitoring to ensure projects matched application descriptions, including on-site reviews. For example, Georgia’s plan requires officials to conduct on-site visits during at least three stages of each CDBG project—prior to approving applications to ensure applications are accurate, prior to starting awarded projects to conduct a capacity assessment and review compliance requirements, and at least once after project work has been started to ensure continued compliance. The four states using competitive processes demonstrated similarities in applicant criteria and rating systems. Three of the four states that used competitive processes allowed all non-entitlement communities to apply for funds and used a point-based scoring system to evaluate applicants. For example, Georgia’s CDBG funds could be awarded to all eligible units of local government. Its competitive application process rated and ranked all applicants on a 500-point scale across nine different factors, such as demographics need, impact, and leverage of other resources. All four states used evaluation criteria that included benefits to low-income individuals, capacity to execute projects, and potential impact. When using open application or formula-based processes, states tended to vary more with respect to criteria used to assess applications or distribute awards. All three states that used an open application method to distribute some funds varied in some of the criteria, beyond meeting national objectives, they used to evaluate and approve projects. For example, South Dakota factored in leveraging from other funding sources and maximization of local resources to evaluate all projects, while Virginia used varying criteria across five separate open application programs that funded different categories of projects. Furthermore, the two states that mostly used a formula to distribute their program funds factored population in their calculations. Arizona also considered a poverty rate indicator and allowed recipients to decide by region whether to receive funding every year or to alternate funding with neighboring communities for up to 4 years in order to apply funds to higher-cost projects. 
Pennsylvania's formula factored in the level of local government receiving funds, and provided varying base funding amounts to counties, cities, and other types of municipalities before factoring in population. In all five states we reviewed, the priorities states outlined in their action plans played a role in the evaluation and approval of CDBG projects across all methods of distribution. States used priorities to limit categories of eligible projects beyond broader national objectives, or to more specifically incorporate priorities as scoring components in some competitive distribution processes. For example, Arizona limited its formula grants, which represented 85 percent of its non-entitlement community CDBG allocation, to housing or community development projects, and also used state priorities to limit activities funded by competitive grants. Two of the four states that used multiple methods of distribution also used priorities to segment eligible project categories by funding method. Virginia created priority categories, with separate applications and funding pools. For instance, its open application programs included community development, planning, and urgent needs, and its competitive grant priorities included housing and infrastructure projects. While non-entitlement community officials we interviewed in these states generally noted significant flexibility in selecting CDBG-eligible projects, state priorities for CDBG funds were a consideration in their selection of projects to present for funding. Local representatives in two states noted that their knowledge of state priorities and past experience with their states' CDBG distribution processes significantly influenced the types of projects for which they submitted funding applications. For example, most non-entitlement communities in South Dakota used CDBG funds for water and sewer projects, which were identified as a highest-need priority by state officials and difficult to fund from other available sources. States have flexibility in setting funding priorities, but three states noted that CDBG-required consultation with non-entitlement communities and local hearings were factors in developing funding priorities and methods of distribution. For example, South Dakota officials attributed their prioritization of water and sewer projects to local feedback. States must document these needs assessment processes for HUD's review, which all officials confirmed occurs. Two of the states required recipient communities to develop a formal plan based on their local needs assessments. For instance, Georgia requires non-entitlement community plans to outline needs in areas such as housing, infrastructure, and quality of life. Three states conducted additional meetings beyond the one required public hearing. For example, Arizona officials conducted regional meetings in each of their Councils of Government to solicit community input. Each of our five sample states used a body similar to many entitlement communities' citizen advisory committees in the needs assessment process, though these groups varied in format. For example, Virginia's Planning District Commissions communicated with non-entitlement communities in their areas to identify needs and develop regional CDBG funding priorities. The state then used these priorities as part of the scoring process for competitive fund pools and also as evaluation criteria for open application grants.
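The structural differences among the three distribution methods described above can be summarized in a brief sketch. In the following Python example, the factor names, point weights, threshold criteria, and population/poverty weights are hypothetical stand-ins rather than the actual Georgia, South Dakota, Virginia, Arizona, or Pennsylvania criteria; the sketch only illustrates how competitive ranking, open application thresholds, and formula allocation differ.

```python
# Illustrative sketch of the three state distribution methods described above.
# Factor names, weights, and thresholds are hypothetical, not actual state criteria.

def competitive_ranking(applications, scoring_weights):
    """Score each application on weighted factors (e.g., a point scale) and rank all applicants."""
    scored = []
    for app in applications:
        points = sum(app["factors"].get(factor, 0) * weight
                     for factor, weight in scoring_weights.items())
        scored.append((points, app["applicant"]))
    return sorted(scored, reverse=True)

def open_application_decision(app, remaining_funds):
    """Fund any eligible application that meets threshold criteria, as long as funds remain."""
    eligible = app["meets_national_objective"] and app["leverages_other_funds"]
    return eligible and app["request"] <= remaining_funds

def formula_allocation(communities, pool):
    """Split a funding pool among all eligible communities using population and poverty."""
    weights = {c["name"]: 0.5 * c["population"] + 0.5 * c["persons_in_poverty"]
               for c in communities}
    total = sum(weights.values())
    return {name: pool * w / total for name, w in weights.items()}

if __name__ == "__main__":
    apps = [
        {"applicant": "Town X", "factors": {"need": 80, "impact": 70, "leverage": 60}},
        {"applicant": "Town Y", "factors": {"need": 90, "impact": 50, "leverage": 40}},
    ]
    print(competitive_ranking(apps, {"need": 2.0, "impact": 2.0, "leverage": 1.0}))
    print(open_application_decision(
        {"meets_national_objective": True, "leverages_other_funds": True, "request": 250_000},
        remaining_funds=400_000))
    print(formula_allocation(
        [{"name": "Town X", "population": 4_000, "persons_in_poverty": 900},
         {"name": "Town Y", "population": 2_500, "persons_in_poverty": 700}],
        pool=1_000_000))
```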
Other governing stakeholders also factored into state priorities and methods of distribution. According to state officials, Pennsylvania, for example, requires new legislation to make any changes to its formula distribution process, which officials noted would be very difficult to revise and which has remained in place since 1984. In South Dakota, the governor's office gives final approval for all funded CDBG projects after reviewing the recommendations of program officials. All states utilized one agency to lead administrative efforts in the CDBG distribution process, as required by program regulations, but each state also involved additional government offices and/or legislative bodies in the overall decision-making process. Officials in the sample states generally noted that, unlike entitlement communities, most of their non-entitlement communities rarely used nonprofit organizations and other subrecipients to execute CDBG-funded projects in their local communities. Since all funds flow directly to units of local government, these entities must contract with any subrecipient through a separate agreement. Two of the states in our sample indicated their CDBG programs most commonly funded infrastructure projects, and noted that local government agencies typically executed these projects. However, officials from three of our sample states noted that some of their non-entitlement communities used subrecipients. For instance, Georgia officials noted that they encouraged non-entitlement communities to use subrecipients when the subrecipients possessed better capacity for a given project. Additionally, South Dakota officials noted that they have a process in place to review and approve all subrecipients used by non-entitlement communities. All five states in our sample noted that they communicated their methods of distribution to non-entitlement communities and the public through their required annual plans. States also made stakeholders aware of distribution methods in several other ways—for example, through additional publications, workshops, online support systems, and intergovernmental organizations (see table 3). Three states published additional documents focused specifically on the CDBG program that supplemented their annual and consolidated plans. For example, Virginia's annual CDBG program design document provided details on program changes and eligible communities, and other information beyond basic plan requirements. Two states held workshops outlining application procedures and methods of distribution. For instance, Georgia's annual applicant workshop informed local government representatives and other interested parties of CDBG application procedures, and allowed participants to ask questions and share information. Georgia also provided applicants and citizens with an online customer service management system that addressed questions on methods of distribution. Three states worked with intergovernmental organizations (regional bodies that represent multiple local governments) to communicate methods of distribution to non-entitlement communities. For instance, Arizona's Councils of Government provided assistance with the application process and developed funding cycles for distribution of formula grants. Non-entitlement community officials we interviewed in all five states noted that their states' distribution methods generally were transparent and communicated sufficiently.
Sample states also used several communication methods to provide feedback to applicants on funding decisions, application deficiencies, and other CDBG-related information (see table 4). Each state provided letters or other documentation to applicants to inform them about funding decisions and amounts, follow-up procedures, and other relevant information. Three states also posted application information online. For example, Virginia's CDBG funding press release is available online and contains details on all applicants that received an award through its competitive process, including amounts and projects funded. States with a competitive process to distribute some CDBG funds used letters and online feedback to convey decisions for both funded and denied applicants, but generally did not make public details on unfunded applicants. For example, Georgia sends denied applicants a detailed letter containing their score, rank, and information on why the application lost points for various criteria, and identifying ways they could improve their application for the next funding cycle. Non-entitlement community officials we interviewed in Georgia indicated that they found this process useful and were able to improve on declined applications to gain funding awards in subsequent years. States that used open application and formula processes to distribute CDBG funds also provided applicant feedback. The states we interviewed also provided some technical support as part of their feedback process, with all five states willing to conduct meetings or telephone conversations with non-entitlement communities upon request. HUD staff from the 17 field offices that monitor the entitlement communities and states in our sample reported very few findings or concerns related to methods of distribution. HUD requires states to describe their methods of distribution in their annual action plans, and HUD has authority to monitor methods of distribution as part of its audit and review responsibilities. In performing its reviews, HUD may check to determine whether the state has distributed its funds in conformance with the method of distribution described in its annual action plan. Entitlement communities are not required to describe their methods of distribution. Instead, entitlement communities must provide a description of the activities they will undertake during the next year to address their priority needs and objectives. As we previously reported, and as we recently verified with several HUD field office staff, HUD uses a risk-based strategy to monitor CDBG recipients' compliance with the program rules because it has limited monitoring resources. Field office staff rate all recipients on applicable factors under four categories (financial, management, satisfaction, and services) and focus their reviews on high-risk recipients. HUD considers recipients that receive a score of 51 or greater to be high-risk. Many field office staff with whom we spoke reported very few findings or concerns related to methods of distribution from their monitoring site visits for the CDBG program over the last few years. Our review of HUD's monitoring reports confirmed that HUD staff reported very few findings or concerns related to methods of distribution. This is due in part to program design—states and entitlement communities decide which activities to fund and how to distribute funds.
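HUD's risk-based selection of recipients for review, as described above, can be illustrated with a brief sketch. Only the four category names and the 51-point high-risk threshold come from the description above; the individual scores below are hypothetical.

```python
# Sketch of HUD's risk-based monitoring approach described above: recipients are scored
# under four categories and those scoring 51 or greater are treated as high-risk.
# The individual scores below are hypothetical; only the categories and the
# 51-point threshold come from the report text.

HIGH_RISK_THRESHOLD = 51
CATEGORIES = ("financial", "management", "satisfaction", "services")

def risk_score(category_scores):
    """Total a recipient's scores across the four monitoring categories."""
    return sum(category_scores.get(category, 0) for category in CATEGORIES)

def select_for_onsite_review(recipients):
    """Return the recipients whose total score marks them as high-risk."""
    return [name for name, scores in recipients.items()
            if risk_score(scores) >= HIGH_RISK_THRESHOLD]

if __name__ == "__main__":
    recipients = {
        "Entitlement City A": {"financial": 20, "management": 18, "satisfaction": 8, "services": 10},
        "State B": {"financial": 10, "management": 12, "satisfaction": 5, "services": 9},
    }
    print(select_for_onsite_review(recipients))  # ['Entitlement City A'] (total score 56)
```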
Because HUD monitors the program using risk analysis and because of the flexibility granted to entitlement communities and states to distribute funds, issues related to the choice of methods of distribution are not rated as high-risk. HUD's monitoring tends to focus on higher-risk areas such as ensuring funds are spent on eligible activities that meet one of the national objectives. For example, in April 2010 the Portland field office completed an on-site monitoring review of the City of Gresham's CDBG program. According to the field office staff, the City of Gresham was deemed high-risk because it had not been reviewed since 2005 and the city had engaged in new activities since that time. According to the on-site monitoring letter summarizing the results of the review, HUD staff assessed three areas of the city's program, including compliance with program eligibility and national objectives. HUD staff found that all of the CDBG activities were eligible for assistance, although one of seven projects they reviewed needed to be recategorized in HUD's management information system. Furthermore, because states are required to describe their methods of distribution, many HUD staff told us they also monitored states' conformance with the methods of distribution described in their action plans. For instance, in March 2008 the Atlanta field office reviewed the rating and ranking of applications for Georgia's regular competition for CDBG awards and determined that the state's system for reviewing applications had remained basically the same for at least 10 years. In addition, staff in one field office explained that for the small portion of funds distributed competitively by the state they oversee, they reviewed RFP and award processes to ensure that they mirrored the state's planned approach. Staff in two other field offices noted that they looked at a sample of applications the state received to see whether the state rated and ranked them in accordance with its stated method of distribution. In addition to risk-based monitoring (on-site reviews), HUD staff conduct annual reviews of states' and entitlement communities' required annual performance reports. Many field office staff with whom we spoke said they compared the actual activities to those proposed in the annual action plans to ensure that entitlement communities and states were complying with the goals and objectives identified in their plans. For instance, the Los Angeles field office reviewed the Los Angeles County 2008 program year performance report and noted that the county reported activities and accomplishments that related back to strategies described in its consolidated plan. Also, the San Francisco field office reviewed Arizona's 2007 program year performance report and concluded that the state undertook activities that addressed the state's priority needs identified in its consolidated plan. In general, HUD field staff noted that the few problems identified during their reviews were administrative in nature and easily resolved. For instance, HUD staff at several field offices noted that a common problem with the consolidated and action plans, although not related to methods of distribution, related to the certifications that grantees had to submit to HUD. Grantees sometimes submitted outdated certification forms, failed to submit a renewal certification to replace the expired form on file, or submitted a certification without a signature.
HUD staff also noted a few cases relating to requirements for states to describe their methods of distribution. In one instance, the state had changed the amount of funding dedicated to a certain type of project but did not revise its method of distribution description to reflect this change. In two other instances, HUD staff recommended that their respective states include sufficient information in their methods of distribution to meet HUD’s requirement relating to descriptions of all selection criteria. Officials from one state told us that they described the criteria and rating system they would use to evaluate applications in their RFP package but the description in their program design was not as detailed. Although this information was disclosed elsewhere, they explained that they began providing more details about their criteria in the method of distribution about 2 years ago. In the other instance, HUD staff told us that the state resolved the issue by referring HUD staff to additional information the state included in its separate method of distribution document. HUD also requires CDBG grantees to develop and follow a detailed plan that provides for, and encourages, citizen participation. The plan must provide citizens with reasonable and timely access to local meetings and provide for public hearings to obtain citizen views on proposals and answer their questions. Several of the HUD staff with whom we spoke described how they annually reviewed the citizen participation requirement. For instance, staff in one field office stated that they reviewed the citizen participation processes of the entitlement communities they oversee to determine whether the process was open and the community was reaching out to find eligible projects that met local needs. While processes varied by entitlement communities, HUD accepted variation so long as the citizen participation plan described how the process worked and citizens had an opportunity to participate. According to most field office staff with whom we spoke, grantees generally met the citizen participation requirement. For example, staff in one field office told us that one of the entitlement communities they oversee holds hearings in identified neighborhoods each year to obtain public input for its advisory committee. Throughout the citizen participation process, citizens have an opportunity to provide input on the consolidated planning process as well as discuss their concerns about the city or state’s distribution process. However, HUD field staff told us that citizens rarely used these venues to discuss issues related to how a city or state distributed its funds. Most of the discussions at these hearings focused on areas of need or on projects that did not receive funding. Citizens also can contact their local HUD office to express concerns about the CDBG program. Overall, several of the field office staff with whom we spoke stated that they had received a few complaints but none pertained to methods of distribution. Several field office staff told us that HUD investigates complaints they receive to ensure that grant recipients are in compliance with the program requirements. Local non-entitlement community officials with whom we spoke confirmed that they have not contacted HUD to express concerns about the states’ methods of distribution. 
The lack of concerns raised about methods of distribution through these venues corroborates HUD officials’ findings that methods of distribution are not a high-risk area for compliance with program requirements. We provided HUD with a draft of this report for their review and comment. HUD provided technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the Secretary of the Department of Housing and Urban Development and other interested congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are in appendix III. Our objectives were to (1) identify and describe examples of the various methods by which entitlement communities use and distribute their CDBG funds to individual projects within their jurisdiction; (2) identify and describe examples of the various methods by which states distribute CDBG funds to non-entitlement communities; and (3) describe and examine HUD’s role in overseeing the methods by which entitlement communities and states distribute their CDBG funds. To identify and describe the methods by which entitlement communities use and distribute their Community Development Block Grant (CDBG) funds to individual projects within their jurisdictions, we conducted a literature review and examined reports on the CDBG program and a report on managing CDBG grantees. We also interviewed CDBG experts and representatives of several national organizations that represent entitlement cities and counties and potential CDBG subrecipients to gather general information on how entitlement communities distribute their CDBG funds. These organizations included the National League of Cities, the National Association of Housing and Redevelopment Officials, the National Alliance of Community Economic Development Associations, the National Association for County, Community and Economic Development, and the National Community Development Association. We selected a sample of 20 entitlement communities for detailed interviews (see fig. 2). Ten of these communities (nine cities and one county) were the largest in terms of fiscal year 2010 CDBG allocations from the Department of Housing and Urban Development (HUD). To select the other 10 entitlement communities, we divided the list of 1,153 remaining entitlement communities into cities and counties. Within the city group, we divided the communities into four regions of the country (Northeast, Midwest/Central, South, and West). We then randomly drew 2 entitlement cities from each of the 4 regions and 2 counties from the overall list of counties—for a total sample of 17 entitlement cities and 3 entitlement counties, which matched the national distribution of cities to counties (approximately 85 percent and 15 percent respectively). Since we selected entitlement communities for comparative and illustrative purposes, results from this nongeneralizable sample cannot be used to make inferences about all entitlement communities nationwide. We interviewed the CDBG administrators from each of these entitlement communities to determine how they distributed their funds and how they shared this information with the public. 
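The selection of the 10 non-largest entitlement communities described above was essentially a stratified random draw (two cities from each of four regions plus two counties). A minimal Python sketch of that kind of draw follows; the community names and short lists are placeholders for the actual universe of remaining entitlement communities, and the seed is arbitrary.

```python
# Minimal sketch of the stratified random draw described above: after setting aside
# the 10 largest grantees, draw 2 cities from each of 4 regions plus 2 counties.
# The community names and lists are placeholders, not the actual entitlement universe.
import random

def draw_sample(cities_by_region, counties, cities_per_region=2, n_counties=2, seed=None):
    """Randomly draw a fixed number of cities per region, plus counties from the full county list."""
    rng = random.Random(seed)
    sample = []
    for region, cities in cities_by_region.items():
        sample.extend(rng.sample(cities, cities_per_region))
    sample.extend(rng.sample(counties, n_counties))
    return sample

if __name__ == "__main__":
    cities_by_region = {
        "Northeast": ["City NE1", "City NE2", "City NE3"],
        "Midwest/Central": ["City MW1", "City MW2", "City MW3"],
        "South": ["City S1", "City S2", "City S3"],
        "West": ["City W1", "City W2", "City W3"],
    }
    counties = ["County 1", "County 2", "County 3"]
    print(draw_sample(cities_by_region, counties, seed=2010))
```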
We visited Baltimore, Chicago, Los Angeles, Los Angeles County, New York, San Francisco, and South Gate. We conducted the other interviews by telephone. We also reviewed these communities’ annual action plans and other relevant documentation. Lastly, we judgmentally selected three communities from the communities we visited, taking into account geographic and program diversity, and interviewed stakeholders involved in the CDBG process, such as community organizations and members of citizen advisory committees. In Baltimore and Los Angeles, we interviewed recent and current nonprofit CDBG subrecipients to discuss their understanding of their cities’ processes for distributing funds, as well as the transparency of those processes. In Chicago, we interviewed the executive members of the Community Development Advisory Committee, which comprises community members who provide input on CDBG funding priorities and recommendations to discuss their role in Chicago’s funding process and their views about how the process works. To identify and describe examples of the various methods by which states distribute CDBG funds to non-entitlement communities, we reviewed the most recently available annual action plans covering 2008 through 2010 (required and reviewed by HUD) for all 50 states to identify the types of methods of distribution. From this review, we judgmentally selected five states that represented a variety of distribution methods to conduct interviews (see fig. 2). In selecting states, we considered the distribution methods, geographic dispersion (at least one state from the Northeast, Midwest/Central, South, and West regions), funding amount, and states and regions represented by the 20 entitlement communities we reviewed. We interviewed the CDBG administrators for each state to obtain an understanding of their methods of distribution and the level of transparency in their process and reviewed relevant documentation. We also interviewed officials from two non-entitlement communities from each sample state to obtain their views on their respective state’s CDBG distribution process and how information is communicated to the public. Finally, we interviewed a representative from the Council of State Community Development Agencies to obtain general information on how its members distribute their CDBG funds. To describe and examine HUD’s role in overseeing the methods by which entitlement communities and states distribute their CDBG funds, we reviewed the relevant statutes, regulations, and HUD’s policies and procedures. In addition, we interviewed HUD staff from the 17 field offices that oversee our sampled entitlement communities and states to gain an understanding of their policies and practices relating to oversight of methods of distribution and to determine how they ensure that states complied with the requirement to publish their distribution methods. To confirm our understanding of HUD’s monitoring efforts and annual review of the performance and evaluation reports, we reviewed HUD documents summarizing the results of their reviews. We conducted this performance audit from November 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We interviewed the Community Development Block Grant (CDBG) administrators from each of 20 entitlement communities in our sample to determine how they distributed their funds (see table 5). We also reviewed these communities’ annual action plans and other relevant documentation. For more information about how we selected these communities, see appendix I. In addition to the contact named above, Kay Kuhlman, Assistant Director; Rudy Chatlos; Geoffrey King; Yola Lewis; Kristeen McLain; John McGrail; Lisa Reynolds; and Barbara Roesmann made key contributions to this report.
The Housing and Community Development Act of 1974 (act) creating the Community Development Block Grant (CDBG) program provides entitlement communities (metropolitan cities and urban counties) and states with significant discretion in how they distribute funds for eligible activities. Because of this discretion, entitlement communities may use a variety of processes to select individual projects, and states may also use different methods to distribute funds to non-entitlement communities. GAO was asked to report on (1) the various methods by which entitlement communities use and distribute their CDBG funds to individual projects within their jurisdictions; (2) the various methods by which states distribute CDBG funds to non-entitlement communities; and (3) HUD's role in overseeing these methods. GAO interviewed CDBG administrators for 20 entitlement communities (the 10 largest by funding and 10 randomly selected) and 5 states (reflecting a variety of methods used and geography) and reviewed documents related to their CDBG funding decisions, including the annual action plans for all 50 states. GAO also spoke with CDBG stakeholders, reviewed relevant statutes and regulations, interviewed HUD field office staff, and reviewed monitoring documentation. Reflecting the program's flexibility, the 20 entitlement communities in GAO's sample distributed CDBG funds by various methods, but most used some level of competition in awarding funds. Distribution priorities and practices were based on various assessments of local needs, and in some communities, the funding decisions were also part of the local budget process. To communicate processes and award decisions to the public, all the communities in GAO's sample held at least two public hearings, more than half formed citizen advisory committees, and a few conducted needs assessment surveys, among other outreach methods. Sampled entitlement communities varied in the level of detail of the criteria they used to evaluate applications, but they made the information available to potential applicants through published instructions, workshops, or the Internet. From a review of all 50 states' methods of distribution described in annual action plans, GAO found that states used a formula, competition, open application, or a combination of methods to distribute funds to non-entitlement communities. Most states used a combination of competitive and open application processes. Whatever their method of distribution, the five states in GAO's sample evaluated applications to some degree against state priorities, which reflected a variety of needs assessments. States using some competitive distribution processes also incorporated their priorities into the scoring of applicants. All five states communicated their methods of distribution to non-entitlement communities and the public through their required annual plans and additional publications, workshops, and intergovernmental organizations. Of the non-entitlement community officials with whom GAO spoke in 10 localities, all agreed that their states clearly communicated their distribution process. HUD staff from 17 field offices (which monitor the entitlement communities and states in GAO's sample) reported very few findings or concerns related to methods of distribution. Staff told GAO that the lack of findings was due partly to program design (entitlement communities and states can choose distribution methods) and partly to HUD's risk-based monitoring.
Because of the flexibility granted to entitlement communities and states, issues related to distribution methods are not rated high-risk. HUD has focused on higher-risk areas, such as ensuring that funds were spent on eligible activities. However, because states distribute funds to other government jurisdictions, they are required to describe their distribution methods in their plans. As part of its monitoring reviews, HUD staff check that states actually used the methods of distribution described in their plans. Though few issues arose from the reviews, in a few cases HUD staff recommended that states enhance these descriptions. HUD staff also monitor grantees to ensure that public hearing and notice requirements have been met. Staff noted that none of the complaints to HUD offices had pertained to methods of distribution.
The Bureau’s mission is to collect and provide comprehensive data about the nation’s people and economy. Its core activities include conducting decennial, economic, and government censuses; conducting demographic and economic surveys; managing international demographic and socioeconomic databases; providing technical advisory services to foreign governments; and performing other activities such as producing official population estimates and projections. One of the Bureau’s most important functions and largest undertakings is conducting the Decennial Census, which is mandated by the Constitution and provides data that are vital to the nation. The information collected is used to apportion seats in the House of Representatives, realign the boundaries of legislative districts, and allocate billions of dollars in federal financial assistance. The Bureau is part of the Department of Commerce and is headed by a Director. It is organized into directorates corresponding to key programmatic and administrative functions, including the IT Directorate, led by the Associate Director for IT and Chief Information Officer, and the 2020 Census Directorate, led by the Associate Director for Decennial Census Programs.

The Bureau’s 2020 Census Operational Plan, issued in October 2015, outlines 350 redesign decisions that the Bureau has either made or is planning to make, largely by 2018. The Bureau has determined that about 51 percent of the design decisions are either IT-related or partially IT-related (84 IT-related and 94 partially IT-related), and the Bureau reported that, as of April 2016, it had made about 58 percent of these decisions (48 IT-related and 55 partially IT-related). (See fig. 1 below.) Examples of decisions that have been made include the following:

Internet response—For the first time on a nationwide scale, the Bureau will allow individuals and households to respond to the census over the Internet from a computer, mobile device, or other Internet-connected device.

Non-ID processing with real-time address matching—The Bureau will provide each household with a unique ID by mail. However, users may also respond to the online survey without the unique ID by entering their address. This operation includes conducting real-time matching of respondent-provided addresses.

Non-response follow-up—If a household does not respond to the census by a certain date, the Bureau will send out employees to visit the home. These enumerators will use a census application, on a mobile device provided by the Bureau, to capture the information collected during in-person interviews. The Bureau will also manage the case workload of these enumerators using an operational control system that automatically assigns, updates, and monitors cases during non-response follow-up.

Administrative records—As we reported in October 2015, the Bureau is working on obtaining and using administrative records from other government agencies, state and local governments, and third-party organizations to reduce the workload of enumerators in their non-response follow-up work. For example, the Bureau plans to use administrative records to identify vacant housing units to remove from enumerators’ workloads, count households that did not return census questionnaires, predict the best times to complete non-response follow-up, and help process responses it receives either on paper or over the Internet that do not have a census ID number on them (non-ID processing).
Mobile devices—The Bureau plans to award a contract under which a vendor would provide commercially available mobile phones and the accompanying service to enumerators on the Census Bureau’s behalf; the enumerators will use these devices to collect census data. This approach is referred to as the device-as-a-service strategy.

Cloud computing—The Bureau plans to use a hybrid cloud solution wherever feasible, and has decided it will use cloud services for the Internet response option as well as for non-ID processing with real-time address matching.

Address canvassing—The Bureau has decided to reengineer its address canvassing process to reduce the need to employ field staff to walk every street in the nation to update its address list and maps. For example, the Bureau plans to first conduct in-office address canvassing using aerial imagery, administrative records, and commercial data before sending staff into the field.

Figure 2 provides an overview of additional decisions and assumptions for the 2020 Census, resulting from the October 2015 operational plan. Examples of decisions that have not been finalized as of May 2016 include the following.

Invalid return detection and non-ID response validation—The Bureau has not decided on its approach for identifying whether fraudulent returns have been submitted for the 2020 Census or the criteria and thresholds for deciding whether further investigation, such as field follow-up, may be needed.

Solutions architecture—While the Bureau has established a notional solutions architecture for the 2020 Census, it has not decided on the final design.

Internet response for island areas—The Bureau has not decided on the extent to which the Internet self-response option will be available for island area respondents.

Additional uses of cloud—While Bureau officials have decided on select uses of cloud-based solutions, decisions remain on additional possible uses. For example, the Bureau is exploring whether it will use a cloud service provider to support a tool for assigning, controlling, tracking, and managing enumerators’ caseloads in the field.

The Bureau’s redesign of the census relies on the acquisition and development of many new and modified systems. Several of the key systems are expected to be provided as CEDCAP enterprise systems under the purview of the IT Directorate. According to Bureau officials, the remaining systems (referred to as non-CEDCAP systems) are to be provided by the 2020 Census Directorate’s IT Division or other Bureau divisions. The 2020 Census Directorate established a Systems Engineering and Integration program office that is to serve as the technical arm of the 2020 Census program and is responsible for ensuring that all the system capabilities needed for the 2020 Census are developed and delivered, including integration of the CEDCAP and non-CEDCAP systems. The CEDCAP program is intended to provide data collection and processing solutions, which include systems, interfaces, platforms, and environments, to support the Bureau’s entire survey life cycle, including survey design; instrument development; sample design and implementation; data collection; and data editing, imputation, and estimation. The program consists of 12 projects, which have the potential to offer numerous benefits to the Bureau’s survey programs, including the 2020 Census program, such as enabling an Internet response option; automating the assignment, control, and tracking of enumerator caseloads; and enabling a mobile data collection tool for field work.
Eleven of these projects are intended to deliver one or more IT solutions. The twelfth project—IT Infrastructure Scale-Up—is not intended to deliver IT capabilities, solutions, or infrastructure; rather, it is expected to provide funding to the other relevant projects to acquire the necessary hardware and infrastructure to enable 2020 Census systems to scale to accommodate the volume of users. Table 1 describes the objectives of each project.

These 11 projects are to provide functionality incrementally over the course of 13 product releases. The product releases are intended to support major tests and surveys at the Bureau through 2020. Of the 13 product releases, 7 are intended to support the 6 remaining major tests that the 2020 Census program is conducting as it prepares for the 2020 Census, as well as 2020 Census live production. The remaining 6 releases are to support other surveys, such as the ACS and the Economic Census. Most recently, the CEDCAP program had been working on delivering the functionality needed for the third product release, which was to support a major census test (referred to as the 2016 Census Test) conducted by the 2020 Census program to inform additional decennial design decisions. The 2018 Census End-to-End Test, noted below, is critical to testing all production-level systems and operations in a census-like environment to ensure readiness for the 2020 Census. The 2020 Census program plans to begin this test in August 2017. Figure 3 identifies which of the 13 CEDCAP product releases support the 2020 Census versus other surveys, as of May 2016.

Each product release is decomposed into increments that follow a 40-day delivery schedule. Each of these 40-day increments focuses on a subset of functionality to deliver at the end of the increment. The Bureau determined that iterative deliveries of increments of functionality would best suit the CEDCAP program, instead of a “big bang” delivery approach for all solutions across all census and survey operations. Officials reported that such a big bang approach would be impractical given the complexity of the program.

To manage this complex program, the CEDCAP Program Management Office consists of three distinct functions:

Program Governance, Planning, and Execution: This function is led by the Program Manager and is responsible for strategically leading, integrating, and managing the 12 projects.

Business Operations: Led by the Assistant Chief of Business Operations, this function is responsible for project management, risk, issue, schedule, and budget activities. It focuses on the processes that are critical to ensuring seamless integration across projects.

Technical Integration: This function is led by the Chief Program Architect, Chief Program Engineer, and Chief Security Engineer, and is responsible for supporting integration and execution of projects, participating in technical reviews, and establishing technical processes related to design, development, testing, integration, and IT security.

In addition to the Program Management Office, the Bureau established the Office of Innovation and Implementation to articulate the overall business needs across the enterprise and ensure they are fulfilled by the capabilities to be delivered by CEDCAP. The Office of Innovation and Implementation is responsible for gathering and synthesizing business requirements across the Bureau, including for the 2020 Census Directorate, and delivering them to CEDCAP.
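The 40-day delivery cadence described above can be pictured with a brief, illustrative sketch in Python. The release start date and increment count below are assumptions chosen for illustration only; they are not drawn from CEDCAP's actual release plan.

```python
# A minimal sketch of a fixed-length delivery cadence like the 40-day
# increments described above. The release start date and increment count
# below are hypothetical; they are not CEDCAP's actual schedule.
from datetime import date, timedelta

INCREMENT_LENGTH_DAYS = 40  # length of each delivery increment

def increment_windows(release_start, increment_count):
    """Return (start, end) date pairs for each increment in a release."""
    windows = []
    start = release_start
    for _ in range(increment_count):
        end = start + timedelta(days=INCREMENT_LENGTH_DAYS)
        windows.append((start, end))
        start = end  # the next increment begins where the previous one ends
    return windows

# Hypothetical release supporting a census test, planned as three increments.
for number, (start, end) in enumerate(increment_windows(date(2016, 9, 1), 3), start=1):
    print(f"Increment {number}: {start} to {end}")
```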
Our prior work has identified the importance of having sound management processes in place to help the Bureau as it manages the multimillion dollar investments needed for its decennial census. For the last decennial census, we issued multiple reports and testimonies from 2005 through 2010 on weaknesses in the Bureau’s acquisition, management, and testing of key 2010 census IT systems. For example, we reported on significant issues with the Census Bureau’s Field Data Collection Automation program, which was intended to develop custom handheld mobile devices to support field data collection for the census, including in-person follow-up with those who did not return their census questionnaires (nonresponse follow-up). However, as we testified in March 2008, the program was experiencing significant problems, including schedule delays and cost increases from changes in requirements. Due in part to these technology issues the Bureau was facing, we designated the 2010 Census a high-risk area in March 2008. In April 2008, the Bureau decided not to use the handheld devices for nonresponse follow-up. Dropping the use of handheld devices for nonresponse follow-up and replacing them with a paper-based system increased the cost of the census by up to $3 billion. Although the Bureau worked aggressively to improve the paper-based system that replaced the handheld computers, we reported in December 2010 that the paper-based system also experienced significant issues when it was put in operation.

Since the 2010 Census, we have issued additional reports and testimonies on weaknesses in the Bureau’s efforts to institutionalize IT and program management controls for the 2020 Census. Relevant reports include the following:

In September 2012, we reported that the Bureau had taken steps to draft new processes to improve its ability to manage IT investments and system development, and to improve its IT workforce planning. However, we found that additional work was needed to ensure that these processes were effective and successfully implemented across the Bureau, such as finalizing plans for implementing its new investment management and systems development processes across the Bureau, conducting an IT skills assessment and gap analysis, and establishing a process for directorates to coordinate on IT workforce planning. The Bureau has fully implemented our recommendations to address these weaknesses by, for example, finalizing its investment management process, conducting an enterprise-wide IT competency assessment and gap analysis, and developing action plans to address the identified gaps.

As we reported in November 2013, the Bureau was not producing reliable schedules for two efforts related to the 2020 Census: (1) building a master address file and (2) 2020 Census research and testing. For example, the Bureau did not include all activities and required resources in its schedules, or logically link a number of the activities in a sequence. We recommended that the Bureau take actions to improve the reliability of its schedules, including ensuring that all relevant activities are included in the schedules, complete scheduling logic is in place, and a quantitative risk assessment is conducted. We also recommended that the Bureau undertake a robust workforce planning effort to identify and address gaps in scheduling skills for staff who work on schedules. The Bureau has taken steps to implement these recommendations but has not fully implemented them.
In April 2014 and February 2015, we reported on the Bureau’s lack of prioritization of IT decisions related to the 2020 Census. Specifically, in April 2014, we reported that the Bureau had not prioritized key IT research and testing needed for its 2020 Census design decisions. Accordingly, we recommended that the Bureau prioritize its IT-related research and testing projects. The Bureau had taken steps to address this recommendation, such as releasing a plan in September 2014 that identified research questions intended to inform the 2020 Census operational design decisions. In February 2015, however, we reported that the Bureau had not determined how key IT research questions that were identified in the September 2014 plan would be answered—such as the expected rate of respondents using its Internet response option or the IT infrastructure that would be needed to support this option. We recommended that the Bureau, among other things, develop methodologies and plans for answering key IT-related research questions in time to inform design decisions. The Bureau has taken steps to implement the recommendations, such as releasing a preliminary 2020 Census Operational Plan that documents many key IT-related decisions; however, many other IT-related questions, including the ones that were identified in our report, are to remain unanswered until 2016 through 2018.

As a result of the Bureau’s challenges in implementing key IT internal controls and its looming deadline, we identified CEDCAP as an IT investment in need of attention in our February 2015 High-Risk report. Further, we testified in November 2015 that key IT decisions needed to be made soon because the Bureau was less than 2 years away from end-to-end testing of all systems and operations to ensure readiness for the 2020 Census, leaving limited time for implementation. We emphasized that the Bureau had deferred key IT-related decisions and that it was running out of time to develop, acquire, and implement the systems it would need to deliver the redesign and achieve its projected $5.2 billion in cost savings. In addition, we stated that while the Bureau had made improvements in some key IT management areas, it still faced challenges in the areas of workforce planning and information security because it had yet to fill key positions—most concerning was the lack of a permanent chief information officer.

We have also reported extensively on the increasing security risks facing federal agencies’ systems and data. These risks were recently illustrated by the data breaches at the Office of Personnel Management, which affected millions of current and former federal employees. As we have reported, protecting the information systems and the information that resides on them and effectively responding to cyber incidents is critical to federal agencies because the unauthorized disclosure, alteration, and destruction of the information on those systems can result in great harm to those involved. Since 1997, we have designated federal information security as a government-wide high-risk area. In the February 2015 update to our high-risk list, we further expanded this area to include protecting the privacy of personally identifiable information that is collected, maintained, and shared by both federal and nonfederal entities. The data collected by the Census Bureau, or shared with the Bureau by other federal agencies, contain personally identifiable information and are protected by federal law.
The wrongful disclosure of confidential census information could lead to criminal penalties.

The 12 CEDCAP projects are at varying stages of planning and design. Nine of the projects began when the program was initiated in October 2014, two of the projects began later in June 2015, and the twelfth project—IT Infrastructure Scale-Up—has not yet started. The 11 ongoing projects have efforts under way to deliver 17 solutions, which are in different phases of planning and design. For 8 of the 17 solutions, the Bureau recently completed an analysis of alternatives to determine whether it would acquire commercial off-the-shelf (COTS) solutions or build them in-house in order to deliver the needed capabilities. On May 25, 2016, the Bureau issued a memorandum documenting its decision to acquire the capabilities using a COTS product. The memorandum also described the process used to select the commercial vendor. Prior to this decision, the Bureau had developed several pilot systems to provide functionality to support the ongoing survey tests, such as the 2016 Census Test. For example, the Survey (and Listing) Interview Operational Control project has been developing a pilot system, referred to as Mojo, that serves as an operational control system for field operations and assigns, controls, tracks, and manages cases. The Mojo pilot was used in the field in the 2015 Census Test and the 2016 Census Test. For the remaining 9 IT solutions, the Bureau has identified the sourcing approach (e.g., buy, build, or use/modify existing system) and has either identified the solution to be implemented or is in the process of evaluating potential solutions. For example, the Electronic Correspondence Portal project is working on combining an existing government-off-the-shelf product with an existing COTS product. According to program officials, these projects are expected to deliver their final production solutions in support of the 2020 Census from March 2019 to March 2020, except for the Centralized Development and Test Environment project, which has not yet determined when it will deliver the final production solution for the 2020 Census. All projects are scheduled to end by September 2020 (see table 2 for more detail).

In 2013, the CEDCAP program office estimated that the program would cost about $548 million to deliver its projects from 2015 to 2020. In July 2015, the Bureau’s Office of Cost Estimation, Analysis, and Assessment completed an independent cost estimate for CEDCAP that projected the projects to cost about $1.14 billion from 2015 to 2020 ($1.26 billion through 2024). Bureau officials reported that, as of March 2016, the projects had collectively spent approximately $92.1 million—17 percent of the total program office estimate and 8 percent of the independent cost estimate. According to Bureau officials, the CEDCAP program is currently budgeting its projects to the 2013 program office estimate. Table 2 summarizes the status of the 12 CEDCAP projects and their associated actual or potential IT solutions, and provides the cost estimates for each project, as well as the amount project officials reported spending as of March 2016.

According to CMMI-ACQ and CMMI-DEV, an effective project monitoring and control process provides oversight of the program’s performance in order to allow appropriate corrective actions if actual performance deviates significantly from planned performance.
Key activities in project monitoring and control include determining progress against the plan by comparing actual cost and schedule against the documented plan for the full scope of the project and communicating the results; identifying and documenting when significant deviations in cost and schedule performance (i.e., deviations from planned cost and schedule that, when left unresolved, preclude the project from meeting its objectives) have occurred; taking timely corrective actions, such as revising the original plan, establishing new agreements, or including additional mitigation activities in the current plan, to address issues when performance deviates significantly from the plan; monitoring the status of risks periodically, which can result in the discovery of new risks, revisions to existing risks, or the need to implement a risk mitigation plan; and implementing risk mitigation plans that include sufficient detail, such as start and completion dates and trigger events and dates, which provide early warning that a risk is about to occur or has just occurred and are valuable in assessing risk urgency.

However, the three selected CEDCAP projects—the Centralized Operational Analysis and Control project, the Internet and Mobile Data Collection project, and the Survey (and Listing) Interview Operational Control project—did not fully implement these practices. Specifically, the Centralized Operational Analysis and Control project fully met two of the practices in monitoring and controlling but partially met the other three practices. The Internet and Mobile Data Collection project and the Survey (and Listing) Interview Operational Control project fully met one of the best practices in monitoring and controlling but partially met the other four best practices.

Determining progress against the plan—Each of the three projects partially met this practice. Specifically, the three projects meet weekly to monitor the current status of each project and produce monthly reports that document cost and schedule progress. However, the projects’ planning documents lacked sufficient detail against which to monitor progress because their plans did not include details about the full scope of their projects for their entire life cycles. For example, while project officials have provided key information, such as when build-or-buy decisions were to be made, when the production systems are to be initially released, and when the final systems are to be released to support the 2020 Census, project planning documents for the three projects do not consistently include this information. This is especially problematic because the production systems that these projects are expected to produce need to be implemented in time for the 2018 end-to-end system integration test, which is to begin in August 2017 (in about a year). Bureau officials agreed with our concerns, and in June 2016 they stated that they were in the process of updating the project plans and expected to be done by August 2016. It will be important that these plans include the full scope of these projects to enable the project managers and the CEDCAP program manager to determine progress relative to the full scope of the projects.

Document significant deviations in performance—Each of the three selected projects partially met this practice. Specifically, the Bureau’s monthly progress reports capture schedule and cost variances and document when these variances exceed the threshold for significant deviation, which is 8 percent.
For example, the Internet and Mobile Data Collection project had a cost variance of 20 percent in September 2015 and the Survey (and Listing) Interview Operational Control project had a cost variance of 25 percent in September 2015, which were flagged by the projects as exceeding the significant deviation threshold. However, the projects do not have up-to-date cost estimates with which to accurately identify significant deviations in cost performance. Specifically, the projects are measuring whether there are significant deviations in costs against their budgeted amounts, which are based on a 2013 CEDCAP program office estimate. This estimate was prepared well before the program began in October 2014 and, according to program officials, is out of date because it was developed based on very early assumptions and limited details about the projects. For example, key information such as program requirements was not available when the 2013 program cost estimate was prepared. However, since then more information about the program has become available. For example, program requirements were finalized in May 2015. Program officials recognized that the program office estimate needs to be updated and stated that they planned to update their estimate once the program and its projects are better understood and more complete information is available, including the recently completed build-or-buy decisions. However, until the program cost estimate is updated, the program lacks a basis for monitoring true cost variances for these three projects.

Taking corrective actions to address issues when necessary—Each of the three selected projects met this practice. Specifically, the CEDCAP program has established a process for taking corrective actions to address issues when needed and, as of April 2016, Bureau officials stated they have not needed to take any corrective actions to address CEDCAP program issues. For example, while we found several significant deviations in cost and schedule for the three projects in the monthly progress reports, these did not require corrective actions because they were due to delays in contract payments, contract awards, and other obligations for hardware and software outside the control of the CEDCAP program office.

Monitoring the status of risks periodically—One project (the Centralized Operational Analysis and Control project) fully met this practice and the other two projects partially met this practice. Specifically, the three projects monitor the status of their risks in biweekly project status meetings and monthly risk review board meetings, have established risk registers, and regularly update the status of risks in their registers. However, while according to Bureau officials the projects are to document updates on the status of their risks in their respective risk registers, the Internet and Mobile Data Collection and Survey (and Listing) Interview Operational Control projects do not consistently document status updates. For example, as of April 2016, all four of the Internet and Mobile Data Collection project’s medium-probability, medium-impact risks had not been updated in its risk register in 9 months. Also as of April 2016, two of five of the Survey (and Listing) Interview Operational Control project’s medium-probability, medium-impact risks had not been updated in the risk register in 4 months. Bureau officials recognized the need to document updates in the risk registers more consistently and stated that efforts were under way to address this.
In May 2016, Bureau officials provided updates on the Internet and Mobile Data Collection project risk list that showed improvement in documenting the status of risks for that project for 1 month. In June 2016, Bureau officials stated that they had taken additional steps to improve how risk registers are updated by, for example, conducting training sessions for project managers on the CEDCAP program’s expectations for monitoring and updating the status of risks, and reviewing project risk registers monthly to ensure compliance. Bureau officials also stated that they planned to release an updated version of the CEDCAP Risk Management Plan that more clearly outlines the process and expectations for monitoring and updating project risk registers by the end of August 2016. While the officials have taken positive steps, until actions are fully implemented to ensure that the projects are consistently documenting updates to the status of risks in a repeatable and ongoing manner, they will not have comprehensive information on how risks are being managed. Implementing risk mitigation plans—Each of the three selected projects partially met this practice. As of October 2015, the three projects had developed basic risk mitigation steps for each of the risks associated with the projects that required a mitigation plan. However, contrary to industry best practices and the Bureau’s risk management guidance, these risk mitigation plans lacked important details. Specifically, none of the mitigation plans for the three projects contained start or completion dates. Additionally, the Centralized Operational Analysis and Control and the Internet and Mobile Data Collection projects did not have any trigger events for their risks that require risk triggers (e.g., those risks that exceed a predefined exposure threshold). For example, the Centralized Operational Analysis and Control project had an active issue in March 2016 that if late requirements were given to the project, it would face delays in delivering for the 2016 Census test. This risk did not contain a detailed risk mitigation plan or a trigger date or description. In February 2016, Bureau officials recognized that there were issues with their risk management process and stated that they were working on addressing them. In April 2016, Bureau officials stated that they had revised their risk management process to increase the threshold for requiring risk mitigation plans and trigger events. However, the April 2016 risk registers did not contain any risks that exceeded the new risk threshold and, therefore, none of the risks required risk mitigation plans and trigger events. As a result, it is unclear to what extent the Bureau has improved its practices in developing detailed risk mitigation plans and assigning trigger events when required. Until the three projects establish detailed risk mitigation plans and trigger events for all of their risks that require both, they will not be able to identify potential problems before they occur and mitigate adverse impacts to project objectives. The CEDCAP and 2020 Census programs are intended to be on parallel implementation tracks and have major interdependencies; however, the interdependencies between these two programs have not always been effectively managed. Specifically, CEDCAP relies on 2020 Census to be one of the biggest consumers of its enterprise systems, and 2020 Census relies heavily on CEDCAP to deliver key systems to support its redesign. 
Thus, CEDCAP is integral to helping the 2020 Census program achieve its estimated $5.2 billion cost savings goal. Accordingly, as reported in the President’s Budget for Fiscal Year 2017, over 50 percent of CEDCAP’s funding for fiscal year 2017 ($57.5 million of the requested $104 million) is expected to come from the 2020 Census program. Nevertheless, while both programs have taken a number of steps to coordinate, such as holding weekly schedule coordination meetings and participating in each other’s risk review board meetings, the two programs lack processes for effectively integrating their schedule dependencies, integrating the management of interrelated risks, and managing requirements. Without effective processes for managing these interdependencies, the Bureau is limited in its ability to understand the work needed by both programs to meet agreed upon milestones, mitigate major risks, and ensure that requirements are appropriately identified. According to GAO’s Schedule Assessment Guide, major handoffs between programs should be discussed and agreed upon and all interdependencies in the programs’ schedules should be clearly identified and logically linked so that dates can be properly calculated. The guide also specifies that changes in linked activities can automatically reforecast future dates, resulting in a dynamic schedule, and that attempting to manually resolve incompatible schedules in different software can become time-consuming and expensive, and thus should be avoided. Moreover, constantly updating a schedule manually defeats the purpose of a dynamic schedule and can make the schedule particularly prone to error. According to best practices, if manual schedule reconciliation cannot be avoided, the parties should define a process to preserve integrity between the different schedule formats and to verify and validate the converted data whenever the schedules are updated. About half of CEDCAP’s major product releases (7 of 13) are intended to align with and support the major tests and operations of the 2020 Census. Accordingly, the CEDCAP and 2020 Census programs have both established master schedules that contain thousands of milestones and tens of thousands of activities through 2020 Census production and have identified major milestones within each program that are intended to align with each other. In addition, both program management offices have established processes for managing their respective master schedules. However, the CEDCAP and 2020 Census programs maintain their master schedules using different software where dependencies between the two programs are not automatically linked and are not dynamically responsive to change. Consequently, the two programs have been manually identifying activities within their master schedules that are dependent on each other, and rather than establishing one dependency schedule, as best practices dictate, the programs have each developed their own dependency schedule and meet weekly with the intent of coordinating these two schedules. In addition, the programs’ dependency schedules only include near-term schedule dependencies, and not future milestones through 2020 Census production. For example, as of February 2016, the dependency schedules only included tasks associated with the CEDCAP product release in support of the 2020 Census program’s 2016 Census Test through July 2016. 
According to Bureau officials, they are currently working to incorporate activities for the next set of near-term milestones in the dependency schedules, which are to support the 2016 Address Canvassing Test. Nonetheless, this process has proven to be ineffective, as it has contributed to the misalignment between the programs’ schedules. For example:

The CEDCAP program originally planned to complete build-or-buy decisions for several capabilities by October 2016, while the 2020 Census timeline specified that these decisions would be ready by June 2016. In November 2015, CEDCAP officials stated that they recognized this misalignment and decided to accelerate certain build-or-buy decisions to align with 2020 Census needs.

As of April 2016, while CEDCAP’s major product releases need to be developed and deployed to support the delivery of major 2020 Census tests, CEDCAP’s releases and 2020 Census major test milestones were not always aligned to ensure CEDCAP releases would be available in time. For example, development of CEDCAP release 7, which was intended to support the 2017 Census Test, was not scheduled to begin until almost a month after the 2017 Census Test was expected to begin (December 2016), and was not planned to be completed until about 2 months after the 2017 Census Test was expected to end (July 2017). Bureau officials acknowledged that CEDCAP release dates needed to be revised to accurately reflect the program’s current planned time frames and to appropriately align with 2020 Census time frames. Officials provided updated documentation in June 2016, which was intended to better align CEDCAP and 2020 Census time frames. However, officials also stated in June that, following the May 2016 decision to acquire many of the CEDCAP solutions, the implementation schedule is being negotiated with the selected vendor and is subject to change.

Adding to the complexity of coordinating the two programs’ schedules, as we testified in November 2015, several key decisions by the 2020 Census program are not planned to be made until later in the decade, which may impact CEDCAP’s ability to deliver those future requirements and have production-ready systems in place in time to conduct end-to-end testing, which is to begin in August 2017. For example, the Bureau does not plan to decide on the full complement of applications, data, infrastructure, security, monitoring, and service management for the 2020 Census—referred to as the solutions architecture—until September 2016. The Bureau also does not plan to finalize the expected response rates for all self-response modes, including how many households it estimates will respond to the 2020 survey using the Internet, telephone, and paper, until October 2017. Figure 4 illustrates several IT-related decisions that are not scheduled to be made until later in the decade and may impact CEDCAP’s ability to prepare for the end-to-end test and 2020 Census.

Further exacerbating the difficulties of managing separate dependency schedules is that, as of May 2016 (a year and a half into the CEDCAP program), the two programs’ process for managing the dependencies had not been documented. In response to our draft report, program officials provided documentation of this process. However, as previously mentioned, the current process is ineffective; thus, documenting the as-is process does not help the Bureau.
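The logically linked, dynamic dependencies that the Schedule Assessment Guide calls for can be illustrated with a brief sketch. In the Python example below, a dependent activity derives its dates from its predecessor, so a slip in the predecessor is reforecast automatically rather than reconciled by hand; the activity names, durations, and dates are hypothetical and do not reflect either program's actual schedule.

```python
# A minimal sketch of a logically linked, dynamic schedule dependency: the
# dependent activity derives its dates from its predecessor, so a slip in the
# predecessor is reforecast automatically instead of being reconciled by hand.
# Activity names, durations, and dates are hypothetical.
from datetime import date, timedelta

class Activity:
    def __init__(self, name, duration_days, predecessor=None, planned_start=None):
        self.name = name
        self.duration = timedelta(days=duration_days)
        self.predecessor = predecessor      # finish-to-start link, if any
        self.planned_start = planned_start  # used only when there is no predecessor

    @property
    def start(self):
        # A dependent activity cannot start before its predecessor finishes.
        return self.predecessor.finish if self.predecessor else self.planned_start

    @property
    def finish(self):
        return self.start + self.duration

release = Activity("CEDCAP product release (hypothetical)", 60,
                   planned_start=date(2016, 10, 1))
census_test = Activity("2020 Census test (hypothetical)", 30, predecessor=release)

print("Test start:", census_test.start, "finish:", census_test.finish)

# If the release slips by 30 days, the dependent test dates reforecast
# automatically because the link is computed rather than copied by hand.
release.duration = timedelta(days=90)
print("Test start after slip:", census_test.start, "finish:", census_test.finish)
```

A single dependency schedule built on computed links of this kind is what keeps interdependent milestones from drifting apart when one program's dates change.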
Further, while the programs intend to revise their respective schedules to correct the existing misalignments identified in this report, continuing to rely on an ineffective process for manually aligning their milestones will likely lead to future misalignment as changes occur. Thus, until the Bureau modifies its current process to ensure complete alignment between the 2020 and CEDCAP programs by, for example, maintaining a single dependency schedule, it will be limited in its ability to ensure that both programs are planning and measuring their activities according to the same agreed upon time frames. Moreover, until program officials document the modified process for managing the schedule dependencies, the Bureau cannot ensure that it has a repeatable process and that an integrated dependency schedule (if established) will stay current and help avoid future misalignments. According to GAO best practices on enhancing and sustaining collaboration among federal agencies, to achieve a common outcome, stakeholders should work together to jointly define and agree on their respective roles and responsibilities, and establish strategies that work in concert with each other or are joint in nature. Additionally, according to CMMI-ACQ and CMMI-DEV, effective risk management calls for stakeholder collaboration on identifying and mitigating risks that could negatively affect their efforts. Both the CEDCAP and 2020 Census programs have taken steps to collaborate on identifying and mitigating risks. For example, both programs have processes in place for identifying and mitigating risks that affect their respective programs, facilitate risk review boards, and have representatives attend each other’s risk review board meetings to help promote consistency. However, these programs do not have an integrated list of risks (referred to as a risk register) with agreed-upon roles and responsibilities to jointly track risks that heavily impact them and instead separately track these risks. This decentralized approach introduces two key problems. First, there are inconsistencies in tracking and managing interdependent risks. Specifically, selected risks were recognized by one program’s risk management process and not the other. This included the following examples as of March 2016: The CEDCAP program identified the lack of real-time schedule linkages as a high probability, high-impact risk in its risk register, which as of March 2016 had been realized and was considered an issue for the program. However, the 2020 Census program had not recognized this as a risk in its risk register. While CEDCAP had identified the ability to scale systems to meet the needs of the Decennial Census as a medium-probability, high-impact risk in its risk register, the 2020 Census program had not recognized this as a risk in its risk register. The CEDCAP program had identified the need to define how the Bureau will manage and use cloud services to ensure successful integration of cloud services with existing infrastructure as a low probability, high-impact risk in its risk register; however, the 2020 Census program had not recognized the adoption of cloud services as a formal risk in its risk register. This is especially problematic as the 2020 Census program recently experienced a notable setback regarding cloud implementation. 
Specifically, the 2020 Census program was originally planning to use a commercial cloud environment in the 2016 Census Test, which would have been the first time the Bureau used a cloud service in a major census test to collect census data from residents in parts of the country. However, leading up to the 2016 Census Test, the program experienced stability issues with the cloud environment. Accordingly, in March 2016, the 2020 Census program decided to cancel its plans to use the cloud environment in the 2016 Census Test. Officials stated that they plan to use the cloud in future census tests.

According to 2020 Census program officials, they did not consider the lack of real-time schedule linkages to be a risk because they were conducting weekly integration meetings and coordinating with CEDCAP on their schedules to ensure proper alignment. However, as stated previously, attempting to manually resolve incompatible schedules in different software can be time-consuming, expensive, and prone to errors, and the Bureau’s process for managing schedule dependencies between the CEDCAP and 2020 Census programs is ineffective. Regarding the absence of the scalability and cloud services risks from the 2020 Census risk register, 2020 Census program officials acknowledged that the omission was an oversight and that these should have been recognized by the program as formal risks.

The second problem of not having an integrated risk register is that tracking risks in two different registers can result in redundant efforts and potentially conflicting mitigation efforts. For example, both programs have identified in their separate risk registers several common risks, such as risks related to late changes in requirements, integration of systems, human resources, build-or-buy decisions, and cybersecurity. These interdependent risks, tracked in both registers, introduce the potential for duplicative or inefficient risk mitigation efforts and the need for additional reconciliation. Until the Bureau establishes a comprehensive list of risks facing both the CEDCAP and 2020 Census programs, and agrees on the programs’ respective roles and responsibilities for jointly managing this list, it will continue to have multiple sets of inconsistent risk data and will be limited in its ability to effectively monitor and mitigate the major risks facing both programs.

According to CMMI-ACQ and CMMI-DEV, requirements management processes are important for enabling a program to ensure that its set of approved requirements is managed to support the planning and execution needs of the program. Such a process should include steps to review and obtain commitment to the requirements from stakeholders and to manage changes to requirements as customer needs evolve. The Bureau’s Office of Innovation and Implementation serves as the link in managing requirements between the 2020 Census and CEDCAP programs. This office is responsible for gathering and synthesizing business requirements across the Bureau, including from the 2020 Census program, and delivering them to CEDCAP. Additionally, for the 2020 Census program, the Bureau established the 2020 Census Systems Engineering and Integration program office, which is responsible for delivering 2020 Census business requirements to the Office of Innovation and Implementation. CEDCAP receives the requirements on an incremental basis and, as mentioned previously, builds functionality containing subsets of the requirements in its 40-day increments.
CEDCAP has delivered 12 increments of system functionality to support the first three releases. However, as of April 2016, the Office of Innovation and Implementation’s process for collecting and synthesizing requirements, obtaining commitment to those requirements from stakeholders, and managing changes to the requirements had been drafted but not yet finalized. In July 2016, Bureau officials stated that, due to the recent selection of a commercial vendor to deliver many of the CEDCAP capabilities, they do not plan to finalize this process until January 2017. Additionally, as of April 2016, the 2020 Census Systems Engineering and Integration program had not yet finalized its program management plan, which outlines, among other things, how it is to establish the requirements to be delivered to the Office of Innovation and Implementation, which are then to be delivered to CEDCAP. According to program officials, they have been working on a draft of this plan and expect it to be finalized by the end of August 2016. As a result, the Bureau has developed three CEDCAP releases without having a fully documented and institutionalized process for collecting those requirements.

In addition, the 2020 Census program identified about 2,500 capability requirements needed for the 2020 Census; however, there are gaps in these requirements. Specifically, we determined that of the 2,500 capability requirements, 86 should be assigned to a test prior to the 2020 Census but were not. These included 64 requirements related to the redistricting data program, 10 requirements related to data products and dissemination, and 12 requirements related to non-ID response validation. Bureau officials stated that the 74 redistricting data program and data products and dissemination requirements have not yet been assigned to a Census test because they have not yet gone through the Bureau’s quality control process, which is planned for later this calendar year. Regarding the 12 non-ID response validation requirements, Bureau officials stated that once this area is better understood, a more complete set of requirements will be established, and then they will assign the requirements to particular tests, as appropriate. As of April 2016, the Bureau was in the early stages of conducting research in this area. Thus, it has not tested non-ID response validation in the 2013, 2014, or 2015 Census tests. These tests were intended to, among other things, help define requirements around critical functions.

With about a year remaining before the 2018 Census end-to-end test begins, the lack of experience and specific requirements related to non-ID response validation is especially concerning, as incomplete and late definition of requirements proved to be serious issues for the 2010 Census. Specifically, leading up to the 2010 Census, we reported in October 2007 that not fully defining requirements had contributed to both the cost increases and the schedule delays experienced by the failed program to deliver handheld computers for field data collection and, ultimately, to a cost overrun of up to $3 billion. Increases in the number of requirements led to the need for additional work and staffing. Moreover, we reported in 2009 and 2010 that the Bureau’s late development of an operational control system to manage its paper-based census collection operations resulted in system outages and slow performance during the 2010 Census. The Bureau attributed these issues, in part, to the compressed development and testing schedule.
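The gaps described above are, at bottom, a traceability question: whether every capability requirement that must be exercised before 2020 maps to at least one census test. A minimal sketch of such a check follows; the requirement IDs, areas, and test assignments are hypothetical and are not the Bureau's actual requirements data.

```python
# A minimal sketch of a requirements-to-test traceability check: flag any
# capability requirement that should be exercised before 2020 but is not yet
# assigned to a census test. The IDs, areas, and assignments are hypothetical.

requirements = [
    {"id": "REQ-001", "area": "Internet self-response", "tests": ["2017 Census Test"]},
    {"id": "REQ-002", "area": "Redistricting data program", "tests": []},
    {"id": "REQ-003", "area": "Non-ID response validation", "tests": []},
]

def unassigned(reqs):
    """Return requirements that have no pre-2020 test assignment."""
    return [req for req in reqs if not req["tests"]]

for req in unassigned(requirements):
    print(f"{req['id']} ({req['area']}) is not assigned to any census test")
```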
As the 2020 Census continues to make future design decisions and CEDCAP continues to deliver incremental functionality, it is critical to have a fully documented and institutionalized process for managing requirements. Additionally, until measures are taken to identify when the 74 requirements related to the redistricting data program and data products and dissemination will be tested, and to make developing a better understanding of, and identifying requirements related to, non-ID response validation a high and immediate priority, or to consider alternatives to avoid late definition of such requirements, the Bureau is at risk of experiencing issues similar to those it experienced during the 2010 Census. While the Bureau plans to extensively use IT systems to support the 2020 Census redesign in an effort to realize potentially significant efficiency gains and cost savings, this redesign introduces numerous critical information security challenges. Developing policies and procedures to minimize the threat of phishing—Phishing is a digital form of social engineering that uses authentic-looking, but fake, e-mails, websites, or instant messages to get users to download malware, open malicious attachments, or open links that direct them to a website that requests information or executes malicious code. Phishing attacks could target respondents, as well as Census employees and contractors. The 2020 Census will be the first one in which respondents will be heavily encouraged to respond via the Internet. The Bureau plans to highly promote the use of the Internet self-response option throughout the nation and expects, based on preliminary research, that approximately 50 percent of U.S. households will use this option. This will likely increase the risk that cyber criminals will use phishing in an attempt to steal personal information. A report developed by a contractor for the Bureau noted that criminals may pretend to be a census worker, caller, or website, to phish for personal information such as Social Security numbers and bank information. Further, phishing attacks directed at Census employees, including approximately 300,000 temporary employees, could have serious effects. The U.S. Computer Emergency Readiness Team (US-CERT) has recently reported on phishing campaigns targeting federal government agencies that are intended to install malware on government computer systems. These could act as an entry point for attackers to spread throughout an organization’s entire enterprise, steal sensitive personal information, or disrupt business operations. To minimize the threat of phishing, organizations such as US-CERT and the National Institute of Standards and Technology (NIST) recommend several actions for organizations, including communicating with users. Additionally, as we previously reported, in 2015 the White House and the Office of Management and Budget identified anti-phishing as a key area for federal agencies to focus on in enhancing their information security practices. Ensuring that individuals gain only limited and appropriate access to 2020 Census data—The Decennial Census plans to enable a public-facing website and mobile devices to collect personally identifiable information (PII) (e.g., name, address, and date of birth) from the nation’s entire population—estimated to be over 300 million. In addition, the Bureau is planning to obtain and store administrative records containing PII from other government agencies to help augment information that enumerators did not collect. 
Further, the 2020 Census will be highly promoted and visible throughout the nation, which could increase its appeal to malicious actors. Specifically, cyber criminals may attempt to steal personal information collected during and for the 2020 Decennial Census through techniques such as social engineering, sniffing of unprotected traffic, and malware installed on vulnerable machines. We have reported on the challenges that advances in technology pose to the federal government and the private sector in ensuring the privacy of personal information. For example, in our 2015 High Risk List, we expanded one of our high-risk areas—ensuring the security of federal information systems and cyber critical infrastructure—to include protecting the privacy of PII. Technological advances have allowed both government and private sector entities to collect and process extensive amounts of PII more effectively. However, the number of reported security incidents involving PII at federal agencies has increased dramatically in recent years. Because of these challenges, we have recommended, among other things, that federal agencies improve their response to information security incidents and data breaches involving PII, and consistently develop and implement privacy policies and procedures. Accordingly, it will be important for the Bureau to ensure that only respondents and Bureau officials are able to gain access to this information, and that enumerators and other employees only have access to the information needed to perform their jobs.

Adequately protecting mobile devices—The 2020 Census will be the first one in which the Census Bureau will provide mobile devices to enumerators to collect PII from households that did not self-respond to the survey. The Bureau plans to use a contractor to provide approximately 300,000 census-taking-ready mobile devices to enumerators. The contractor will be responsible for, among other things, the provisioning, shipping, storage, and decommissioning of the devices. The enumerators will use the mobile devices to collect data during non-response follow-up activities. Many threats to mobile devices are similar to those facing traditional computing devices; however, threats to and attacks on mobile devices are facilitated by vulnerabilities in the design and configuration of the devices, as well as the ways consumers use them. Common vulnerabilities include a failure to enable password protection and operating systems that are not kept up to date with the latest security patches. In addition, because of their small size and use outside an office setting, mobile devices are easier to misplace or steal, leaving their sensitive information at risk of unauthorized use or theft. In 2012 we reported on key security controls and practices to reduce vulnerabilities in mobile devices, protect proprietary and other confidential business data that could be stolen from mobile devices, and ensure that mobile devices connected to the organization’s network do not threaten the security of the network itself. For example, we reported that organizations can require that devices meet government specifications before they are deployed, limit storage on mobile devices, and ensure that all data on a device are cleared before the device is disposed of. Doing so can help protect against inappropriate disclosure of sensitive information that is collected on the mobile devices.
Accordingly, we recommended, among other things, that the Department of Homeland Security, in collaboration with the Department of Commerce, establish performance measures related to consumer awareness of mobile security. In September 2013, the Department of Homeland Security addressed this recommendation by developing a public awareness campaign with performance measures related to mobile security.

Ensuring adequate control in a cloud environment—The Bureau has decided to use cloud solutions whenever possible for the 2020 Census; however, as stated previously, it has not yet determined all of the needed cloud capabilities. In September 2014, we reported that cloud computing has both positive and negative information security implications for federal agencies. Potential information security benefits include the use of automation to expedite the implementation of secure configurations on devices, reduced need to carry data on removable media because of broad network access, and low-cost disaster recovery and data storage. However, the use of cloud computing can also create numerous information security risks for federal agencies, including that cloud service vendors may not be familiar with security requirements that are unique to government agencies, such as continuous monitoring and maintaining an inventory of systems. Thus, we reported that, to reduce these risks, it is important for federal agencies considering the use of cloud computing to examine the specific security controls of the provider being evaluated. In addition, in April 2016, we reported that agencies should develop service-level agreements with cloud providers that specify, among other things, the security performance requirements—including data reliability, preservation, privacy, and access rights—that the service provider is to meet. Without these safeguards, computer systems and networks, as well as the critical operations and key infrastructures they support, may be lost; information—including sensitive personal information—may be compromised; and the agency’s operations could be disrupted.

Adequately considering information security when making decisions about the IT solutions and infrastructure supporting the 2020 Census—Decisions about future 2020 Census design features will have security implications that need to be considered as those decisions are made. As described previously, as of April 2016, the Census Bureau still had to make many of the 350 design decisions for the 2020 Census, about half of which are IT-related or partially IT-related. For example, the Bureau has not yet made decisions about key aspects of its IT infrastructure to be used for the 2020 Census, including defining all of the components of the solutions architecture (applications, data, infrastructure, security, monitoring, and service management), deciding whether it will develop a mobile application to enable respondents to submit their survey responses on their mobile devices, and deciding how it plans to use cloud providers. We have previously reported on challenges that the Bureau has had in making decisions in a timely manner. Specifically, in April 2014, and again in April 2015, we noted that key decisions had yet to be made about the 2020 Census and that, as momentum builds toward Census Day 2020, the margin for schedule slippages is getting increasingly slim. The Chief Information Security Officer echoed these concerns, stating that any schedule slippage can affect the time needed to conduct a comprehensive security assessment.
As key design decisions are deferred and the time to make such decisions becomes more compressed, it is important for the Bureau to ensure that information security is adequately considered and assessed when making design decisions about the IT solutions and infrastructure to be used for the 2020 Census. Making certain key IT positions are filled with staff who have appropriate information security knowledge and expertise—As our prior work and leading guidance recognize, having the right knowledge and skills is critical to the success of a program, and mission-critical skills gaps in such occupations as cybersecurity pose a high risk to the nation. Whether within specific federal agencies or across the federal workforce, these skills gaps impede federal agencies in cost-effectively serving the public and achieving results. Because of this, we added strategic human capital management, including cybersecurity human capital, to our High Risk List in 2001, and it remains on that list today. These skills gaps are also a key contributing factor to our high-risk area of ensuring the security of federal information systems. As we reported in February 2015, although steps have been taken to close critical skills gaps in the cybersecurity area, it remains an ongoing problem and additional efforts are needed to address this issue government-wide. We also reported in February 2015 that the Bureau continues to have critical skills gaps, such as in cloud computing, security integration and engineering, enterprise/mission engineering life-cycle, requirements development, and internet data collection. The Bureau has made some progress in addressing its skills gaps and continues to work toward ensuring that key information security skills are in place. However, it has faced longstanding vacancies in key IT positions, such as the Chief Information Officer (vacant from July 2015 to June 2016) and the CEDCAP Chief Security Engineer (vacant since October 2015). Ensuring that key positions are filled with staff who have the appropriate expertise will be important to ensure that security controls are adequately designed in the systems used to collect and store census data. Ensuring that contingency and incident response plans are in place that encompass all of the IT systems to be used to support the 2020 Census—Because of the brief time frame for collecting data during the Decennial Census, it is especially important that systems are available for respondents to ensure a high response rate. Contingency planning and incident response help ensure that if normal operations are interrupted, network managers are able to detect, mitigate, and recover from a service disruption while preserving access to vital information. Implementing important security controls, including policies, procedures, and techniques for contingency planning and incident response, helps to ensure the confidentiality, integrity, and availability of information and systems, even during disruptions of service. However, we have reported on weaknesses across the federal government in these areas. Specifically, in April 2014 we estimated that federal agencies (including the Department of Commerce) had not completely documented actions taken in response to detected incidents reported in fiscal year 2012 in about 65 percent of cases. We made a number of recommendations to improve agencies’ cyber incident response practices, such as developing incident response plans and procedures and testing them.
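The April 2014 finding that agencies had not completely documented response actions for about 65 percent of reported incidents suggests what complete documentation might look like in practice: each incident record capturing detection, mitigation, recovery, and lessons learned. The sketch below is a hypothetical illustration of such a completeness check, not an actual US-CERT or Bureau incident-tracking schema.

```python
from dataclasses import dataclass, field
from typing import List

# The phases mirror the detect/mitigate/recover language used in the
# contingency-planning discussion above; the record layout is hypothetical.
REQUIRED_PHASES = ("detected", "mitigated", "recovered", "lessons_recorded")

@dataclass
class IncidentRecord:
    incident_id: str
    system: str
    completed_phases: List[str] = field(default_factory=list)

    def undocumented_phases(self) -> List[str]:
        return [p for p in REQUIRED_PHASES if p not in self.completed_phases]

def incomplete_incidents(records: List[IncidentRecord]) -> List[str]:
    """List incidents whose response actions are not fully documented."""
    return [r.incident_id for r in records if r.undocumented_phases()]

if __name__ == "__main__":
    records = [
        IncidentRecord("IR-001", "internet response system",
                       ["detected", "mitigated", "recovered", "lessons_recorded"]),
        IncidentRecord("IR-002", "operational control system",
                       ["detected", "mitigated"]),
    ]
    print("Incidents needing follow-up documentation:",
          incomplete_incidents(records))
```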
Adequately training Bureau employees, including its massive temporary workforce, in information security awareness—The Census Bureau plans to hire an enormous temporary workforce during the 2020 Census activities, including about 300,000 temporary employees to, among other things, use contractor-furnished mobile devices to collect personal information from households that have not yet responded to the Census. Because uninformed people can be one of the weakest links when securing systems and networks, information security awareness training is intended to inform agency personnel of the information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. However, ensuring that every one of the approximately 300,000 temporary enumerators is sufficiently trained in information security will be challenging. Providing training to agency personnel, such as this new and temporary staff, will be critical to securing information and systems. Making certain security assessments are completed in a timely manner and that risks are at an acceptable level—According to guidance from NIST, after testing an information system, authorizing officials determine whether the risks (e.g., unaddressed vulnerabilities) are acceptable and issue an authorization to operate. Each of the systems that the 2020 Census IT architecture plans to rely on will need to undergo a security assessment and obtain authorization to operate before it can be used for the 2020 Census. Properly configuring and patching systems supporting the 2020 Census—Configuration management controls ensure that only authorized and fully tested software is placed in operation, software and hardware are updated, information systems are monitored, patches are applied to these systems to protect against known vulnerabilities, and emergency changes are documented and approved. We reported in September 2015 that for fiscal year 2014, 22 of the 24 agencies in our review (including the Department of Commerce) had weaknesses in configuration management controls. Moreover, in April 2015, US-CERT issued an alert stating that cyber threat adversaries continue to exploit common, but unpatched, software products from vendors such as Adobe, Microsoft, and Oracle. Without strong configuration and patch management, an attacker may exploit a vulnerability not yet mitigated, enabling unauthorized access to information systems or enabling users to have access to greater privileges than authorized. The Bureau’s acting Chief Information Officer and its Chief Information Security Officer have acknowledged these challenges and described the Bureau’s plans to address them. For example, the Bureau has developed a risk management framework, which is intended to ensure that proper security controls are in place and provide authorizing officials with details on residual risk and progress to address those risks. In addition, the Bureau has also embedded three security engineers in the 2020 Census program to provide assistance and guidance to project teams. Bureau officials also stated that they are in the process of filling—or plan to fill— vacancies in key positions and intend to hire staff with expertise in key areas, such as cloud computing. To minimize the risk of phishing, Bureau officials note that they plan to contract with a company to monitor the Internet for fraudulent sites pretending to be the Census Bureau. 
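The Bureau's stated plan to monitor the Internet for fraudulent sites pretending to be the Census Bureau could, in simplified form, rely on flagging registered domains that closely resemble legitimate ones. The sketch below uses a generic string-similarity heuristic and hypothetical domain names; it is not the contractor's actual monitoring method, and real monitoring would likely also draw on domain-registration feeds and takedown processes.

```python
from difflib import SequenceMatcher

# Legitimate domain and an illustrative similarity threshold (assumptions).
LEGITIMATE_DOMAIN = "census.gov"
SIMILARITY_THRESHOLD = 0.75

def suspicious_domains(observed_domains, legitimate=LEGITIMATE_DOMAIN,
                       threshold=SIMILARITY_THRESHOLD):
    """Flag observed domains that closely resemble, but do not match, the
    legitimate domain -- a rough stand-in for look-alike detection."""
    flagged = []
    for domain in observed_domains:
        if domain == legitimate:
            continue
        score = SequenceMatcher(None, domain, legitimate).ratio()
        if score >= threshold:
            flagged.append((domain, round(score, 2)))
    return flagged

if __name__ == "__main__":
    # Hypothetical domains seen in monitoring feeds.
    observed = ["census.gov", "census-gov.com", "2020census-survey.net", "example.org"]
    for domain, score in suspicious_domains(observed):
        print(f"possible look-alike: {domain} (similarity {score})")
```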
Continued focus on these considerable challenges will be important as the Bureau begins to develop and/or acquire systems and implement the 2020 design. We have previously reported on Census Bureau weaknesses that are related to many of these information security challenges. Specifically, we reported in January 2013 that the Bureau had a number of weaknesses in its information security controls due, in part, to the fact that it had not fully implemented a comprehensive information security program. Thus, we made 13 public recommendations in areas such as security awareness training, incident response, and security assessments. We also made 102 recommendations to address technical weaknesses we identified related to access controls, configuration management, and contingency planning. As of June 2016, the Bureau had made significant progress in addressing these recommendations. Specifically, it had implemented all 13 public recommendations and 92 of 102 technical recommendations. For example, the Bureau developed and implemented a risk management framework with a goal of better management visibility of information security risks; this framework addressed a recommendation to document acceptance of risks for management review. We have work under way to evaluate whether the 10 remaining recommendations have been fully addressed. These recommendations pertain to access controls and configuration management, and are related to two of the security challenges we previously mentioned—ensuring individuals gain only limited and appropriate access, and properly configuring and patching systems. The Bureau’s progress toward addressing our recommendations is encouraging; however, completing this effort is necessary to ensure that sensitive information is adequately protected and that the challenges we outline in this report are overcome. The CEDCAP program’s 12 projects have the potential to offer numerous benefits to the Bureau’s survey programs, including the 2020 Census program, such as enabling an Internet response option; automating the assignment, controlling, and tracking of enumerator caseloads; and enabling a mobile data collection tool for field work. While the Bureau has taken steps to implement these projects, considerable work remains between now and when its production systems need to be in place to support the 2020 Census end-to-end system integration test—in about a year. Although the three selected CEDCAP projects had key project monitoring and controlling practices in place or planned, such as producing monthly progress reports, gaps exist in other important ways to monitor and control projects, such as the lack of detailed project plans, documentation of risk status updates, and complete risk mitigation plans. While officials plan to update the project plans with more detail, until the program and the selected projects address these other gaps, they are at risk of not adequately monitoring these projects. Given the numerous and critical dependencies between the CEDCAP and 2020 Census programs, their parallel implementation tracks, and the 2020 Census’ immovable deadline, it is imperative that the interdependencies between these programs are effectively managed. However, this has not always been the case. While actions such as weekly meetings to discuss the programs’ respective schedules demonstrate that the programs are trying to coordinate, additional actions would help better align the programs. 
Specifically, until the two programs establish schedules that are completely aligned, develop an integrated list of all interdependent risks, and finalize processes for managing requirements, both are at risk of not delivering as expected. Finally, while the large-scale technological changes for the 2020 Decennial Census introduce great potential for efficiency and effectiveness gains, they also introduce many information security challenges, including educating the public to offset inevitable phishing scams. Continued focus on these considerable security challenges and remaining open recommendations will be important as the Bureau begins to develop and/or acquire systems and implement the 2020 Census design. To ensure that the Bureau is better positioned to deliver CEDCAP, we are recommending that the Secretary of Commerce direct the Director of the Census Bureau to take the following eight actions:
1. Update the CEDCAP program office cost estimate to reflect the current status of the program as soon as appropriate information becomes available.
2. Ensure that updates to the status of risks are consistently documented for CEDCAP’s Internet and Mobile Data Collection and Survey (and Listing) Interview Operational Control projects.
3. Ensure that CEDCAP’s Internet and Mobile Data Collection, Survey (and Listing) Interview Operational Control, and Centralized Operational Analysis and Control projects establish detailed risk mitigation plans on a consistent basis and that the Internet and Mobile Data Collection and Centralized Operational Analysis and Control projects establish trigger events for all relevant risks.
4. Define, document, and implement a repeatable process to establish complete alignment between the CEDCAP and 2020 Census programs by, for example, maintaining a single dependency schedule.
5. Establish a comprehensive and integrated list of all interdependent risks facing the CEDCAP and 2020 Census programs, and clearly identify roles and responsibilities for managing this list.
6. Finalize documentation of processes for managing requirements for CEDCAP.
7. Identify when the 74 requirements related to the redistricting data program and data products and dissemination will be tested.
8. Make developing a better understanding of and identifying requirements related to non-ID response validation a high and immediate priority, or consider alternatives to avoid late definition of such requirements.
We received written comments on a draft of this report from the Department of Commerce, which are reprinted in appendix II. In its comments, the department agreed with all eight of our recommendations and indicated that it will be taking actions in response to them. The department also stated that for several of the recommendations it believed that some additional context should be included in our report, which we discuss below. First, the department reported that the Census Bureau decided on May 25, 2016, to use a commercial off-the-shelf solution in combination with in-house solutions for the data collection component of the 2020 Census, which is to be delivered by the CEDCAP program. We updated relevant statements in our report to reflect this recent decision.
In response to our first recommendation to update the CEDCAP program office cost estimate to reflect the current status of the program as soon as appropriate information becomes available, the department agreed and noted that with the recent May 2016 decision, the Bureau has begun work on new program life cycle cost estimates. This is a positive step forward, and it will be important that the Bureau prepares a reliable estimate in a timely manner in order to have a basis for monitoring true cost variances for the three selected CEDCAP projects we reviewed. The department agreed with our fourth recommendation to define, document, and implement a repeatable process to establish complete alignment between CEDCAP and the 2020 Census program by, for example, maintaining a single dependency schedule. Specifically, the department stated that the Bureau must maintain schedule alignment between the 2020 Census and CEDCAP programs through a single integrated schedule. The department further stated that the 2020 Census is the program of interest and, as such, it must and will drive the schedule for all solutions that support it, including CEDCAP solutions. The department also stated that the 2020 Census manages its master schedule through the Primavera software package, as recommended earlier in the decade by GAO. However, GAO does not make recommendations on the use of specific software packages, and thus did not make such a recommendation to the Bureau. The Bureau’s selection of a software tool should have been based on an assessment of the costs, benefits, and risks associated with alternative solutions. Regardless of the software packages used, the 2020 Census schedule is dependent on the delivery of CEDCAP solutions and, as we state in the report, the process for integrating the schedule dependencies between these two programs is ineffective. Thus, we maintain that until the Bureau modifies its current process to ensure complete alignment between the 2020 Census and CEDCAP programs by, for example, maintaining a single dependency schedule, it will be limited in its ability to ensure that both programs are planning and measuring their activities according to the same agreed-upon time frames. In response to our fifth recommendation to establish a comprehensive and integrated list of all interdependent risks facing the CEDCAP and 2020 Census programs, the department agreed and stated that the 2020 Census program should better monitor interdependent risks through an integrated risk register. The department further stated that it uses an enterprise risk management tool to enable access to an integrated list of all active risks, including interdependent risks. It also stated that linkages between the two programs’ risk registers can be flagged and tracked using these processes and then used to ensure the same process for responding to emerging risks are followed. However, although the Bureau’s comments discuss capabilities of its enterprise risk management tool that could help address our recommendation, during our review the two programs’ interdependent risks were not being linked or tracked in an integrated manner, and we have not received additional evidence to demonstrate that further actions had been taken by the Bureau to address this. The department agreed with our seventh recommendation to identify when the 74 requirements related to redistricting data program and data products and dissemination will be tested. 
In its letter, the department stated that the Bureau had been conducting research on the highest- priority areas and activities relative to core operational and cost-efficiency requirements due to budget limitations this decade, but that planning and requirements development for these two areas are now under way. The Bureau intends to ensure that any testing needs are prioritized for inclusion in the 2018 end-to-end census test. Full implementation of our recommendation should help ensure that the requirements are sufficiently tested prior to the 2020 Census. In response to our eighth recommendation to make identifying requirements related to non-ID response validation a high and immediate priority, or consider alternatives to avoid late definition of such requirements, the department agreed that non-ID response requirements should continue to receive attention and priority. The department also noted that non-ID response validation operations have been conducted for several censuses, using a paper process, and that they validated these responses prior to tabulating the data to ensure that the response was legitimate and not duplicative. However, while the Bureau has prior experience conducting non-ID response validation operations in a controlled, small-scale paper-based environment, 2020 will be the first decennial census that the Bureau introduces an Internet response option on a wide scale, as well as the first time the Bureau attempts non-ID response collection via the Internet. According to Bureau officials, the use of the Internet to collect non-ID responses introduces the potential for a much higher volume of responses that will need to be validated than if the Bureau were to use a paper-based non-ID approach. The Bureau has also reported that Internet-based non-ID response collection increases the likelihood of fraudulent responses. Given that the Bureau is in the early stages of conducting research and developing requirements in this area and there is only a year remaining before the 2018 Census end-to-end test begins, we maintain that the lack of experience and specific requirements related to non-ID response validation is highly concerning, and our recommendation to make this a high and immediate priority or consider alternative approaches needs to be implemented. The department also stated that the Bureau has a long history of and demonstrated commitment to ensuring the confidentiality of the information provided by the public. It stated that implementing strong protections for all of its solutions and conforming to leading cybersecurity and fraud prevention best practices will be paramount for the self- response and field data collection operations. We agree and are encouraged by the significant progress the Bureau has made in addressing our prior recommendations on information security weaknesses. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Commerce, the Director of the U.S. Census Bureau, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4456 or Chac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to (1) describe the status of the 12 Census Enterprise Data Collection and Processing (CEDCAP) projects, (2) evaluate the extent to which the U.S. Census Bureau (Bureau) is implementing best practices in monitoring and controlling selected CEDCAP projects, (3) determine the extent to which the Bureau is adequately managing the interdependencies between the CEDCAP and 2020 Census programs, and (4) describe the key information security challenges the Bureau faces in implementing the 2020 Census design. To describe the status of the 12 CEDCAP projects, we reviewed relevant CEDCAP program and project documentation, such as the transition plan, segment architecture, project charters, monthly progress reports, and the program office cost estimate and independent cost estimate. We used the information in this documentation to summarize the 12 projects in terms of project objectives, overall time frames, estimated costs, and amount spent to date, among other things. We also interviewed Bureau officials, including the CEDCAP program manager, on the status and plans of all 12 projects. To evaluate the extent to which the Bureau is implementing best practices in monitoring and controlling selected CEDCAP projects, we selected three projects based on those that Bureau officials identified as being the highest priority for the 2020 Census—(1) Centralized Operational Analysis and Control Project, (2) Internet and Mobile Data Collection Project, and (3) Survey (and Listing) Interview Operational Control Project. We reviewed program management documentation for the selected projects against project monitoring and controlling best practices identified by the Software Engineering Institute’s Capability Maturity Model® Integration for Acquisition (CMMI®-ACQ) and for Development (CMMI-DEV). These key practices included determining progress against the plan, documenting significant deviations in performance, taking corrective actions to address issues when necessary, monitoring the status of risks periodically, and implementing the risk mitigation plan. Specifically, we analyzed program management documentation, including the program management plan, schedule management plan, risk management plan, transition plan, and segment architecture. We also analyzed documentation for each of the three selected projects, including project charters, schedules, risk registers, and monthly progress reports. Further, we interviewed Bureau officials, including the CEDCAP program manager and the project managers for each of the selected projects, on their efforts to manage these projects. We assessed the evidence against the best practices to determine whether each project fully, partially, or did not meet the best practices. Specifically, “met” means that the Bureau provided complete evidence that satisfies the entire criterion, “partially met” means the Bureau provided evidence that satisfies some but not all of the criterion, and “not met” means the Bureau provided no evidence that satisfies any of the criterion. To determine the extent to which the Bureau is adequately managing the interdependencies between the CEDCAP and 2020 Census programs, we compared program documentation related to managing interdependencies against best practices identified in CMMI-ACQ and CMMI-DEV, as well as by GAO. 
Specifically, we analyzed relevant documentation from both programs, such as risk management plans, program-level risk registers, master schedules, dependency schedules, program management plans, and requirements management documentation. We also reviewed the 2020 Census Operational Plan, artifacts from meetings of the CEDCAP and 2020 Census Executive Steering Committees, and presentations from the 2020 Census Program Management Review meetings. For assessing schedule dependencies between the CEDCAP and 2020 Census programs, we reviewed both programs’ master schedules and other program planning documents that contained major milestones and compared the dates against each other. We identified any potential misalignment in major milestones between the two programs and discussed these with CEDCAP and 2020 Census program officials. We summarize these misalignments in our report. We also interviewed Bureau officials from the CEDCAP and 2020 Census programs, including the CEDCAP program manager, Associate Director of Decennial Census Programs, the Chief of the Bureau’s Office of Innovation and Implementation, and the Bureau’s acting Chief Information Officer, on their approach to managing schedule, risk, and requirement interdependencies between the two programs. To describe the key information security challenges the Bureau faces in implementing the 2020 Census design, we reviewed documentation on the 2020 Census design—including the 2020 Census Operational Plan, CEDCAP and 2020 Census program risk registers, and a report developed by a contractor for the 2020 Census—and interviewed staff within the Bureau’s Office of the Chief Information Security Officer. We developed a list of key assumptions on the design of the 2020 Census based on the documentation and input from Bureau officials. We also reviewed reports by GAO and others on information security challenges faced across the federal government. We synthesized the information in these reports to determine which security practices were most important given the design assumptions of the 2020 Census, to develop an initial list of key challenges. We then obtained input on our initial list of challenges from relevant experts and the Census Bureau. Specifically, we identified relevant experts within two of the Bureau’s key advisory groups—the National Academy of Sciences and Census Scientific Advisory Committee. These advisory groups consist of academic and industry experts from various fields, including information technology, and they meet with the Bureau regularly to provide feedback on various areas, including the 2020 Census program. We also identified relevant experts within the information security field on GAO’s Executive Council on Information Management and Technology, including the Chair of the Association for Computing Machinery’s Committee on Computers and Public Policy; the Executive Director for the National Association of State Chief Information Officers; and the Executive Director of the Center for Education and Research in Information Assurance and Security. We provided our list of key information security challenges to these experts, and obtained their perspectives. Finally, we provided the list to the Census Bureau’s Acting Chief Information Officer and the Chief Information Security Officer, to gain their feedback on our list and allow them the opportunity to respond with the Bureau’s plans for the challenges. 
We also determined the Bureau’s progress in addressing recommendations from our 2013 public and limited official use only reports by reviewing Bureau documentation, such as the Bureau’s Risk Management Framework, IT Security Program Policy, and guidelines and procedures for incident response plan tests. We compared the agency documentation to the relevant information security best practices for each of the recommendations, to determine how many recommendations had been implemented. We conducted this performance audit from October 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff made key contributions to this report: Shannin G. O’Neill (Assistant Director), Jeanne Sung (Analyst in Charge), Andrew Beggs, Chris Businsky, Juana Collymore, Lee McCracken, and Kate Sharkey.
The Department of Commerce's U.S. Census Bureau plans to significantly change the methods and technology it uses to count the population with the 2020 Decennial Census. The Bureau's redesign of the census relies on the acquisition and development of many new and modified systems. Several of the key systems are to be provided by an enterprise-wide initiative called CEDCAP, which is a large and complex modernization program intended to deliver a system-of-systems for all survey data collection and processing functions. GAO's objectives for this review included (1) evaluating the extent to which the Bureau is implementing best practices in monitoring and controlling three selected CEDCAP projects, (2) determining the extent to which the Bureau is adequately managing the interdependencies between the CEDCAP and 2020 Census programs, and (3) describing key information security challenges the Bureau faces in implementing the 2020 Census design. GAO selected the three high-priority projects planned for the 2020 design; reviewed Bureau documentation such as project plans and schedules and compared them against relevant guidance; and analyzed information security reports and documents. The three selected Census Enterprise Data Collection and Processing (CEDCAP) projects (of 12 total) in GAO's review partially met best practices for monitoring and controlling. For example, the projects fully met the best practice of establishing a process for taking corrective actions if issues are identified, but they did not fully meet the practice of identifying significant performance deviations. Until project officials implement missing practices, they will be limited in their abilities to monitor and control costs, schedules, and performance. The 2020 Census program is heavily dependent upon CEDCAP to deliver the key systems needed to support the 2020 Census redesign. However, while the two programs have taken steps to coordinate their schedules, risks, and requirements, they lacked effective processes for managing their interdependencies. Specifically: Among tens of thousands of schedule activities, the two programs are expected to manually identify activities that are dependent on each other, and rather than establishing one integrated dependency schedule, the programs maintain two separate dependency schedules. This has contributed to misalignment in milestones between the programs. The programs do not have an integrated list of interdependent program risks, and thus they do not always recognize the same risks that impact both programs. Among other things, key requirements have not been defined for validating responses from individuals who respond to the census using an address instead of a Bureau-assigned identification number, because of the Bureau's limited knowledge and experience in this area. The lack of knowledge and specific requirements related to this critical function is concerning, given that there is about a year remaining before the Census end-to-end test begins in August 2017 (which is intended to test all key systems and operations to ensure readiness for the 2020 Census). Officials have acknowledged these weaknesses and reported that they are taking, or plan to take, steps to address the issues. However, until these interdependencies are managed more effectively, the Bureau will be limited in understanding the work needed by both programs to meet milestones, mitigate major risks, and ensure that requirements are appropriately identified. 
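The interdependency gaps summarized above, namely two separately maintained dependency schedules and no integrated list of interdependent risks, can be illustrated with a minimal consistency check. In the sketch below, the milestone names, dates, and risk entries are hypothetical; they are not actual CEDCAP or 2020 Census schedule or risk register data.

```python
from datetime import date

# Hypothetical milestone dates as each program might record them; these are
# not actual CEDCAP or 2020 Census schedule data.
cedcap_milestones = {
    "internet_response_system_ready": date(2017, 6, 1),
    "operational_control_system_ready": date(2017, 7, 15),
}
census_2020_milestones = {
    "internet_response_system_ready": date(2017, 5, 1),
    "operational_control_system_ready": date(2017, 7, 15),
}

# Hypothetical program-level risk registers.
cedcap_risks = {"late COTS integration", "cloud capability undefined"}
census_2020_risks = {"late COTS integration", "non-ID validation requirements undefined"}

def milestone_misalignments(a, b):
    """Milestones the two schedules date differently -- the kind of gap a
    single integrated dependency schedule is meant to prevent."""
    return {m: (a[m], b[m]) for m in a.keys() & b.keys() if a[m] != b[m]}

def unshared_risks(a, b):
    """Interdependent risks tracked by only one program's register."""
    return sorted(a ^ b)  # symmetric difference of the two sets

if __name__ == "__main__":
    print("Misaligned milestones:",
          milestone_misalignments(cedcap_milestones, census_2020_milestones))
    print("Risks not tracked by both programs:",
          unshared_risks(cedcap_risks, census_2020_risks))
```

A single integrated dependency schedule and a shared risk register would make a cross-check of this kind unnecessary, because both programs would be planning and measuring against the same source.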
While the large-scale technological changes for the 2020 Decennial Census introduce great potential for efficiency and effectiveness gains, they also introduce many information security challenges. For example, the introduction of an option for households to respond using the Internet puts respondents at greater risk for phishing attacks (requests for information from authentic-looking, but fake, e-mails and websites). In addition, because the Bureau plans to allow its enumerators to use mobile devices to collect information from households that did not self-respond to the survey, it is important that the Bureau ensures that these devices are adequately protected. The Bureau has begun efforts to address many of these challenges; as it begins implementing the 2020 Census design, continued focus on these considerable security challenges will be critical. GAO is making eight recommendations to the Department of Commerce in the areas of project monitoring and control and in managing interdependencies related to schedule, risk, and requirements. The department agreed with all eight recommendations and indicated that it will be taking actions to address them.
Since 1974, the SSI program, under Title XVI of the Social Security Act, as amended, has provided benefits to low-income blind and disabled persons—including adults and children, as well as certain aged individuals—who meet financial eligibility requirements and the definition of disability. For individuals under age 18, a disability is a medically determinable physical or mental impairment that results in marked and severe functional limitations and that is expected to result in death or has lasted or can be expected to last for a continuous period of at least 12 months. Families of children receiving SSI payments are generally required to use the benefit to meet a child’s needs, including food, clothing, and shelter. The maximum federal benefit payment for a child receiving SSI benefits in 2012 is $698 per month, regardless of the severity of the child’s impairment; the average monthly federal child payment was $592. The medical evaluation is conducted under applicable legal requirements and SSA policy. If a child has a severe impairment that does not meet or medically equal any listing, the DDS will then determine whether the impairment results in limitations that functionally equal the listings. To aid in evaluating whether a child is medically eligible, DDS offices review various medical and nonmedical information about the child, such as physician notes, psychological tests, school records, and teacher assessments. In certain situations, such as when the evidence is not sufficient to support a decision as to whether a child is disabled, the DDS may purchase a consultative examination to assist in making the decision. If there is evidence that indicates the existence of a mental impairment, the DDS is supposed to make every reasonable effort to ensure that a qualified psychiatrist or psychologist has completed the medical portion of the case review. After the initial determination has been made and before returning the case file to complete any outstanding nondisability case development, SSA selects a sample of initial determinations for a quality assurance review. If the case is sampled, the reviewing component sends the case to the servicing field office upon completion of its review. If the claimant is determined to be disabled, the field office computes the benefit amount and initiates benefit payment. If the claim is denied, a claimant has 60 days to request that the DDS reconsider its decision. If the claimant is dissatisfied with the reconsideration, he or she may request a hearing before an administrative law judge, whose decision may then be reviewed by SSA’s Appeals Council. When these administrative review options have been exhausted, the claimant may request judicial review by filing an action in a federal district court. If SSA determines that an individual is disabled, the agency is required by law to conduct periodic reviews, known as continuing disability reviews (CDR), to verify the recipient’s continued medical eligibility for receiving benefits in certain circumstances. More specifically, SSA is generally required to perform CDRs (1) during the first year after birth for babies whose low birth weight is a contributing factor to the determination of disability and (2) at least once every 3 years for all other children under age 18 whose conditions are considered likely to improve. DDS offices determine when recipients will be due for CDRs on the basis of their potential for medical improvement, and select and schedule a review date—otherwise known as a “diary date”—for each recipient’s CDR.
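The CDR scheduling rules just described (a review during the first year after birth for low-birth-weight babies, and at least once every 3 years for other children whose conditions are likely to improve) can be sketched as a simple diary-date calculation. The sketch below is a simplification for illustration only; the 7-year interval for other cases and the field names are assumptions, not SSA's actual scheduling logic.

```python
from datetime import date
from typing import Optional

def cdr_diary_date(decision_date: date, birth_date: Optional[date] = None,
                   low_birth_weight: bool = False,
                   improvement_expected: bool = True) -> date:
    """Illustrative 'diary date' for the next continuing disability review,
    loosely following the scheduling rules described above; this is a
    simplification, not SSA's actual scheduling logic."""
    if low_birth_weight and birth_date is not None:
        # Review during the first year after birth for babies whose low
        # birth weight contributed to the disability determination.
        return birth_date.replace(year=birth_date.year + 1)
    if improvement_expected:
        # At least once every 3 years for other children likely to improve.
        return decision_date.replace(year=decision_date.year + 3)
    # Other cases are reviewed on a longer cycle (the interval is illustrative).
    return decision_date.replace(year=decision_date.year + 7)

if __name__ == "__main__":
    print(cdr_diary_date(date(2012, 3, 15), birth_date=date(2012, 1, 10),
                         low_birth_weight=True))   # due by 2013-01-10
    print(cdr_diary_date(date(2012, 3, 15)))       # due by 2015-03-15
```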
At the time of these reviews, the child’s representative payee generally must present evidence that the child is and has been receiving medically necessary and available treatment for his or her impairment. SSA is also generally required to redetermine the eligibility of children against the adult criteria for disability after they reach age 18. Since SSI’s inception, a number of policy changes have influenced how SSA makes disability decisions and the extent to which children with mental impairments are eligible to participate in the program. In 1984, Congress mandated the development of new disability standards for individuals with mental impairments and the consideration of the impact of multiple impairments in determining disability, among other things. SSA subsequently expanded the list of mental impairments it considers disabling in 1985 and again in 1990, when SSA added impairments such as ADHD. In 1990, the U.S. Supreme Court decided in Sullivan v. Zebley that SSA’s use of medical listings of impairments for children—without conducting a functional analysis—was incomplete. In response, SSA established “functional equivalence” as a basis for SSI eligibility for children, whereby a child can be found medically eligible for benefits if the child’s impairment limits his or her functional ability to the same degree as described in a listed impairment. In deciding whether an impairment functionally equals the listings, SSA examines how the child functions compared to children of the same age who do not have impairments—rather than basing the decision on the child’s medical diagnosis. The Court’s decision also resulted in the introduction of the individualized functional assessment. This assessment was intended to be comparable to SSA’s method for evaluating adult impairments and to broaden the evaluation of disability in children with physical and mental impairments to include the effects of impairments on a child’s ability to perform age-appropriate activities on a day-to-day basis. Awards to children, especially those with mental impairments, increased dramatically for several years following the Sullivan v. Zebley decision, due partly to SSA readjudicating nearly 300,000 determinations made between January 1980 and February 1991 under the revised disability criteria. By 1994, SSA had reprocessed the majority of these cases and subsequently returned to processing its normal caseloads. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) changed the disability standard for children, and the act was expected to reduce the number of awards. However, awards to children with mental impairments began to increase again shortly after the legislation was enacted (see fig. 1). The number of children applying for and receiving SSI benefits due to a mental impairment has increased for more than a decade, and these children comprise a growing majority of all child recipients on the SSI disability rolls. While not all such children who are deemed medically eligible ultimately meet SSI’s financial eligibility requirements, the number of children applying for SSI benefits due to a mental impairment increased from 187,052 in fiscal year 2000 to 315,832 in fiscal year 2011 (a 69 percent increase). Despite this increase, SSA data indicate that the agency has denied a majority of these child applicants each year.
In fact, for initial determinations in fiscal years 2000 to 2011, the average denial rates for children with physical and mental impairments were about 63 and 54 percent, respectively, and allowance rates have remained relatively stable over time for both groups of children. SSA data also showed that since fiscal year 2000, children with mental impairments represented the majority of all child applications and allowances for SSI benefits (see fig. 2). SSA data show that, for those children with mental impairments who apply, the number of children found medically eligible for benefits has increased for almost every mental impairment category—such as speech and language delay and mood disorder—for fiscal years 2000 to 2010, with the exception of intellectual disability. SSA data also show that the three most prevalent primary mental impairments among those children found medically eligible in fiscal year 2011 were (1) ADHD, (2) speech and language delay, and (3) autism. SSA data also indicate that applications and allowances for autism saw the largest percentage increases from fiscal years 2000 to 2011 (see fig. 3). (See app. III for trend information related to these three impairments.) DDS examiners rely on a combination of key medical and nonmedical information sources—such as medical records, effects of prescribed medications, school records, and teacher and parent assessments—in determining a child’s medical eligibility for benefits. Several DDS officials we interviewed said that when making a determination, they consider the totality of information related to the child’s impairments, rather than one piece of information in isolation. Based on our case file review, we estimate that examiners generally cited four to five information sources as support for their decisions in fiscal year 2010 for the three most prevalent mental impairments. While examiners relied on multiple information sources, we found that the extent to which they used these sources varied (see fig. 5). In more than 90 percent of the cases we reviewed, the examiner used some form of medical evidence to support the decision, regardless of whether the child’s impairment met, medically equaled, or functionally equaled the listings. SSA generally requires DDS examiners to assist children and their parents or guardians in obtaining medical records in an effort to develop at least a 1-year-long medical history prior to applying for benefits. We estimate that examiners used observations from a treating source, such as a pediatrician or psychologist, about a child’s functioning and testing by a treating source as support for 65 percent and 61 percent of their determinations, respectively, making them among the most commonly cited information sources. According to many of the DDS officials we interviewed, examiners attempt to obtain medical evidence, such as psychological tests, physician’s notes, and mental health records, for children with alleged mental impairments. If such evidence is not available or is inconclusive, DDS examiners may purchase a consultative exam to provide additional medical evidence and help them establish the severity of a child’s impairment. This examination is intended to provide the additional medical evidence, such as results of a physical examination and laboratory findings, needed for a determination. Based on our case file review, we estimate at least one consultative examination was present in 52 percent of the cases.
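Estimates such as those above, for example treating-source observations cited in 65 percent of determinations and a consultative examination present in 52 percent of cases, reflect the share of sampled case files coded as citing each evidence source. The sketch below illustrates that tallying idea with hypothetical records; the actual estimates were generated from a statistical sample of case files rather than a simple unweighted count like this one.

```python
from collections import Counter

# Hypothetical coded case files: each set lists the evidence sources the
# examiner cited. These records are illustrative, not the review's sample data.
coded_cases = [
    {"treating_source_observations", "teacher_assessment", "consultative_exam"},
    {"treating_source_testing", "school_testing", "teacher_assessment"},
    {"treating_source_observations", "consultative_exam"},
    {"treating_source_observations", "treating_source_testing", "teacher_assessment"},
]

def citation_shares(cases):
    """Percentage of cases citing each evidence source (unweighted)."""
    counts = Counter(source for case in cases for source in case)
    total = len(cases)
    return {source: round(100 * count / total) for source, count in counts.items()}

if __name__ == "__main__":
    for source, share in sorted(citation_shares(coded_cases).items()):
        print(f"{source}: cited in {share}% of sampled cases")
```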
In some cases, DDS offices requested multiple consultative examinations, such as both a psychological evaluation and a speech and language evaluation, to address different aspects of the alleged impairment. Consultative examinations also provide information on the severity of the child’s impairment. For example, one examination provider described a case in which a child with a speech and language delay had receptive and expressive language skills that were nearly 2 years behind his chronological age. We estimated that cases were more likely to be allowed if the consultative exam provider described the child’s impairment as severe. However, many DDS officials told us that such examinations are only a “snap-shot” in time and do not provide a longitudinal view of the child’s functioning. For this reason, some DDS officials said that information from a treating source with a long-standing relationship with the child, such as a physician, is more useful. In addition to medical evidence, SSA uses nonmedical information to evaluate the severity of the child’s impairment and functioning as part of the eligibility determination. These sources include parents, day care providers, teachers, and others knowledgeable about the child’s day-to-day behavior and activities. SSA field office staff may also provide observations about the child, if the child is present for the disability interview. (We estimate about 8 percent of child applicants were present at the field office for the disability interview.) Several DDS officials told us school records and teacher assessments (standardized questionnaires) are especially critical for determining medical eligibility because these assessments provide information on a child’s functioning over time and are generally more objective than parent assessments. According to some DDS examiners, parents primarily observe their child in an unstructured home environment after the child’s medications have worn off, and may not know what behaviors are developmentally normal, whereas teachers are generally in a position to compare the child to other children and provide neutral observations on how the child relates to peers, responds to medication, and performs in school. We estimate teacher assessments and school testing were used to support 63 and 43 percent of determinations, respectively. We also identified several examples in the case files we reviewed where the teacher’s assessment was used to establish the child’s level of functioning and response to medication. For example:
To support an allowance in one autism case, the examiner noted “Per teacher, he is virtually nonverbal. The teacher confirms he is not toilet trained or independent in any area of self care.”
To support a denial decision in one ADHD case, the examiner noted that the teacher’s assessment indicated that the child’s medication “has ‘helped tremendously’ with ability to concentrate.” Additionally, according to the teacher, the child “has many friends and is very social. She has no problems interacting with others. Claimant has no problems with self care. She participates in the softball and dance team.”
To support a denial decision in one autism case, the examiner reported that the “teacher…notes he is more controlled on his meds.”
After the necessary information is collected to make a disability determination, several examiners said that they compare all the information to identify inconsistencies and assign weight to the various sources.
For example, some officials told us examiners assess the credibility of parents’ assessments of children’s functioning by comparing it to physicians’ and teachers’ statements. SSA policy notes that an inconsistency does not necessarily mean that a determination cannot be made because often most of the evidence or the most substantial evidence outweighs the inconsistent evidence and additional information would not change the determination or decision. Among the 298 alleged ADHD, speech and language delay, and autism cases we examined, there were 25 in which material inconsistencies could not be resolved between sources, requiring the examiner to assign more or less weight to certain sources. Examiners assigned more weight to teacher assessments or information from school testing in 11 of the 25 cases. Examiners also generally assigned more weight to testing and observations of functioning by a consultative examiner (10 of the 25 cases) or by a treating source (10 of the 25 cases). In contrast, parents’ assessments were given less weight in 14 of the 25 cases, although decisions were made on a case by case basis. In one ADHD case, the child’s mother alleged a developmental delay, but a psychological consultative exam did not find evidence of such a delay. The child’s teacher also stated that the child performed well academically when not under timed conditions. In this case, the examiner gave less weight to the parent’s assessment and denied the claim. Despite a media report that prescription medication is considered by some parents as key to obtaining SSI benefits, we found that medication and treatment information is frequently a basis for denying benefits. SSA and DDS officials told us that medication is generally given no more weight than any other medical or nonmedical information in determining a child’s medical eligibility. In addition, several DDS officials told us medication is considered in the context of other sources of information as “just one piece of the puzzle.” Our case file review confirmed that information on medication and treatment was never the sole source of support for an allowance or denial. In fact, we found that applicants were more likely to be denied than allowed when medication was reported (see fig. 6). When applying for benefits, parents reported that their children were prescribed some form of medication in 58 percent of the cases we reviewed. Of cases where medication was reported as present, 65 percent were denied and 35 percent were allowed. By comparison, 47 percent of cases were denied and 53 percent were allowed when medication was not reported as present. We found that in cases in which psychotropic drug use was reported, applicants were also more likely to be denied. In these cases, 68 percent were denied and 32 percent were allowed. Nevertheless, our case file review suggests examiners did not decide whether to allow or deny a claim based on the absence or presence of medication. Although medication was reported as present in 58 percent of cases, it was only cited as support for a determination in 38 percent of cases. Beyond examining cases where parents reported that their children were prescribed medication, we also specifically looked at cases where examiners cited information on medication or treatment as part of the rationale for their determinations. 
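Before turning to those cases, note that the figures above (medication reported in 58 percent of cases, with 65 percent of those cases denied, compared with 47 percent denied when no medication was reported) amount to a two-way tabulation of case outcome by reported medication status. The sketch below reproduces that kind of tabulation using hypothetical counts chosen only to mirror the reported percentages; it is not the underlying case file data.

```python
# Hypothetical case counts chosen only to mirror the percentages described
# above; they are not the actual case file review data.
cases = [
    # (medication_reported, outcome)
    *[(True, "denied")] * 65, *[(True, "allowed")] * 35,
    *[(False, "denied")] * 47, *[(False, "allowed")] * 53,
]

def outcome_rates_by_medication(case_list):
    """Percentage of denials and allowances, split by whether medication
    was reported on the application."""
    rates = {}
    for med_status in (True, False):
        group = [outcome for med, outcome in case_list if med == med_status]
        denied = sum(1 for outcome in group if outcome == "denied")
        rates[med_status] = {
            "denied_pct": round(100 * denied / len(group)),
            "allowed_pct": round(100 * (len(group) - denied) / len(group)),
        }
    return rates

if __name__ == "__main__":
    for med_status, rate in outcome_rates_by_medication(cases).items():
        label = "medication reported" if med_status else "no medication reported"
        print(f"{label}: {rate['denied_pct']}% denied, {rate['allowed_pct']}% allowed")
```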
We found examiners generally considered how the child responded to these interventions when making a determination, and in 66 percent of the cases where information on medication or treatment was used to support a determination, the applicant was denied. When examiners cited medication and treatment as a basis for denials, they noted that the child’s functioning improved due to these interventions. For example, in one denied ADHD case, the examiner wrote that the claimant “has responded well to medication and while on medication has no problems functioning, completing work on time and getting along with others.” In one denied speech and language delay case, the examiner noted that the claimant “has been through multiple therapies” and that “[t]hese therapies have been successful.” To the extent that medication improves functioning, DDS officials told us they could potentially find that the child is not disabled under program rules. In contrast, in cases where the child’s functioning was not improved by medication, this information generally helped support an allowance. For example, in one allowed ADHD case, the examiner noted that the child was “[n]ot able to complete work independently despite tx with psych meds and special supervision in a partial inclusion setting.” In another allowed ADHD case, the examiner observed that both the treating source and teacher’s assessment “indicate marked limitations in attention and concentration even with stimulant meds.” Despite the examiners’ focus on how medication affects functioning, certain field office and DDS officials acknowledged that they believe some parents are under the impression that medicating their children will improve their likelihood of being found eligible for benefits. For example, in one denied ADHD case, the child’s mother did not cooperate with the DDS’s efforts to obtain a consultative exam. The mother argued the DDS should already have enough evidence to support an allowance because the child was taking medication. However, other DDS officials told us some parents may avoid medicating their child prior to a consultative examination so that the child misbehaves and appears more disabled—further reinforcing the importance of multiple tests and observations for determining eligibility. Despite the importance of nonmedical information in determining a child’s medical eligibility, examiners sometimes face challenges obtaining complete information. Several DDS offices reported difficulty obtaining school records or teacher assessments, which they partly attributed to school and teacher concerns about the time involved in compiling this information, potential liability issues, or confusion about how such information is used in the disability decision-making process. For example, some DDS examiners told us that in certain instances teachers view their completion of the assessment as affirming that a child is disabled and thus endorsing SSA’s decision to award benefits. They do not understand that examiners base their determinations on the totality of evidence or that the assessment could be used to support a denial.
In one of the cases we reviewed, a teacher returned a blank teacher assessment with a note stating “we are not allowed to fill these out anymore.” Our case file review estimated that teacher assessments were absent for 57 percent of cases for children age 7 or younger—which is unsurprising, given that many of these children may not yet be school age—but such assessments were also absent for 25 percent of cases for children older than age 7. To address this challenge, SSA officials told us that some DDS offices have dedicated staff to conduct outreach to schools in order to emphasize the importance of information from schools as an evidence source. However, they added that these staff have competing priorities, including recruiting consultative exam providers and other medical professionals, which limit the amount of outreach they can perform. In addition to strengthening relationships with school personnel, disability advocates told us that SSA could revise the teacher assessment by using clearer language to make it more inviting to teachers. They also noted that SSA could further emphasize that by completing the assessment, teachers are not endorsing SSA’s ultimate decision as to whether the child is disabled or qualifies for benefits. Because schools and teachers are not required to provide records or teacher assessments, some DDS offices pay a fee for school records, but state laws prevent others from doing so, according to SSA officials. SSA officials did not know the extent to which DDS offices have paid for school records or the amount they had paid. SSA officials informed us they have heard reports of some DDS offices facing challenges in obtaining information from schools, but they do not know the degree to which these challenges exist nationwide, nor has SSA conducted an empirical analysis of challenges related to obtaining information from schools. SSA did issue guidance on steps DDS offices can take to mitigate processing delays associated with obtaining school evidence during extended school breaks, such as summer vacation, but the agency has not issued guidance regarding year-round challenges associated with obtaining information from schools. Without further study to determine how widespread these obstacles are, it will remain unclear whether additional guidance is warranted. In addition to the challenges they sometimes face in obtaining information from schools, DDS examiners said that they do not routinely receive information from SSA field offices on multiple siblings receiving SSI benefits within the same household, even though they are directed to be alert for such cases. SSA’s policy operations manual states that disabilities may occur in more than one member of a family or household, but notes prior case experience has shown this type of situation is an indicator of possible fraud or abuse, particularly where certain mental impairments are involved. For example, one of SSA’s Cooperative Disability Investigations Units investigated a case in which parents applied for SSI benefits on behalf of their four children, alleging that they all suffered from ADHD and conduct issues. However, investigators found that the school guidance counselor had never observed them exhibiting symptoms of ADHD despite seeing the four children daily, and that a doctor had rescinded an order authorizing the school to administer ADHD medication to the children. In this instance, SSA subsequently denied the siblings’ applications for SSI benefits.
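The referral practice at issue here, alerting examiners when more than one child in the same household applies for or receives SSI benefits, reduces at its core to grouping claims by household. The sketch below illustrates such a flag with hypothetical records; the household identifiers and record layout are assumptions, not SSA systems data.

```python
from collections import defaultdict

# Hypothetical application records; identifiers are illustrative, not SSA data.
applications = [
    {"claim_id": "A1", "household_id": "H-100"},
    {"claim_id": "A2", "household_id": "H-100"},
    {"claim_id": "A3", "household_id": "H-200"},
    {"claim_id": "A4", "household_id": "H-100"},
]

def households_with_multiple_child_claims(records):
    """Households with more than one child applying for or receiving
    benefits -- the situation examiners are directed to be alert for."""
    by_household = defaultdict(list)
    for record in records:
        by_household[record["household_id"]].append(record["claim_id"])
    return {hh: claims for hh, claims in by_household.items() if len(claims) > 1}

if __name__ == "__main__":
    for household, claims in households_with_multiple_child_claims(applications).items():
        print(f"notify DDS: household {household} has claims {claims}")
```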
SSA's policy operations manual directs examiners to refer such cases to SSA's Cooperative Disability Investigations Unit or Office of the Inspector General for further development, if questionable issues cannot be resolved. Based on our interviews, it appears that SSA field offices do not consistently notify DDS examiners when an applicant's siblings are already receiving SSI benefits, nor are they always made aware of concurrent sibling applications. SSA data indicate that as of January 2012, nearly 64,000 children, or 5 percent of all child recipients, resided in a household where more than 1 child received disability benefits. Without information on such children, DDS examiners may be limited in their ability to identify potential fraud or abuse in the program and elevate these cases to the attention of SSA's fraud investigations unit. SSA has conducted significantly fewer CDRs for children receiving SSI benefits since 2000, even though SSA is generally required to perform CDRs at least every 3 years on child recipients under age 18 whose impairments are likely to improve, as well as certain other individuals (see fig. 7). Childhood CDRs overall fell from more than 150,000 in fiscal year 2000 to about 45,000 reviews in fiscal year 2011 (a 70 percent decrease). More specifically, CDRs for children under age 18 with mental impairments declined from more than 84,000 to about 16,000 (an 80 percent decrease). Similarly, SSA has conducted significantly fewer CDRs for adult benefit recipients of either SSI or Social Security Disability Insurance (SSDI). From fiscal years 2000 to 2011, the number of adult CDRs fell from 584,000 to 179,000. However, the proportion of childhood CDRs conducted has remained much lower than the proportion of adult CDRs conducted. SSA officials attribute the decrease in CDRs overall, including childhood CDRs for those with mental impairments, primarily to resource limitations and a greater emphasis on processing initial claims and reducing the backlog of requests for appeals hearings in recent years. While SSA did increase the number of CDRs it performed after receiving additional funding specifically targeted for CDRs from fiscal years 1996 to 2002, CDRs decreased once the funding expired. In addition, SSA's understanding of the role that multiple impairments play in eligibility decisions is limited because DDS offices have not consistently collected secondary impairment data. Without steps to ensure that this information is more reliably recorded, SSA management will not have a complete picture of the characteristics of children with mental impairments receiving benefits or changes in this population over time. Because examiners sometimes lack key information for cases they review, including school records and information on multiple children receiving benefits in the same household, they may face challenges in making eligibility decisions and identifying potential fraud or abuse. Examiners have also increasingly based allowance decisions on a finding of functional equivalence for children with the most prevalent mental impairments, requiring more complex decision making. Yet because some examiners face obstacles in obtaining information from schools—which they consider critical to understanding how a child functions—SSA cannot ensure that examiners have the necessary information to arrive at the most accurate determinations.
Additionally, as key program gatekeepers, DDS examiners are in a unique position to identify program integrity threats related to multiple children receiving SSI benefits within the same household. However, without better information on these types of arrangements, they are unable to fulfill this role in preventing potential fraud and abuse. The fact that more than 430,000 childhood CDRs are overdue raises concerns about the agency's ability to manage limited funds in a manner that adequately balances its public service priorities with its stewardship responsibility. When reviews are not conducted as scheduled, some child recipients may receive benefits for which they are no longer eligible, potentially costing taxpayers billions of dollars in overpayments. Furthermore, CDRs provide an important check on program growth by removing ineligible recipients from the rolls, even while new applicants are added. If these reviews are not conducted in sufficient numbers, the agency will continue to struggle to contain growth in benefit payments, placing added burden on already strained federal budgets. Congress appropriated funding for SSA to conduct more CDRs in recent years, and SSA is evaluating how to manage its overall CDR workload. However, because SSA considers SSI childhood CDRs a lower priority than other CDRs, it is unclear whether the agency will use this funding to review children most likely to medically improve—reviews that could yield a high return on investment. If SSA continues to rely heavily on the use of waivers to conduct fewer CDRs than would otherwise be required by law, SSA will potentially forgo future program savings. Furthermore, while we consider SSA's decision to begin issuing formal waivers in order to clearly comply with the CDR legal requirement to be a good start, that action alone is not sufficient to fully alleviate our concerns with the waiver process. Until the agency formally implements this waiver process, the extent to which SSA is conducting CDRs consistently with its legal requirements will continue to be unclear. To strengthen eligibility decisions and improve monitoring of children with mental impairments within the SSI program, we recommend that the Commissioner of Social Security: 1. Direct the Deputy Commissioners of Retirement and Disability Policy and Operations to take steps to ensure that DDS examiners accurately record information on secondary impairments in order to improve SSA's understanding of how multiple impairments may influence decisions. 2. Direct the Deputy Commissioner of Operations to identify the extent to which DDS examiners nationwide experience obstacles in obtaining teacher assessments and school records. To the extent these are identified, SSA should clarify the nature of these obstacles and formulate steps to address them. Such steps could include increased DDS outreach to primary and secondary schools, increased SSA coordination with the Department of Education, or additional guidance to DDS offices. 3. Direct the Deputy Commissioner of Operations to ensure that field offices notify their respective DDS offices of those claims in which multiple children within the same household are applying for or receiving SSI benefits so that examiners will be better able to identify potential fraud or abuse in the program and elevate these cases to the attention of SSA's fraud investigations unit. 4.
Direct the Deputy Commissioner of Quality Performance to eliminate the existing CDR backlog of cases for children with impairments who are likely to improve and, on an ongoing basis, conduct CDRs at least every 3 years for all children with impairments who are likely to improve, as resources are made available for these purposes. 5. Direct the Deputy Commissioner of Quality Performance and Deputy Commissioner of Operations to take actions to ensure that SSA's CDR waiver process is open, transparent, and public. This may include promulgating formal guidance for issuing waivers and establishing a process for making information about issued waivers available to the public. We provided a draft of this report to SSA for review and comment. In its written comments, reproduced in appendix V, SSA agreed with 4 of our 5 recommendations and stated that our draft report overall reflected a good understanding of the disability determination process and the SSI childhood disability program. SSA disagreed with our recommendation to eliminate the existing CDR backlog of cases for children with impairments who are likely to improve and conduct CDRs for these children at least every 3 years, as resources are made available for these purposes. SSA agreed conceptually that it should complete more CDRs for SSI children but emphasized that it is constrained by limited funding and staff resources and as a result had to waive many required childhood CDRs in recent years. SSA also argued that performing additional SSI child CDRs would have negative impacts on the SSDI program. We acknowledge the challenge SSA faces as it strives to balance competing workloads. In recognition of the agency's resource constraints, we noted in our recommendation that additional CDRs for children who are likely to medically improve should be conducted "as resources are made available for these purposes." We also believe that the increased appropriations for CDRs in recent years provide SSA with added flexibility for balancing these competing workloads. Moreover, it is important to recognize that we are not recommending that SSA eliminate its ongoing SSDI CDR efforts. Rather, we believe that more attention is needed for SSI children's cases to address the existing backlog, especially given the relatively few CDRs conducted in this area in recent years, and the high average cessation rate for these cases. SSA also provided technical comments that we have incorporated, as appropriate. We are sending copies of this report to the Commissioner of Social Security, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our review focused on (1) the trends in the rate of children receiving Supplemental Security Income (SSI) benefits due to mental impairments; (2) the role that medical and nonmedical information, such as medication and school records, play in the initial determination of a child's medical eligibility; and (3) the steps the Social Security Administration (SSA) has taken to monitor the continued medical eligibility of these children.
To examine these issues, we analyzed SSA data on (1) the overall number of initial disability determinations and allowances, (2) annual benefit awards and recipients, (3) the number and types of mental impairments, (4) the number of children receiving SSI benefits residing in households where other children also receive SSI benefits, and (5) the number of continuing disability reviews of children conducted by SSA. In reviewing these data, we acknowledge that the child population in the United States has also grown since 2000 and that the demographics of this population may have changed since that time. We assessed the reliability of the data presented in this report by performing data testing, reviewing internal controls and related documentation, and interviewing agency officials, and found potential limitations in the completeness of primary and secondary impairment coding within SSA's 831 Disability file—the file that contains data on disability determinations. However, because the 831 Disability file reflects the decisions SSA makes regarding medical determinations, we determined that these data were sufficiently reliable to describe certain trends among children in the SSI program. We also conducted in-depth interviews with SSA management and line staff at SSA headquarters and within six SSA regions—Atlanta, Georgia; Dallas, Texas; Chicago, Illinois; Philadelphia, Pennsylvania; Boston, Massachusetts; and San Francisco, California. Our work included site visits to 9 field offices within these regions, as well as 11 state disability determination services (DDS) offices (state agencies under the direction of SSA that perform medical eligibility determinations and continuing disability reviews of SSI applicants). We performed separate interviews with SSA field office district managers, supervisors, and claims representatives, and with DDS managers, supervisors, examiners, and medical or psychological consultants, when they were available. We selected these sites on the basis of their geographic location, high volume of SSI applications for children with mental impairments, and variety of benefit allowance rates for children with mental impairments. In addition, we interviewed numerous external experts from the medical and disability advocacy communities and reviewed relevant studies to identify factors that may be currently affecting the growth and composition of the population of childhood disability applicants and recipients, especially for those children with mental impairments. However, the relative effects of any potential factors we identified on the SSI program's growth are not fully known and were beyond the scope of this report. We also reviewed relevant federal laws and regulations. We conducted a case file review to verify information obtained through our interviews with DDS office staff and to better understand the role of secondary impairments in determinations as well as what information examiners use when determining a child's medical eligibility. We reviewed a probability sample of 298 case files selected from the 184,150 initial determinations decided in fiscal year 2010 for children with alleged attention deficit hyperactivity disorder (ADHD), speech and language delay, and autistic disorder and other pervasive development disorders (autism).
(Through the initial determination process, the DDS assesses whether the child's impairment can be established through medical evidence—not only by the individual's statement of symptoms—as well as the severity of the impairment and whether the impairment results in marked and severe functional limitations.) We reviewed electronic case files for children with mental impairments and SSA forms to develop a standardized data collection instrument. We completed a data collection instrument for each initial determination in our sample, and each record was independently reviewed by another staff person for clarity and accuracy. We based our observations of the sources examiners used to support their determinations on examiners' remarks in the Childhood Disability Evaluation Form (form SSA-538-F6) and the Disability Determination Explanation. Because our purpose was not to assess the appropriateness of examiners' decisions but to understand what information sources examiners used in explaining the rationale for their decision-making, we did not attempt to adjudicate these cases ourselves. Our observations were limited by the extent to which examiners documented their analysis and rationale on these forms. We found the examiners' remarks sufficient to characterize which sources were used to support decisions, but examiners provided varying levels of detail in their remarks and we had no basis for judging whether additional sources of information were used to support a determination but were not reported. As with all probability samples, estimates from our case file review are subject to sampling errors. Sampling errors occur because we use a sample to draw conclusions about a larger population. If a different sample had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. The 95 percent confidence interval is expected to include the population value for 95 percent of samples of this type. When we make estimates for this population, we are 95 percent confident that the results we obtained are within plus or minus 8 percentage points of what we would have obtained if we had included the entire population within our review, unless otherwise noted. The text of our report provides more specific confidence intervals for various estimates. We selected the sample from within six strata, consisting of allowance and denial decisions and the three most prevalent primary impairments among medical allowances for children with mental impairments—ADHD, speech and language delay, and autism. We sampled approximately the same number of cases from each stratum in order to ensure that the sample sizes were sufficient to produce precise estimates within each combination of impairment and decision. When generalizing to the overall population and to various subpopulations, we weighted each case according to its probability of selection, which varied across strata due to differences in the number of cases in the stratum populations, as illustrated in the sketch below. We conducted this performance audit from February 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
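To make the stratified weighting and confidence-interval computation described above concrete, the following minimal sketch (in Python) shows how a weighted proportion and an approximate 95 percent confidence interval could be derived from stratified sample counts. The stratum labels, population counts, and sample results below are hypothetical placeholders rather than figures from our review, and the calculation omits design refinements, such as finite population corrections, that the actual estimation would reflect.

# Minimal illustrative sketch; not GAO's actual estimation code.
# Each stratum maps to (population size N_h, sample size n_h, sampled cases with the attribute).
strata = {
    "adhd_allowed": (30000, 50, 37),  # hypothetical counts
    "adhd_denied": (60000, 50, 20),   # hypothetical counts
    # ...the remaining four strata would be listed the same way
}

total_population = sum(N for N, _, _ in strata.values())

# Each sampled case is weighted by N_h / n_h, the inverse of its probability of selection.
weighted_cases = sum((N / n) * x for N, n, x in strata.values())
estimated_proportion = weighted_cases / total_population

# Approximate variance of a stratified proportion (finite population correction omitted).
variance = sum(
    (N / total_population) ** 2 * (x / n) * (1 - x / n) / n
    for N, n, x in strata.values()
)
margin_of_error = 1.96 * variance ** 0.5  # half-width of an approximate 95 percent interval

print(f"estimate: {estimated_proportion:.2f} +/- {margin_of_error:.2f}")

Because approximately equal numbers of cases were drawn from strata of very different population sizes, the weights necessarily differ across strata; this is why unweighted tallies of the 298 sampled cases cannot be read directly as population percentages.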
We believe the evidence obtained provides a reasonable basis for findings and conclusions based on our audit objectives. The structure of the mental disorders listings for children under age 18 parallels the structure for the mental disorders listings for adults but is modified to reflect the presentation of mental disorders in children. Under federal regulations, when a child is not performing substantial gainful activity and the impairment is severe, the Social Security Administration (SSA) will examine whether the child’s impairment meets, medically equals, or functionally equals any of the impairments contained in the listings. The listings further describe the level of severity necessary to meet these requirements. The listings for mental disorders in children are grouped into 11 diagnostic categories: Organic mental disorders. Abnormalities in perception, cognition, affect, or behavior associated with dysfunction of the brain. The history and physical examination or laboratory tests, including psychological or neuropsychological tests, demonstrate or support the presence of an organic factor judged to be etiologically related to the abnormal mental state and associated deficit or loss of specific cognitive abilities, or affective changes, or loss of previously acquired functional abilities. Schizophrenic, delusional (paranoid), schizoaffective, and other psychotic disorders. Onset of psychotic features, characterized by a marked disturbance of thinking, feeling, and behavior, with deterioration from a previous level of functioning or failure to achieve the expected level of social functioning. Mood disorders. Characterized by a disturbance of mood (referring to a prolonged emotion that colors the whole psychic life, generally involving either depression or elation), accompanied by a full or partial manic or depressive syndrome. Mental retardation. Characterized by significantly sub-average general intellectual functioning with deficits in adaptive functioning. Anxiety disorders. In these disorders, anxiety is either the predominant disturbance or is experienced if the individual attempts to master symptoms; for example, confronting the dreaded object or situation in a phobic disorder, attempting to go to school in a separation anxiety disorder, resisting the obsessions or compulsions in an obsessive compulsive disorder, or confronting strangers or peers in avoidant disorders. Somatoform, eating, and tic disorders. Manifested by physical symptoms for which there are no demonstrable organic findings or known physiologic mechanisms; or eating or tic disorders with physical manifestations. Personality disorders. Manifested by pervasive, inflexible, and maladaptive personality traits, which are typical of the child’s long-term functioning and not limited to discrete episodes of illness. Psychoactive substance dependence disorders. Manifested by a cluster of cognitive, behavioral, and physiologic symptoms that indicate impaired control of psychoactive substance use with continued use of the substance despite adverse consequences. Autistic disorder and other pervasive developmental disorders. Characterized by qualitative deficits in the development of reciprocal social interaction, in the development of verbal and nonverbal communication skills, and in imaginative activity. Often, there is a markedly restricted repertoire of activities and interests, which frequently are stereotyped and repetitive. Attention deficit hyperactivity disorder. 
Manifested by developmentally inappropriate degrees of inattention, impulsiveness, and hyperactivity. Developmental and emotional disorders of newborn and younger infants (birth to attainment of age 1): Developmental or emotional disorders of infancy are evidenced by a deficit or lag in the areas of motor, cognitive/communicative, or social functioning. These disorders may be related either to organic or to functional factors or to a combination of these factors. According to SSA, these listings are examples of common mental disorders that are severe enough to result in a child being disabled. When a child has a medically determinable impairment that is not listed, an impairment that does not meet the requirements of a listing, or a combination of impairments in which none meets the requirements of a listing, SSA will make a determination whether the child's impairment or impairments medically or functionally equal the listings. This can be especially important in older infants and toddlers (age 1 to attainment of age 3), who may be too young for identification of a specific diagnosis, yet demonstrate serious functional limitations. Therefore, the determination of equivalency is necessary to the evaluation of any child's case when the child does not have an impairment that meets a listing. Social Security Administration (SSA) data show that the three most prevalent primary mental impairments among those children allowed for Supplemental Security Income (SSI) benefits in fiscal year 2011 were (1) attention deficit disorder or attention deficit hyperactivity disorder (ADHD), (2) speech and language delay, and (3) autistic disorder and other pervasive development disorders (autism). These data are based on the primary impairment as designated by the disability determination services (DDS) examiner. SSA's policy operations manual directs DDS examiners to code the primary impairment as the most severe condition that rendered the child disabled. However, SSA officials have acknowledged that primary impairment codes are sometimes missing or inaccurately coded. The following information provides a brief summary of each of these three primary impairments as they compare to the incidence of all mental impairments, as well as in terms of the proportion of applications, allowances, and recipients. Data represented as "applications" reflect SSI benefit claims where a DDS examiner made an initial disability determination decision. Some applications may have been submitted prior to the year when a determination was made. In addition, some applications could have more than one determination if the claim is selected for a quality review or if the disability claim is updated during the same year. ADHD. From fiscal years 2000 to 2011, applications for this condition as a primary impairment more than doubled, from about 55,204 to 124,217, while allowances also more than doubled, from 13,857 to 29,872 (see fig. 10). By December 2011, almost 221,000 such children were receiving SSI benefits, and they comprised 26 percent of child recipients with mental impairments on the rolls. While children with ADHD represent the single largest primary diagnostic group, SSA has denied the majority of ADHD child applicants since fiscal year 2000, because they were not medically eligible. Some DDS examiners we interviewed said that they rarely find a child medically eligible for benefits on the basis of an ADHD impairment alone, but more commonly do so when ADHD occurs in combination with another impairment, such as oppositional defiant disorder.
In our case file review, we found 37 of 50 ADHD allowances had a secondary impairment present, and oppositional defiant disorder was the secondary impairment cited most frequently in the individual cases we reviewed. SSA officials suggested that the increase in both applications and allowances for children with ADHD might be attributable to an increase in diagnoses over the last decade, and cited a National Institutes of Health survey finding that ADHD diagnoses had increased by 3 percent, on average, from 1996 to 2006 and by 5.5 percent, on average, from 2003 to 2007. SSA officials also noted a 2008 medical study reporting that ADHD is one of the most commonly diagnosed childhood neurobehavioral disorders. In addition, the National Institute of Mental Health has stated that attention deficit disorder and ADHD are among the most common childhood disorders in the United States. Speech and language delays. Since fiscal year 2000, both applications and allowances for children with speech and language delays have increased overall, but the proportion of applicants found medically eligible has ranged from 54 to 61 percent during this period. From fiscal year 2000 to 2011, applications for this impairment more than doubled, from 21,615 to 51,740, while the number of children allowed increased from 11,565 to 29,309 (see fig. 11). Some DDS officials we interviewed attributed the increased number of children applying for and receiving SSI benefits due to speech and language delay to increased school testing and screening program services offered under the Individuals with Disabilities Education Act (IDEA). The U.S. Department of Education noted in its latest annual report that teachers indicated that 89 percent of the children aged 3 through 5 years served under IDEA received speech or language therapy in the 2003 to 2004 school year, and 86 percent received it in the 2004 to 2005 school year, making it the most common service in both years. In addition, the department noted that speech and language impairments were one of the most common disability categories among students aged 6 through 21 years served under IDEA, Part B, in the fall of 2006. Of these more than 6 million students aged 6 through 21 years, about 1.2 million, or 19.1 percent, received services due to a speech and language impairment. Some speech and language experts from across the United States told us that they were surprised by the increased number of children receiving SSI benefits, but acknowledged that the definitions of disability for IDEA and the SSI program are different. They added that in some instances speech and language disorder may be a provisional diagnosis for very young children when it may be difficult to pinpoint a specific impairment or impairments, which they believed could be contributing to program growth. SSA officials told us that further study was needed to better understand increases in this impairment. As of February 2012, SSA was considering whether to propose new rules for evaluating speech and language disorders. Autism. From fiscal year 2000 to 2011, autism applications increased by almost 400 percent, from 5,430 to 26,739, and allowances increased similarly, from 5,050 to 22,931 (see fig. 12). As of December 2011, about 107,000 children (12 percent of child recipients with mental impairments) were receiving SSI benefits due to autistic disorders. From fiscal year 2000 to 2011, DDS examiners found from 86 to 94 percent of those children applying for SSI on the basis of autism medically eligible for benefits.
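As a rough arithmetic check of the autism growth figure cited above (an illustration only, using the application counts already reported), the percent increase works out to

\[
\frac{26{,}739 - 5{,}430}{5{,}430} \approx 3.92,
\]

that is, an increase of roughly 392 percent, consistent with the characterization of "almost 400 percent."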
SSA officials primarily attribute the increase in the number of autism applications and allowances over the years to a greater incidence of autism among children and explained that some children who may have previously been diagnosed as intellectually disabled are instead being diagnosed as autistic. In fact, the number of children applying for and receiving SSI benefits due to "intellectual disability" or "mental retardation" has significantly declined since fiscal year 2000. Children receiving benefits due to an intellectual disability comprised 51 percent of all mental impairment claims in fiscal year 2000 and 15 percent in fiscal year 2011. According to one study SSA cited, the prevalence of autism in children has increased by 2.5 per 1,000, from 0.6 per 1,000 live births in 1994 to 3.1 per 1,000 live births in 2003, while during the same period, the prevalence of mental retardation and learning disabilities declined by 2.8 and 8.3 per 1,000, respectively. In addition, the Centers for Disease Control and Prevention estimated in March 2012 that on average 1 in 88 children in the United States has an autism spectrum disorder, but the extent to which this reflects increases in awareness and access to services or actual increases in the prevalence of autism symptoms is not known. On the basis of our case file review, we also identified some characteristics of children for whom SSA made an initial determination in fiscal year 2010 for ADHD, speech and language delay, and autism. For example, as shown in figure 13, more than 60 percent of these children had ADHD. The age at which SSA determined whether a child was medically eligible for benefits varied by impairment (see fig. 14). Children with ADHD who applied for benefits were older, on average, than applicants with autism or speech and language delay. Based on our case file review, we estimate that 72 percent of these children were male, although gender composition also varied by impairment (see fig. 15). As discussed in appendix I of this report, we reviewed case files from a stratified probability sample of determinations made in fiscal year 2010. In our review of a generalizable probability sample of 298 initial determinations performed in fiscal year 2010 for children with alleged attention deficit hyperactivity disorder (ADHD), speech and language delay, and autistic disorder and other pervasive development disorders (autism), we found parents reported that their children were prescribed some form of medication in 58 percent of these cases. More specifically, parents reported that their children were prescribed psychotropic drugs in 47 percent of these cases (see table 1). Children with ADHD accounted for the vast majority of those reportedly using medication or psychotropic drugs—79 percent and 90 percent, respectively (see table 2). The most commonly reported psychotropic drugs were Concerta, Ritalin, and Adderall, which are prescribed to treat ADHD, as well as Risperdal, which is an antipsychotic. Of the children reportedly prescribed psychotropic drugs, the majority were reportedly taking a single psychotropic drug. In addition to the contact named above, Jeremy Cox (Assistant Director), James Bennett, Alexander Galuten, Jason Holsclaw, Kristen Jones, Sheila McCoy, Luann Moy, Ernest Powell, Jeff Tessin, and Paul Wright made key contributions to this report and the related e-supplement.
SSA’s SSI program provides cash benefits to eligible low-income individuals with disabilities, including children. In 2011, SSA paid more than $9 billion to about 1.3 million disabled children, the majority of whom received benefits due to a mental impairment. GAO was asked to assess (1) trends in the rate of children receiving SSI benefits due to mental impairments over the past decade; (2) the role that medical and nonmedical information, such as medication and school records, play in the initial determination of a child’s eligibility; and (3) steps SSA has taken to monitor the continued medical eligibility of these children. To do this, GAO analyzed program data; interviewed SSA officials; conducted site visits to 9 field offices and 11 state DDS offices across the nation; reviewed a generalizable sample of 298 claims for select impairments from fiscal year 2010; reviewed relevant federal laws and regulations; and interviewed external experts, among others. The number of Supplemental Security Income (SSI) child applicants and recipients with mental impairments has increased substantially for more than a decade, even though the Social Security Administration (SSA) denied, on average, 54 percent of such claims from fiscal years 2000 to 2011. Factors such as the rising number of children in poverty and increasing diagnosis of certain mental impairments have likely contributed to this growth. In fiscal year 2011, the most prevalent primary mental impairments among children found medically eligible were (1) attention deficit hyperactivity disorder, (2) speech and language delay, and (3) autism, with autism claims growing most rapidly since fiscal year 2000. State disability determination services (DDS) examiners also consider the impact of additional, or “secondary,” impairments when making a decision, and when present, these impairments were used to support 55 percent of those cases GAO reviewed that were allowed in fiscal year 2010. However, SSA has not consistently collected those impairment data, limiting its understanding of how all impairments may affect decisions. DDS examiners generally rely on a combination of key medical and nonmedical information—such as medical records and teacher assessments—to determine a child’s medical eligibility for SSI. In its case file review, GAO found that examiners usually cited four to five information sources as the basis for their decision, and that being on medication was never the sole source of support for decisions. Moreover, examiners cited medication and treatment information, such as reports of improved functioning, as a basis for denying benefits in more than half of cases that GAO reviewed, despite a perception among some parents that medicating their child would result in an award of benefits. Examiners also reported they sometimes lacked complete information to inform their decision making. For example, several DDS offices reported obstacles to obtaining information from schools, which they believe to be critical in understanding how a child functions. Examiners also do not routinely receive information from SSA field offices on multiple children who receive benefits in the same household, which SSA’s fraud investigations unit has noted as an indicator of possible fraud or abuse. Without such information, examiners may be limited in their ability to identify threats to program integrity. 
SSA has conducted fewer continuing disability reviews (CDR) for children since 2000, even though it is generally required by law to review the medical eligibility of certain children at least every 3 years. From fiscal year 2000 to 2011, childhood CDRs overall fell from more than 150,000 to about 45,000 (a 70 percent decrease), while CDRs for children with mental impairments dropped from more than 84,000 to about 16,000 (an 80 percent decrease). The most recent data show that more than 400,000 CDRs were overdue for children with mental impairments, with some overdue by 13 years or more. Of the more than 24,000 CDRs found to be 6 or more years overdue, 25 percent were for children expected to medically improve within 6 to 18 months of their initial allowance. SSA acknowledged the importance of conducting such reviews, but said that due to resource constraints and other workloads, such as initial claims, most childhood CDRs are a lower priority. SSA's process for issuing waivers from the CDR legal requirement lacks transparency, and without these reviews, SSA could continue to forgo significant program savings. GAO recommends that SSA take steps to ensure needed information, such as secondary impairment data and school records, is consistently collected; make its CDR waiver process more transparent; and conduct additional childhood CDRs. SSA agreed with four recommendations and disagreed with one, the recommendation that the agency conduct additional childhood CDRs, citing resource constraints. The GAO recommendation acknowledges resource constraints, as discussed more fully within the report.
Intellectual property is a category of intangible rights that protect commercially valuable products of the human intellect, such as inventions; literary and artistic works; and symbols, names, images, and designs used in commerce. U.S. protection of intellectual property has a long history: Article 1 of the U.S. Constitution grants the Congress the power "to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." Copyrights, patents, and trademarks are the most common forms of protective rights for intellectual property. Protection is granted by guaranteeing proprietors limited exclusive rights to whatever economic reward the market may provide for their creations and products. Ensuring the protection of IPR encourages the introduction of innovative products and creative works to the public. Intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in the creation of intellectual property. According to USTR, "Americans are the world's leading innovators, and our ideas and intellectual property are a key ingredient to our competitiveness and prosperity." However, industries estimate that annual losses stemming from violations of intellectual property rights overseas are substantial. Further, counterfeiting of products such as pharmaceuticals and food items fuels public health and safety concerns. USTR's Special 301 annual reports on the adequacy and effectiveness of intellectual property protection around the world demonstrate that, from a U.S. perspective, intellectual property protection is weak in developed as well as developing countries and that the willingness of countries to address intellectual property issues varies greatly. U.S. laws have been passed that address the need for strong intellectual property protection overseas and provide remedies to be applied against countries that do not provide adequate or effective protection. For example, the Omnibus Trade and Competitiveness Act of 1988 allows the U.S. government to impose trade sanctions against such countries. Eight federal agencies, the FBI, and the USPTO undertake the primary U.S. government activities to protect and enforce U.S. intellectual property rights overseas. These agencies are the Departments of Commerce, State, Justice, and Homeland Security; USTR; the Copyright Office; USAID; and USITC. The U.S. government also participates in international organizations that address intellectual property issues, such as the World Trade Organization (WTO), the World Intellectual Property Organization (WIPO), and the World Customs Organization (WCO). The efforts of multiple U.S. agencies to protect U.S. intellectual property overseas fall into three general categories—policy initiatives, training and technical assistance, and U.S. law enforcement actions. USTR leads most U.S. policy activities, in particular the Special 301 review of intellectual property protection abroad. Most agencies involved in efforts to protect U.S. IPR overseas conduct training and technical assistance activities. However, the number of agencies involved in U.S. law enforcement actions is more limited, and the nature of these activities differs from other U.S. government actions related to intellectual property protection. U.S.
policy initiatives to increase intellectual property protection around the world are primarily led by USTR, in coordination with the Departments of State and Commerce, USPTO, and the Copyright Office, among other agencies. These efforts are wide ranging and include the annual Special 301 review of intellectual property protection abroad, use of trade preference programs for developing countries, negotiation of agreements that address intellectual property, and several other activities. A centerpiece of policy activities is the annual Special 301 process. “Special 301” refers to certain provisions of the Trade Act of 1974, as amended, that require USTR to annually identify foreign countries that deny adequate and effective protection of intellectual property rights or fair and equitable market access for U.S. persons who rely on intellectual property protection. USTR identifies these countries with substantial assistance from industry and U.S. agencies and publishes the results of its reviews in an annual report. Once a pool of such countries has been determined, the USTR, in coordination with numerous agencies, is required to decide which, if any, of these countries should be designated as a Priority Foreign Country (PFC). If a trading partner is identified as a PFC, USTR must decide within 30 days whether to initiate an investigation of those acts, policies, and practices that were the basis for identifying the country as a PFC. Such an investigation can lead to actions such as negotiating separate intellectual property understandings or agreements between the United States and the PFC or implementing trade sanctions by the U.S. government against the PFC if no satisfactory outcome is reached. In its annual Special 301 report, USTR also lists countries with notable but less serious intellectual property protection problems as, in order of decreasing severity, “Priority Watch List” countries and “Watch List” countries. Unlike PFCs, countries cited on these lists are not subject to automatic consideration for investigation. Between 1994 and 2004, the U.S. government designated three countries as PFCs—China, Paraguay, and Ukraine—as a result of intellectual property reviews (see table 1). China was initially designated as a PFC in 1994 owing to acute copyright piracy, trademark infringements, and poor enforcement. Paraguay was designated as a PFC in 1998 owing to high levels of piracy and counterfeiting resulting from an absence of effective enforcement, its status as a major point of transshipment for pirated or counterfeit products to other South American countries, and its inadequate IPR laws. The U.S. government negotiated separate bilateral intellectual property agreements with both countries to address these problems. These agreements are subject to annual monitoring, with progress cited in each year’s Special 301 report. Ukraine, where optical media piracy was prevalent, was designated a PFC in 2001. No mutual solution was found, and in January 2002, the U.S. government imposed trade sanctions in the form of prohibitive tariffs (100 percent) aimed at stopping $75 million worth of certain imports from Ukraine over time. These sanctions negatively affected Ukraine’s exports to the United States. U.S. data show that overall imports from Ukraine experienced a dramatic 70 percent decline from 2000 to 2003. U.S. trade data also show that U.S. imports of the items facing punitive tariffs (with one exception) declined by $57 million from 2000 to 2003. 
Since 2001, Ukraine has remained the sole PFC and the sanctions have remained in place. In early 2002, according to Department of State officials, Ukraine passed an optical disc licensing law—legislation whose absence had been a key factor in the original U.S. designation of Ukraine as a PFC. Further, the Ukrainian government reportedly closed plants that were pirating optical media products. However, the U.S. government remains concerned that the optical disc law is inadequate. Although it designated only three countries as PFCs between 1994 and 2004, the U.S. government has cited numerous countries—approximately 15 per year recently—on its Special 301 Priority Watch List. Of particular note, the European Union has been placed on this list every year since 1994, while India and Argentina have been on the list for 10 and 9 years, respectively, during that period. By virtue of membership in the WTO, the United States and other countries commit themselves not to take WTO-inconsistent unilateral action against possible trade violations involving IPR protections covered by the WTO but to instead seek recourse under the WTO's dispute settlement system and its rules and procedures. This may affect any U.S. government decision regarding whether to retaliate against WTO members unilaterally with sanctions under the Special 301 process when those countries' IPR problems are viewed as serious. U.S. IPR policy efforts also include use of the Generalized System of Preferences (GSP) and other trade preference programs administered by USTR. The GSP is a unilateral program intended to promote development through trade, rather than through traditional aid programs, by eliminating tariffs on certain imports from eligible developing countries. The GSP was originally authorized by the Trade Act of 1974; when it was reauthorized by the Trade and Tariff Act of 1984, new "country practice" eligibility criteria were added, including a requirement that beneficiary countries provide adequate and effective IPR protection. Petitions to withdraw GSP benefits from countries that do not meet this criterion can be filed as part of an annual GSP review and are typically filed by industry interests. Petitions are considered through an interagency process led by USTR, with input from the Departments of State and Commerce, among others. In administering the GSP program, USTR has led reviews of the IPR regimes of numerous countries and has removed benefits from some beneficiary countries because of IPR problems. Ukraine lost its GSP benefits in August 2001 (approximately 6 months before the imposition of sanctions that stemmed from Ukraine's designation as a PFC under the Special 301 process) because of inadequate protection for optical media, and these benefits have not been reinstated. Adequate and effective IPR protection is required by other trade preference programs, including the Andean Trade Preference Act (ATPA), which provides benefits for Bolivia, Colombia, Ecuador, and Peru; the African Growth and Opportunity Act (AGOA); and the Caribbean Basin Initiative (CBI). USTR reviews IPR protection provided under these trade preference programs, and, according to USTR officials, GSP, which includes numerous developing countries, has been used more actively (in terms of reviews and actual removal of benefits) than ATPA, CBI, and AGOA. In fact, according to USTR officials, benefits have never been removed under ATPA or AGOA owing to IPR concerns.
However, USTR officials emphasized that these programs and their provisions for intellectual property protection have been used effectively nevertheless. For example, one USTR official noted that in response to U.S. government concerns regarding whether Colombia was meeting ATPA eligibility criteria, the Colombian government implemented measures to, among other things, ensure the legitimate use and licensing of software by government agencies. USTR also pointed out that in Mauritius, an unresolved trademark counterfeiting concern for U.S. industry was specifically raised with the government of Mauritius as a follow-up to the annual review of the country's eligibility for preferences under AGOA. Following bilateral discussions, this counterfeiting concern was addressed and resolved. Since 1990, the U.S. government has negotiated 25 IPR-specific agreements or understandings with foreign governments. USTR noted that USPTO and other agencies are responsible for leading negotiating efforts for such agreements (and the Copyright Office participates in negotiations as an adviser). According to USTR officials, IPR-specific agreements are sometimes negotiated in response to particular problems in certain countries and are monitored when a relevant issue arises. USTR has also negotiated an additional 23 bilateral trade agreements—primarily with countries of the former Soviet Union or Eastern Europe—that contain IPR provisions (see app. II for a listing of these agreements). In addition, the U.S. government, primarily USTR and USPTO (with input from the Copyright Office), participated actively in negotiating the WTO's Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), which came into force in 1995 and broadly governs the multilateral protection of IPR. TRIPS established new or improved standards of protection in various areas of intellectual property and provides for enforcement measures. Most of the U.S. government's IPR-specific bilateral agreements and understandings were signed prior to the implementation of TRIPS or before the other country involved in each agreement joined, or acceded to, the WTO and was thus bound by TRIPS commitments. As a result, according to a USTR official, some U.S. bilateral agreements have become less relevant since TRIPS was implemented. One of USTR's priorities in recent years has been negotiating free trade agreements (FTAs). Since 2000, USTR has completed negotiations for FTAs with Australia, Bahrain, Central America, Chile, Jordan, Morocco, and Singapore. According to officials at USTR, these agreements offer protection beyond that required in TRIPS, including, for example, adherence to new WIPO Internet treaties, a longer minimum time period for copyright protection, additional penalties for circumventing technological measures controlling access to copyrighted materials, transparent procedures for protection of trademarks, stronger protection for well-known marks, patent protection for plants and animals, protection against arbitrary revocation of patents, new provisions dealing with domain name disputes, and increased enforcement measures. A formal private sector advisory committee that advises the U.S. government on IPR issues has provided feedback to the U.S. government on free trade agreement negotiations, including reports on the impact of free trade agreements on IPR industries in the United States. The U.S. government is actively involved in the activities of the WTO, WIPO, and WCO that address IPR issues. The U.S.
government participates in the WTO primarily through the efforts of the USTR offices in Washington, D.C., and Geneva and participates in WIPO activities through the Department of State's Mission to the United Nations in Geneva and through the Copyright Office and the USPTO. The Department of Homeland Security (DHS) works with the WCO on border enforcement issues. The WTO, an international organization with 147 member states, is involved with IPR primarily through its administration of TRIPS. In addition to bringing formal TRIPS disputes to the WTO (discussed in the following section on strengthened foreign IPR laws), the U.S. government participates in the WTO's TRIPS Council. The council, which is composed of all WTO members, is responsible for monitoring the operation of the TRIPS agreement and can be used by members as a forum for mutual consultation about TRIPS implementation. Recently the council has addressed issues such as TRIPS and public health. A WTO IPR official stated that the U.S. government is the most active "pro-IPR" delegate during council activities. The U.S. government is also a major contributor to reviews of WTO members' overall country trade policies; these reviews are intended to facilitate the smooth functioning of the multilateral trading system by enhancing the transparency of members' trade policies. All WTO member countries are reviewed, and the frequency of each country's review varies according to its share of world trade. According to a USTR official in Geneva, IPR is often a central topic of discussion during the trade policy reviews, and the U.S. government poses questions regarding a country's compliance with TRIPS when relevant. The United States also provides input as countries take steps to accede to the WTO, and, according to the USTR official, IPR is always a primary issue during this process. As of June 2004, 26 countries were working toward WTO accession. The Department of State, the Copyright Office, and USPTO actively participate in the activities of WIPO, a specialized United Nations agency with 180 member states that promotes the use and protection of intellectual property. Of particular note, WIPO is responsible for the creation of two "Internet treaties" that entered into force in 2002. In addition, WIPO administers the 1970 Patent Cooperation Treaty (PCT), which makes it possible to seek patent protection for an invention simultaneously in each of a large number of countries by filing an "international" patent application. According to a WIPO Vice Director General, the State Department's U.S. Mission in Geneva and USPTO work closely with WIPO, and the U.S. government has actively participated in WIPO activities and monitored the use of WIPO's budget. The Copyright Office also participates in various activities of the WIPO General Assembly and WIPO committees and groups, including the WIPO Standing Committee on Copyright and Related Rights. USPTO has participated in WIPO efforts such as the negotiation of the Internet treaties (the Copyright Office was also involved in this effort) and also conducts joint USPTO-WIPO training events. In addition, DHS works with the WCO regarding IPR protection. DHS participates in the WCO's IPR Strategic Group, which was developed as a joint venture with international business sponsors to help member customs administrations improve the efficiency and effectiveness of their IPR border enforcement programs.
The IPR Strategic Group meets quarterly to coordinate its activities, discuss current issues on IPR border enforcement, and advise member customs administrations regarding implementation of border measures under TRIPS. Further, a DHS official emphasized that DHS has been involved in drafting WCO model IPR legislation and strategic plans geared toward global IPR protection and in otherwise helping foreign countries develop the tools necessary for effective border enforcement programs. In countries where IPR problems persist, U.S. government officials maintain a regular dialogue with foreign government representatives. In addition to the bilateral discussions that are held as a result of the Special 301 process and other specific initiatives, U.S. officials address IPR as part of regular bilateral relations. We also noted that U.S. government officials at U.S. embassies overseas take the initiative, in coordination with U.S. agencies in Washington, D.C., to pursue IPR issues with foreign officials. For example, according to officials at the U.S. Embassy in Moscow, the economic section holds interagency IPR coordination meetings and has met regularly with the Russian ministry responsible for IPR issues to discuss U.S. concerns. In Ukraine, State Department officials told us that they communicate regularly with the Ukrainian government as part of a dialogue regarding the actions needed for the removal of Special 301 sanctions. U.S. embassies also undertake various public awareness activities and campaigns aimed at increasing support for intellectual property in the general public as well as among specific populations, such as law enforcement personnel, in foreign countries. Further, staff from the Departments of State and Commerce at U.S. embassies interact with U.S. companies overseas and work to assist them with commercial problems, including IPR concerns, and have at times raised specific industry concerns with foreign officials. Finally, a Justice official told us that during the past 2 years, Justice attorneys engaged high-level law enforcement officials in China, Brazil, and Poland in an effort to bolster coordination on cross-border IPR cases. Diplomatic efforts addressing IPR have also included actions by senior U.S. government officials. For example, a senior official at the Commerce Department met in 2004 with the Brazilian minister responsible for industrial property issues, such as patents and trademarks, to discuss collaboration and technical assistance opportunities. In China, the U.S. Ambassador places a great emphasis on IPR and has organized an interagency task force that will work to implement an IPR Action Plan. In addition, presidential-level communication regarding IPR has occurred with some countries. For instance, according to Department of State sources, the Presidents of the United States and Russia discussed IPR, among other issues, when they met in September 2003. Further, USTR officials told us that the Presidents of the United States and Paraguay had IPR as an agenda item when they met in the fall of 2003. Most of the agencies involved in efforts to promote or protect IPR overseas engage in some training or technical assistance activities. Key activities to develop and promote enhanced IPR protection in foreign countries are undertaken by the Departments of Commerce, Homeland Security, Justice, and State; the FBI; USPTO; the Copyright Office; and USAID. These agencies also participate in an IPR Training Coordination Group. Training events sponsored by U.S.
agencies to promote the enforcement of intellectual property rights have included enforcement programs for foreign police and customs officials, workshops on legal reform, and joint government-industry events. According to a State Department official, U.S. government agencies, including USPTO, the Department of Commerce's Commercial Law Development Program, and the Departments of Justice and Homeland Security, have conducted intellectual property training for a number of countries concerning bilateral and multilateral intellectual property commitments, including enforcement, during the past few years. For example, intellectual property training has been conducted by a number of agencies over the last year in Poland, China, Morocco, Italy, Jordan, Turkey, and Mexico. We attended a joint USPTO-WIPO training event in October 2003 in Washington, D.C., that covered U.S. and WTO patent, copyright, and trademark laws and enforcement. About 35 participants from numerous countries, ranging from supreme court judges to members of national police forces, attended the event. An official at the State Department observed that the Special 301 report is an important factor in determining training priorities. Other agency officials noted additional factors, including embassy input, cost, and the requirements of trade and investment agreements. Although individual training events are regularly sponsored by a single agency, they often involve participants from other agencies and the private sector. In addition to sponsoring seminars and short-term programs, agencies sponsor longer-term programs for developing improved intellectual property protection in other countries. For example, USAID funded two multiyear programs, the first of which began in 1996, aimed at improving the intellectual property regime in Egypt through public awareness campaigns, training, and technical assistance in developing intellectual property legislation and establishing a modern patent and trademark office. USAID has also sponsored longer-term bilateral programs that are aimed at promoting biotechnology and that address relevant IPR issues such as plant variety protection. Private sector officials in Brazil told us that they believed the longer-term programs sponsored by USAID elsewhere would be helpful in Brazil. In addition to USAID, other U.S. agencies that sponsor training also provide other types of technical assistance in support of intellectual property rights. For example, the Copyright Office and USPTO revise and provide comments on proposed IPR legislation. Training and technical assistance activities that focus more broadly on institution building, biotechnology, organized crime, and other law enforcement issues may also support improved intellectual property enforcement. A small number of agencies are involved in enforcing U.S. intellectual property laws. Working in an environment where counterterrorism is the central priority, the FBI and the Departments of Justice and Homeland Security take actions that include engaging in multicountry investigations involving intellectual property violations and seizing goods that violate intellectual property rights at U.S. ports of entry. In addition, the USITC is responsible for some enforcement activities involving patents and trademarks. Although officials at the FBI, DHS, and Justice have emphasized that counterterrorism is the overriding law enforcement priority, these agencies nonetheless undertake IPR investigations that involve foreign connections. 
For example, the Department of Justice has an office that directly addresses international IPR problems. Justice has been involved with international investigation and prosecution efforts and, according to a Justice official, has become more aggressive in recent years. For example, Justice and the FBI recently coordinated an undercover IPR investigation, with the involvement of foreign law enforcement agencies. The investigation focused on individuals and organizations, known as "warez" release groups, that specialize in the Internet distribution of pirated materials. In April 2004, these investigations resulted in 120 simultaneous searches worldwide (80 in the United States) by law enforcement entities from 10 foreign countries and the United States in an effort known as "Operation Fastlink." Law enforcement officials told us that IPR-related investigations with an international component can be initiated by, for example, industry complaints to agency headquarters or field offices. Investigations are pursued if criminal activity is suspected. U.S. officials noted that foreign law enforcement action may be encouraged by the U.S. government if an investigation results in evidence demonstrating that someone has violated U.S. law and if evidence in furtherance of the crime is located overseas. A Justice official added that international investigations are pursued when there is reason to believe that foreign authorities will take action and that additional impact, such as raising public awareness about IPR crimes, can be achieved. Evidence can be developed through investigative cooperation between U.S. and foreign law enforcement. In addition, the Justice official emphasized that the department also supports prosecutorial efforts in foreign countries. International cooperation between the United States and other countries can be facilitated through Mutual Legal Assistance Treaties (MLATs), which are designed to facilitate the exchange of information and evidence for use in criminal investigations and prosecutions. MLATs include the power to summon witnesses, compel production of documents and other real evidence, issue search warrants, and serve process. A Justice official emphasized that informal international cooperation can also be extremely productive. Although investigations can result in international actions such as those cited above, law enforcement officials from the FBI told us that they cannot determine the number of past or present IPR cases with an international component because they do not track or categorize cases according to this factor. DHS officials emphasized that a key component of their enforcement authority is a "border nexus." An investigation acquires an international component when counterfeit goods are brought into the United States, and DHS officials noted that it is a rare exception when DHS IPR investigations do not have an international component. However, DHS does not track cases by a specific foreign connection. The overall number of IPR-oriented investigations that have been pursued by foreign authorities as a result of DHS efforts is unknown. DHS seizures of goods that violated IPR totaled more than $90 million in fiscal year 2003. While the types of imported products seized have varied little from year to year (in recent years, key products have included cigarettes, wearing apparel, watches, and media products—CDs, DVDs, and tapes), the value of seizures for some of these products has varied greatly. 
For example, in fiscal year 1999, the value of seized media products—such as CDs, DVDs, and tapes—was, at nearly $40 million, notably higher than the value of any other product; by 2003, the value of seized counterfeit cigarettes, at more than $40 million, was by far the highest, while media products accounted for less than $10 million in seizures. Seizures of IPR-infringing goods have involved imports primarily from Asia. In fiscal year 2003, goods from China accounted for about two-thirds of the value of all IPR seizures, many of them shipments of cigarettes. Other seized goods from Asia that year originated in Hong Kong and Korea. DHS has highlighted particular recent seizures, such as an estimated $500,000 in electrically heated coffee mugs bearing counterfeit Underwriters Laboratories (UL) labels and an estimated $644,000 in pirated video game CDs. A DHS official pointed out that providing protection against IPR-infringing imported goods for some U.S. companies—entertainment companies in particular—can be difficult, because companies often fail to record their trademarks and copyrights with DHS. The USITC investigates and adjudicates Section 337 cases, which involve allegations of certain unfair practices in import trade, generally related to patent or registered trademark infringement. Although the cases must involve merchandise originating overseas, both complainants and respondents can be from any country as long as the complainant owns and exploits an intellectual property right in the United States. U.S. administrative law judges are responsible for hearing cases and issuing an initial decision, which is then reviewed and issued, modified, or rejected by the USITC. If a violation has occurred, remedies include directing DHS officials to exclude infringing articles from entering the United States. The USITC may issue cease-and-desist orders to the violating parties. Violations of cease-and-desist orders can result in civil penalties. As of June 2004, exclusion orders remained in effect for 51 concluded Section 337 investigations, excluding from U.S. entry goods such as certain toothbrushes, memory chips, and video game accessories that were found to violate a U.S. intellectual property right. U.S. efforts have contributed to strengthened foreign IPR laws and international IPR obligations, and, while enforcement overseas remains weak, U.S. industry groups are generally supportive of U.S. efforts. U.S. actions are viewed as aggressive, and Special 301 is characterized as a useful tool in encouraging improvements overseas. However, the specific impact of many U.S. activities, such as diplomatic efforts or training and technical assistance, can be difficult to measure. Further, despite the progress that has been achieved, enforcement of IPR in many countries remains weak and, as a result, has become a U.S. government priority. Although U.S. industries recognize that problems remain, they acknowledge the many actions taken by the U.S. government, and industry representatives that we contacted in the United States and abroad were generally supportive of the U.S. efforts to pursue intellectual property protection overseas. Several representatives of major intellectual property industry associations stated that the United States is the most aggressive promoter of intellectual property rights in the world; an IPR official at the WTO concurred with this assessment, as did foreign officials. The efforts of U.S. 
agencies have contributed to the establishment of strengthened intellectual property legislation in many foreign countries. The United States has realized progress through bilateral efforts. For example, the Special 301 review has been cited by industry as facilitating the introduction or strengthening of IPR laws around the world over the past 15 years. In the 2004 Special 301 report, USTR noted that Poland and the Philippines had recently passed optical disc legislation aimed at combating optical media piracy; the 2003 Special 301 report had cited both countries for a lack of such legislation. Special 301 is cited by USTR and industry as an effective tool in alerting a country that it has trade problems with the United States, which is a key trading partner for numerous nations. Industry and USTR officials pointed out that countries are eager to avoid being publicly classified as problem nations. Further, according to U.S. government officials, incremental "invisible" changes take place behind the scenes as countries take actions to improve their standing on the Special 301 listing prior to its publication. USTR notes that legislative improvements have been widespread but also cites other accomplishments, such as raids against pirates and counterfeiters in Poland and Taiwan, resulting from U.S. attention and the Special 301 process. However, Special 301 can have an alienating effect when countries believe they have made substantial improvements in their IPR regimes but the report still cites them as key problem countries. According to some officials we spoke with in Brazil and Ukraine, this happened in their countries. For example, although Ukrainian government officials we spoke with stated their desire to further respond to U.S. concerns, they expressed the view that the sanctions have run their course. They also said that the Ukrainian government cannot understand why Ukraine was targeted for sanctions while other countries where U.S. industry losses are higher have not been targeted. A USTR official responsible for IPR issues informed us that Ukraine was sanctioned because of IPR problems that the U.S. government views as serious. Additional bilateral measures are cited as successful in encouraging new improvements overseas in the framework for IPR protection. For example, following a 1998 U.S. executive order directing U.S. government agencies to ensure the legitimate use of software, USTR addressed this issue with foreign governments and has reportedly achieved progress in curbing this type of IPR violation. According to USTR, more than 20 foreign governments have issued decrees mandating that government ministries use only authorized software. As another example, the negotiation of FTAs has been cited by government and IPR industry officials as a useful tool, particularly as such agreements require IPR protections, including protection for digital products, beyond what is required in TRIPS. However, because most FTAs have been negotiated within the past 5 years, their long-term impact remains to be seen. U.S. efforts through multilateral forums have also had positive effects. For example, as a result of TRIPS obligations—which the U.S. government was instrumental in negotiating—many developing countries have improved their statutory systems for the protection of intellectual property. China, for instance, revised its intellectual property laws and regulations to meet its WTO TRIPS commitments. 
Further, in Ukraine and Russia, government officials told us that improvements to their IPR legislation were part of an effort to accede to the WTO. U.S. agencies have assisted other developing countries in drafting TRIPS-compliant laws. In addition, a WTO member country can bring disputes over TRIPS compliance to the WTO through that organization's dispute settlement mechanism. The U.S. government has exercised this right and brought more TRIPS cases to the WTO for resolution than any other WTO member. Since 1996, the United States has brought a total of 12 TRIPS-related cases against 11 countries and the European Community (EC) to the WTO (see app. III for a listing of these cases). Of these cases, 8 were resolved through mutually agreed solutions between the parties before going through the entire dispute settlement process—the preferred outcome, according to a USTR official. In nearly all of these cases, U.S. concerns were addressed via changes in laws or regulations by the other party. Only 2 have resulted in the issuance of a final decision, or panel report, both of which were favorable rulings for the United States. In a case involving Argentina, the dispute has been partially settled and consultations between the countries are ongoing; another case, regarding an EC regulation protecting geographical indications, is currently in panel proceedings. Although persistent U.S. efforts have contributed to positive developments, it can be difficult to precisely measure the impact of specific U.S. activities such as policy efforts or training assistance programs. U.S. activities are not conducted in isolation but are part of the spectrum of political considerations in a foreign country. Although regular efforts such as the annual Special 301 review or diplomatic contact may create incentives for countries to improve intellectual property protection, other factors, such as countries' own political interests, may contribute to or hinder improvements. Therefore, it can be difficult to measure changes resulting from U.S. efforts alone. For example, China revised its intellectual property laws as a result of its accession to the WTO. Although China had for some time been under pressure from the United States to improve its intellectual property protection, revisions to its intellectual property legislation were also called for by its newly acquired WTO commitments. Thus, it is nearly impossible to attribute any of these developments to particular factors or to precisely measure the influence of individual factors on China's decision to reform. Further, officials at the U.S. Embassy in Moscow have emphasized that the regular U.S. focus on IPR issues has raised the profile of the issue with the Russian government—a positive development. However, once again, it is difficult to determine the specific current and future effects of this development on intellectual property protection. Despite these limitations, several agency officials we spoke with said that these activities are important and contribute to incremental changes in IPR protection (such as legislative improvements to Russia's copyright law that were enacted in July 2004). A Commerce official also noted that regular contacts by U.S. government officials with their foreign counterparts have apparently helped some individual U.S. companies seeking to defend patent or trademark rights overseas by reminding foreign officials that their administrative proceedings for such protection are under U.S. scrutiny. 
Regarding training activities, officials at agencies that provide regular training reported using post-training questionnaires completed by attendees to evaluate the training events, but several noted that, beyond these efforts, assessing the impact of training is challenging. An official at USPTO stated that although he does not believe it is possible to quantify fully the impact of USPTO training programs, accumulated anecdotal evidence from embassies and the private sector has led the office to believe that the activities are useful and have resulted in improvements in IPR enforcement. USPTO recently began sending impact evaluation questionnaires to training attendees 1 year after the training, to try to gather more information on long-term impact. However, a low response rate has thus far limited the effectiveness of this effort. Officials from the Departments of State and Commerce also pointed out anecdotal evidence that training and technical assistance activities are having a positive impact on the protection of intellectual property overseas. Although some industry officials raised criticisms or offered suggestions for improving training, including using technology to offer more long-distance training and encouraging greater USAID involvement in coordination efforts, many were supportive of U.S. training efforts. Despite improvements in intellectual property laws, the enforcement of intellectual property rights remains weak in many countries, and U.S. government and industry sources note that improving enforcement overseas is now a key priority. USTR's most recent Special 301 report states that "although several countries have taken positive steps to improve their IPR regimes, the lack of IPR protection and enforcement continues to be a global problem." For example, although the Chinese government has improved its statutory IPR regime, USTR remains concerned about enforcement in that country. According to USTR, counterfeiting and piracy remain rampant in China and increasing amounts of counterfeit and pirated products are being exported from China. USTR's 2004 Special 301 report states that "[a]ddressing weak IPR protection and enforcement in China is one of the Administration's top priorities." Further, Brazil has adopted modern copyright legislation that appears to be generally consistent with TRIPS, but it has not undertaken adequate enforcement actions, according to USTR's 2003 Special 301 report. In addition, as noted above, although Ukraine has shut down offending domestic optical media production facilities, pirated products continue to pervade Ukraine, and, according to USTR's 2004 Special 301 report, Ukraine is also a major trans-shipment point and storage location for illegal optical media produced in Russia and elsewhere as a result of weak border enforcement efforts (see fig. 1). An industry official pointed out that addressing foreign enforcement problems is a difficult issue for the U.S. government. Although U.S. law enforcement does undertake international cooperative activities to enforce intellectual property rights overseas, executing these efforts can prove difficult. For example, according to DHS and Justice officials, U.S. efforts to investigate IPR violations overseas are complicated by a lack of jurisdiction as well as by the fact that U.S. officials must convince foreign officials to take action. Further, a DHS official noted that in some cases, activities defined as criminal in the United States are not viewed as an infringement by other countries, and U.S. 
law enforcement agencies can therefore do nothing. In particular, this official cited China as a country that has not cooperated in investigating IPR violations. However, according to DHS, recently the Chinese government assisted DHS in an undercover IPR criminal investigation (targeting a major international counterfeiting network that distributed counterfeit motion pictures worldwide) that resulted in multiple arrests and seizures. While less constrained than law enforcement, training and technical assistance activities may also be unable to achieve the desired improvements in IPR enforcement in some cases, even when considerable U.S. assistance is provided. For example, despite USAID’s long-term commitment to strengthening IPR protection in Egypt with training and technical assistance programs, Egypt was elevated to the Priority Watch List in the 2004 Special 301 report and IPR enforcement problems clearly persist. Despite the weakness of IPR enforcement in many countries, industry groups representing intellectual property concerns for U.S. industries we contacted were generally supportive of U.S. government efforts to protect U.S. intellectual property overseas. Numerous industry representatives in the U.S. and overseas expressed satisfaction with a number of U.S. activities as well as with their interactions and collaborations with U.S. agencies and embassies in support of IPR. Industry representatives have been particularly supportive of the Special 301 process, and many credited it for IPR improvements worldwide. According to an official from a key industry association, Special 301 “is a great statutory tool, it leads to strong and effective interagency coordination, and it gets results.” Industry associations overseas and in the U.S. support the Special 301 process with information based on their experiences in foreign countries. An entertainment software industry official stated that the U.S. government has “consistently demonstrated their strong and continuing commitment to creators…pressing for the highest attainable standards of protection for intellectual property rights….One especially valuable tool has been the Special 301 review process.” Other representatives have advocated increased use of leverage provided by trade preference programs, particularly the GSP program. Industry association officials in the United States and private sector officials in Brazil, Russia, and Ukraine also expressed support for U.S. IPR training activities, despite limited evidence of long-term impact. Industry associations regularly collaborate with U.S. agencies to sponsor and participate in training events for foreign officials. A number of government and law enforcement officials in our case study countries commented that training and seminars sponsored by the U.S. government were valuable as forums for learning about IPR. Others, including private sector officials, commented on the importance of training as an opportunity for networking with other officials and industry representatives concerned with IPR enforcement. Nonetheless, some industry officials acknowledged that U.S. actions cannot always overcome challenges presented by political and economic factors in other countries. Industry support occurs in an environment where, despite improvements such as strengthened foreign IPR legislation, the situation may be worsening overall for some intellectual property sectors. For example, according to copyright industry estimates, losses due to piracy grew markedly in recent years. 
The entertainment and business software sectors, for example, which are very supportive of USTR and other agencies, face an environment where their optical media products are increasingly easy to reproduce, and digitized products can be distributed around the world quickly and easily via the Internet. According to an intellectual property association representative, trademark counterfeiting has also become more pervasive in recent years. Counterfeiting affects more than just luxury goods; it also affects various industrial goods. Several interagency mechanisms exist to coordinate overseas intellectual property policy initiatives, development and assistance activities, and law enforcement efforts, although these mechanisms' levels of activity and usefulness vary. The mechanisms include interagency coordination on trade issues, including IPR; the IPR Training Coordination Group, which maintains a database of training activities; the National Intellectual Property Law Enforcement Coordination Council; and the National IPR Coordination Center. Apart from formal coordination bodies, regular, informal communication and coordination regarding intellectual property issues also occur among policy agencies in the United States and in overseas embassies and are viewed as important to the coordination process. According to government and industry officials, an interagency trade policy mechanism established by Congress has operated effectively in reviewing IPR issues (see fig. 2). The Congress established the mechanism in 1962 to assist USTR in developing policy on trade and trade-related investment, and the annual Special 301 review is conducted through this mechanism. Three tiers of committees constitute the principal mechanism for developing and coordinating U.S. government positions on international trade, including IPR. The Trade Policy Review Group (TPRG) and the Trade Policy Staff Committee (TPSC), administered and chaired by USTR, are the subcabinet interagency trade policy coordination groups that participate in trade policy development. More than 80 working-level subcommittees are responsible for providing specialized support for the TPSC. One of the specialized subcommittees is central to conducting the annual Special 301 review and determining the results of the review. During the 2004 review, which began early in the year, the Special 301 subcommittee met formally seven times, according to a USTR official. The subcommittee reviewed responses to a Federal Register request for information about intellectual property problems around the world; it also reviewed responses to a cable sent to every U.S. embassy soliciting specific information on IPR issues. IPR industry associations provided lengthy, detailed submissions to the U.S. government during the Special 301 review; such submissions identify IPR problems in countries around the world and are an important component in determining which countries will be cited in the final report. After reaching its own decisions on country placement, the subcommittee submitted its proposal to the Trade Policy Staff Committee. The TPSC met twice and submitted its recommendations to the TPRG for final approval. The TPRG reached a final decision via e-mail, and the results of this decision were announced with the publication of the Special 301 report on May 3, 2004. The 2004 process is considered typical of how the annual review is conducted. In addition, this subcommittee can meet at other times to address IPR issues as appropriate. 
According to U.S. government and industry officials, this interagency process is rigorous and effective. A USTR official stated that the Special 301 subcommittee is very active, and subcommittee leadership demonstrates a willingness to revisit issues raised by other agencies and reconsider positions. A Commerce official told us that the Special 301 review is one of the best tools for interagency coordination in the government and that the review involves a “phenomenal” amount of communication. A Copyright Office official noted that coordination during the review is frequent and effective. A representative for copyright industries also told us that the process works well and is a solid interagency effort. The IPR Training Coordination Group, intended to inform its participants about IPR training activities and facilitate collaboration, developed a database to record and track training events, but we found that the database was incomplete. This voluntary, working-level group comprises representatives of U.S. agencies and industry associations involved in IPR programs and training and technical assistance efforts overseas or for foreign officials. Meetings are held approximately every 4 to 6 weeks and are well attended by government and private sector representatives. The State Department leads the group and supplies members with agendas and meeting minutes. Training Coordination Group meetings in 2003 and 2004 have included discussions on training “best practices,” responding to country requests for assistance, and improving IPR awareness among embassy staff. According to several agency and private sector participants, the group is a useful mechanism that keeps participants informed of the IPR activities of other agencies or associations and provides a forum for coordination. Since it does not independently control budgetary resources, the group is not responsible for sponsoring or evaluating specific U.S. government training events. One agency official noted that, partly owing to the lack of funding coordination, the training group serves more as a forum to inform others regarding already-developed training plans than as a group to actively coordinate training activities across agencies. Officials at the Department of Commerce’s Commercial Law Development Program and USPTO commented that available funds, more than actual country needs, often determine what training they are able to offer. A private sector official also voiced this concern, and several agency and industry officials commented that more training opportunities were needed. A Justice official also noted that if there were more active interagency consultations, training could be better targeted to countries that need criminal enforcement training. The Training Coordination Group helped develop a public training database, which it uses as a resource to identify planned activities and track past efforts. However, although the database has improved in recent years to include more training events and better information, it remains incomplete. Officials from the Copyright Office and USPTO stated that the database should contain records of all of their training efforts, but officials from other agencies, including the Departments of Commerce, State, and Justice, and the FBI, acknowledged that it might not record all the training events they have conducted. 
Although the group's meetings help keep the database updated by identifying members' upcoming training events that have not yet been entered into the database, training activities that are not raised at the meetings or that are sponsored by embassies or by an agency not in attendance may be overlooked. In addition, USAID submits training information only once per year at the conclusion of its own data-gathering exercise. Since USAID is a major sponsor of training activities—in 2002, according to the database, USAID sponsored or cosponsored nearly one-third of the total training events—the lack of timely information is notable. Several members expressed frustration that USAID does not contribute to the database regularly or inform the group about training occurring through its missions. USAID officials noted that the decentralization of their agency makes it difficult for them to address these concerns, because individual missions plan and implement training and technical assistance activities independently. The National Intellectual Property Law Enforcement Coordination Council (NIPLECC), created by the Congress in 1999 to coordinate domestic and international intellectual property law enforcement among U.S. federal and foreign entities, seems to have had little impact. NIPLECC consists of (1) the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office; (2) the Assistant Attorney General, Criminal Division; (3) the Under Secretary of State for Economic and Agricultural Affairs; (4) the Deputy United States Trade Representative; (5) the Commissioner of Customs; and (6) the Under Secretary of Commerce for International Trade. NIPLECC is also required to consult with the Register of Copyrights on law enforcement matters relating to copyright and related rights and matters. NIPLECC's authorizing legislation did not include the FBI as a member of the council, despite the bureau's pivotal role in law enforcement. However, according to representatives of the FBI, USPTO, and Justice, the FBI should be a member. NIPLECC, which has no independent staff or budget, is cochaired by USPTO and Justice. In the council's nearly 4 years of existence, its primary output has been three annual reports to the Congress, which are required by statute. In its first year, according to the first annual report, NIPLECC met four times to begin shaping its agenda. It also consulted with industry and accepted written comments from the public regarding what matters the council should address and how it should structure council-industry cooperation. It drafted a working paper detailing draft goals and proposed activities for the council. Goals and activities identified in the first report were "draft" only, because of the imminent change in administration. Although left open for further consideration, the matters raised in this report were not specifically addressed in any subsequent NIPLECC reports. 
NIPLECC’s second annual report states that the council’s mission includes “law enforcement liaison, training coordination, industry and other outreach and increasing public awareness.” In particular, the report says, the council “determined that efforts should focus on a campaign of public awareness, at home and internationally, addressing the importance of protecting intellectual property rights.” However, other than a one-page executive summary of NIPLECC’s basic mission, the body of the second annual report consists entirely of individual agencies’ submissions on their activities and details no activities undertaken by the council. NIPLECC met twice in the year between the first and second reports. The third annual report also states that “efforts should focus on a campaign of public awareness, at home and internationally, addressing the importance of intellectual property rights.” Although this is identical to the language in the previous year’s report, there is little development of the theme, and no evidence of actual progress over the course of the previous year. Like the previous year’s report, other than a single-page executive summary, the body of the report consists of individual agency submissions detailing agency efforts, not the activities or intentions of the council. The report does not provide any detail about how NIPLECC has, in its third year, coordinated domestic and international intellectual property law enforcement among federal and foreign entities. Under its authorizing legislation, NIPLECC has a broad mandate. According to interviews with industry officials and officials from NIPLECC member agencies, and as evidenced by its own legislation and reports, NIPLECC continues to struggle to define its purpose and has as yet had little discernable impact. Indeed, officials from more than half of the member agencies offered criticisms of the NIPLECC, remarking that it is unfocused, ineffective, and “unwieldy.” In official comments to the council’s 2003 annual report, major IPR industry associations expressed a sense that NIPLECC is not undertaking any independent activities or effecting any impact. One industry association representative stated that there is a need for law enforcement to be made more central to U.S. IPR efforts and said that although he believes the council was created to deal with this issue, it has “totally failed.” The lack of communication regarding enforcement results in part from complications such as concerns regarding the sharing of sensitive law enforcement information and from the different missions of the various agencies involved in intellectual property actions overseas. According to an official from USPTO, NIPLECC is hampered primarily by its lack of independent staff and funding. He noted, for example, a proposed NIPLECC initiative for a domestic and international public awareness campaign that has not been implemented owing to insufficient funds. According to a USTR official, NIPLECC needs to define a clear role in coordinating government policy. A Justice official stressed that, when considering coordination, it is important to avoid creating an additional layer of bureaucracy that may detract from efforts devoted to each agency’s primary mission. This official also commented that while NIPLECC’s stated purpose of enhancing interagency enforcement coordination has not been achieved, the shortcomings of NIPLECC should not suggest an absence of effective interagency coordination elsewhere. 
Despite NIPLECC’s difficulties thus far, we heard some positive comments regarding this group. For example, an official from USPTO noted that the IPR training database web site resulted from NIPLECC efforts. Further, an official from the State Department commented that NIPLECC has had some “trickle-down” effects, such as helping to prioritize the funding and development of the intellectual property database at the State Department. Although NIPLECC principals meet infrequently and NIPLECC has undertaken few concrete activities, this official noted that NIPLECC is the only forum for bringing enforcement, policy, and foreign affairs agencies together at a high level to discuss intellectual property issues. A USPTO official stated that NIPLECC has potential, but needs to be “energized.” The National IPR Coordination Center (the IPR Center) in Washington, D.C., a joint effort between DHS and the FBI, began limited operations in 2000. According to a DHS official, the coordination between DHS, the FBI, and industry and trade associations makes the IPR Center unique. The IPR Center is intended to serve as a focal point for the collection of intelligence involving copyright and trademark infringement, signal theft, and theft of trade secrets. Center staff analyze intelligence that is collected through industry referrals of complaints (allegations of IPR infringements) and, if criminal activity is suspected, provide the information for use by FBI and DHS field components. The FBI at the IPR Center holds quarterly meetings with 11 priority industry groups to discuss pressing issues on violations within the specific jurisdiction of the FBI. Since its creation, the IPR Center has received 300 to 400 referrals, according to an IPR Center official. The center is also involved in training and outreach activities. For example, according to IPR Center staff, between May 2003 and April 2004, personnel from the center participated in more than 16 IPR training seminars and conducted 22 outreach events. The IPR Center is not widely used by industry. An FBI official associated with the IPR Center estimated that about 10 percent of all FBI industry referrals come through the center rather than going directly to FBI field offices. DHS officials noted that “industry is not knocking the door down” and that the IPR Center is perceived as underutilized. An FBI official noted that the IPR Center is functional but that it generally provides training, outreach, and intelligence to the field rather than serving as a primary clearinghouse for referral collection and review. The IPR Center got off to a slow start partly because, according to an FBI official, after the events of September 11, 2001, many IPR Center staff were reassigned, and the center did not become operational until 2002. The IPR Center is authorized for 24 total staff (16 from DHS and 8 from the FBI); as of July 2004, 20 staff (13 DHS, 7 FBI) were “on board” at the center, according to an IPR Center official. This official noted that the center’s use has been limited by the fact that big companies have their own investigative resources, and not all small companies are familiar with the IPR Center. In addition to the formal coordination efforts described, policy agency officials noted the importance of informal but regular communication among staff at the various agencies involved in the promotion or protection of intellectual property overseas. 
Several officials at various policy-oriented agencies, such as USTR and the Department of Commerce, noted that the intellectual property community was small and that all involved were very familiar with the relevant policy officials at other agencies in Washington, D.C. One U.S. government official said, "No one is shy about picking up the phone." Further, State Department officials at U.S. embassies also regularly communicate with Washington, D.C., agencies regarding IPR matters and U.S. government actions. Agency officials noted that this type of coordination is central to pursuing U.S. intellectual property goals overseas. Although communication between policy and law enforcement agencies can occur through forums such as NIPLECC, these agencies do not share specific information about law enforcement activities systematically. According to an FBI official, once a criminal investigation begins, case information stays within the law enforcement agencies and is not shared. A Justice official emphasized that criminal enforcement is fundamentally different from the activities of policy agencies and that restrictions exist on Justice's ability to share investigation information, even with other U.S. agencies. Law enforcement agencies share investigation information with other agencies on an "as-needed" basis, and a USTR official said that there is no systematic means for obtaining information on law enforcement cases with international implications. An official at USPTO commented that coordination between policy and law enforcement agencies should be "tighter" and that both policy and law enforcement could benefit from improved communication. For example, in helping other countries draft IPR laws, policy officials could benefit from information on potential law enforcement obstacles identified by law enforcement officials. Officials at the Department of State and USTR identified some formal and informal ways that law enforcement information may be incorporated into policy discussions and activities. They noted that enforcement agencies such as Justice and DHS participate in the formal Special 301 review and that officials at embassies or policy agencies consult and make use of the publicly available DHS seizure data on IPR-violating products. For example, a USTR official told us that USTR had raised the issue of seizures at U.S. borders in bilateral discussions with the Chinese. The discussions addressed time-series trends, on both an absolute and a percentage basis, in the overall seizure figures available from DHS. This official noted that the agency will generally raise seizure figures with a foreign country if that country is a major violator, has consistently remained near the top of the list of violators, and/or has increasingly been the source of seized goods. In addition, a Justice official noted that the department increasingly engages in policy activities, such as the Special 301 annual review and the negotiation of free trade agreements, as well as training efforts, to improve coordination between policy and law enforcement agencies and to strengthen international IPR enforcement. The impact of U.S. activities is challenged by numerous factors. For example, internally, competing U.S. policy objectives can affect how much the U.S. government can accomplish. Beyond internal factors, the willingness of a foreign country to cooperate in improving its IPR is affected by that country's domestic policy objectives and economic interests, which may complement or conflict with U.S. objectives. 
In addition, many economic factors, including low barriers to entering the counterfeiting and piracy business and large price differences between legitimate and fake goods, as well as problems such as organized crime, pose challenges to U.S. and foreign governments' efforts, even in countries where the political will for protecting intellectual property exists. Because intellectual property protection is one among many objectives that the U.S. government pursues overseas, it is viewed in the context of broader U.S. foreign policy interests where other objectives may receive a higher priority at certain times in certain countries. Industry officials with whom we met noted, for example, their belief that policy priorities related to national security were limiting the extent to which the United States undertook activities or applied diplomatic pressure related to IPR issues in some countries. Officials at the Department of Justice and the FBI also commented that counterterrorism, not IPR, is currently the key priority for law enforcement. Further, although industry is supportive of U.S. efforts, many industry representatives commented that U.S. agencies need to increase the resources available to better address IPR issues overseas. The impact of U.S. activities is affected by a country's own domestic policy objectives and economic interests, which may complement or conflict with U.S. objectives. U.S. efforts are more likely to be effective in encouraging government action or achieving impact in a foreign country if support for intellectual property protection exists there. Groups in a foreign country whose interests align with those of the United States can bolster U.S. efforts. For example, combating music piracy in Brazil has gained political attention and support because Brazil has a viable domestic music industry and thus has domestic interests that have become victims of widespread piracy. Further, according to a police official in Rio de Janeiro, efforts to crack down on street vendors are motivated by the loss of tax revenues from the informal economy. The unintended effect of these local Brazilian efforts has been a crackdown on counterfeiting activities, because the informal economy is often involved in selling pirated and counterfeit goods on the streets. Likewise, the Chinese government has been working with a U.S. pharmaceutical company on medicine safety training to reduce the amount of fake medicines produced in China (see fig. 3). However, U.S. efforts are less likely to achieve impact if no such domestic support exists in other nations. Although U.S. options such as removing trade preference program benefits, considering trade sanctions, or visibly publicizing weaknesses in foreign IPR protection can provide incentives for increased protection of IPR, such policies alone may not be sufficient to counter existing incentives in foreign countries. In addition, officials in some countries view providing strong intellectual property protection as an impediment to development. A report by the Commission on Intellectual Property Rights (established by the British government) points out that strong IPR protection can allow foreign firms selling to developing countries to drive out domestic competition by obtaining patent protection and to serve the market through imports rather than domestic manufacture, and that it can increase the costs of essential medicines and agricultural inputs, with particularly negative effects on poor people and farmers. 
A lack of “political will” to enact IPR protections makes it difficult for the U.S. government to achieve impact in locations where a foreign government maintains such positions. Many economic factors complicate and challenge U.S. and foreign governments’ efforts, even in countries where the political will for protecting intellectual property exists. These factors include low barriers to entering the counterfeiting and piracy business and potentially high profits for producers. For example, one industry pointed out that it is much more profitable to buy and resell software than to sell cocaine. In addition, the low prices of fake products are attractive to consumers. The economic incentives can be especially acute in countries where people have limited income. Moreover, technological advances allowing for high-quality inexpensive and accessible reproduction and distribution in some industries have exacerbated the problem. Further, many government and industry officials also believe the chance of getting caught for counterfeiting and piracy, as well as the penalties even if caught, are too low. For example, FBI officials pointed out that domestic enforcement of intellectual property laws has been weak, and consequently the level of deterrence has been inadequate. These officials said that criminal prosecutions and serious financial penalties are necessary to deter intellectual property violations. The increasing involvement of organized crime in the production and distribution of pirated products further complicates enforcement efforts. Federal and foreign law enforcement officials have linked intellectual property crime to national and transnational organized criminal operations. According to the Secretary General of Interpol, intellectual property crime is now dominated by criminal organizations, and law enforcement authorities have identified some direct and some alleged links between intellectual property crime and paramilitary and terrorist groups. Justice Department officials noted that they are aware of the allegations linking intellectual property crime and terrorist funding and that they are actively exploring all potential avenues of terrorist financing, including through intellectual property crime. However, to date, U.S. law enforcement has not found solid evidence that intellectual property has been or is being pirated in the United States by or for the benefit of terrorists. The involvement of organized crime increases the sophistication of counterfeiting operations, as well as the challenges and threats to law enforcement officials confronting the violations. Moreover, according to officials in Brazil, organized criminal activity surrounding intellectual property crime is linked with official corruption, which can pose an additional obstacle to U.S. and foreign efforts to promote enhanced enforcement. Many of these challenges are evident in the optical media industry, which includes music, movies, software, and games. Even in countries where interests exist to protect domestic industries, such as the domestic music industry in Brazil or the domestic movie industry in China, economic and law enforcement challenges can be difficult to overcome. For example, the cost of reproduction technology and copying digital media is low, making piracy an attractive employment opportunity, especially in a country where formal employment is hard to obtain. According to the Business Software Alliance, a CD recorder is relatively inexpensive (less than $1,000). 
The huge price differentials between pirated CDs and legitimate copies also create incentives on the consumer side. For example, when we visited a market in Brazil, we observed that the price for a legitimate DVD was approximately ten times the price for a pirated DVD. Even if consumers are willing to pay extra to purchase the legitimate product, they may not do so if the price difference between similar products is too great. We found that music companies have experimented with lowering the price of legitimate CDs in Russia and Ukraine. A music industry representative in Ukraine told us that this strategy is intended to make legitimate products truly affordable to consumers. However, whether this program is successful in gaining market share and reducing sales of pirated CDs is unclear. During our visit to a large Russian marketplace, a vendor encouraged us to purchase a pirated CD despite the fact that she also had the same CD for sale under the legitimate reduced-price program. Further, the potentially high profit makes optical media piracy an attractive venture for organized criminal groups. Industry and government officials have noted criminal involvement in optical media piracy and the resulting law enforcement challenges. Recent technological advances have also exacerbated optical media piracy. The mobility of production equipment makes it easy to move operations to another location, further complicating enforcement efforts. Industry and government officials described this phenomenon as the "whack-a-mole" problem, noting that when progress is made in one country, piracy operations often simply move to a neighboring location. According to a Ukrainian official, many production facilities moved to Russia after Ukraine started closing down CD plants. These economic incentives and technological developments have resulted in particularly high rates of piracy in the optical media sector. Likewise, the Internet provides a means to transmit and sell illegal software or music on a global scale. According to an industry representative, the ability of Internet pirates to hide their identities or operate from remote jurisdictions often makes it difficult for IPR holders to find them and hold them accountable. To seek improved protection of U.S. intellectual property in foreign countries, U.S. agencies make use of a wide array of tools and opportunities, ranging from routine discussions with foreign government officials, to trade sanctions, to training and technical assistance, to presidential-level dialogue. The U.S. government has demonstrated a commitment to addressing IPR issues in foreign countries using multiple agencies and U.S. embassies overseas. However, law enforcement actions are more restricted than other U.S. activities, owing to factors such as a lack of jurisdiction overseas to enforce U.S. law. U.S. agencies and industry communicate regularly, and industry provides important support for various agency activities. Although the results of U.S. efforts to secure improved intellectual property protection overseas often cannot be precisely identified, the U.S. government is clearly and consistently engaged in this area and has had a positive impact. Agency and industry officials have cited the Special 301 review most frequently as the U.S. government tool that has facilitated IPR improvements overseas. The effects of U.S. actions are most evident in strengthened foreign IPR legislation and new international obligations. Industry clearly supports U.S. 
efforts, recognizing that they have contributed to improvements such as strengthened IPR laws overseas. U.S. efforts are now focused on enforcement, since effective enforcement is often the weak link in intellectual property protection overseas and the situation is deteriorating for some industries. Several IPR coordination mechanisms exist, with the interagency coordination that occurs during the Special 301 process standing out as the most significant and active. Of note, the Training Coordination Group is a completely voluntary effort and is generally cited as a positive development. Further, the database created by this group is useful, although it remains incomplete. Conversely, the mechanism for coordinating intellectual property law enforcement, NIPLECC, has accomplished little that is concrete. Currently, little compelling information demonstrates a unique role for this group, calling its effectiveness into question. In addition, it does not include the FBI, a primary law enforcement agency. Members, including NIPLECC leadership, have repeatedly acknowledged that the group continues to struggle to find an appropriate mission. As agencies continue to pursue IPR improvements overseas, they will face daunting challenges. These challenges include the need to create political will overseas, recent technological advancements that facilitate the production and distribution of counterfeit and pirated goods, and powerful economic incentives for both producers and consumers, particularly in developing countries. Further, as the U.S. government focuses increasingly on enforcement, it will face different and complex factors, such as organized crime, that may prove quite difficult to address. Because the authorizing legislation for the National Intellectual Property Law Enforcement Coordination Council (NIPLECC) does not clearly define the council's mission, NIPLECC has struggled to establish its purpose and unique role. If the Congress wishes to maintain NIPLECC and increase its effectiveness, it may wish to consider reviewing the council's authority, operating structure, membership, and mission. Such consideration could help NIPLECC identify appropriate activities and operate more effectively in coordinating intellectual property law enforcement issues. We received technical comments from USTR; the Departments of State, Justice, and Homeland Security; the Copyright Office; and USITC. We incorporated these comments into the report as appropriate. We also received formal comment letters from the Department of Commerce (which includes comments from USPTO), the Department of Homeland Security, and USAID. USAID raised concerns regarding our findings on the agency's contribution to an online IPR training database. No agency disagreed with our overall findings and conclusions, though all suggested several wording changes or additions to improve the report's completeness and accuracy. The FBI provided no comments on the draft report. As arranged with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other interested committees. We will also provide copies to the Secretaries of State, Commerce, and Homeland Security; the Attorney General; the U.S. Trade Representative; the Director of the Federal Bureau of Investigation; the Director of the U.S. 
Patent and Trademark Office; the Register of Copyrights; the Administrator of the U.S. Agency for International Development; and the Chairman of the U.S. International Trade Commission. We will make copies available to other interested parties upon request. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix XI. The Chairmen of the House Committees on Government Reform, International Relations, and Small Business requested that we review U.S. government efforts to improve intellectual property protection overseas. This report addresses (1) the specific efforts that U.S. agencies have undertaken; (2) the impact, and industry views, of these actions; (3) the means used to coordinate these efforts; and (4) the challenges that these efforts face in generating their intended impact. To describe agencies’ efforts, as well as the impact of these efforts, we analyzed key U.S. government intellectual property reports, such as the annual “Special 301” reports for the years 1994 through 2004, and reviewed information available from databases such as the State Department’s intellectual property training database and the Department of Homeland Security’s online database of counterfeit goods seizures. To assess the reliability of the online Department of Homeland Security seizure data (www.cbp.gov/xp/cgov/import/commercial_enforcement/ipr/seizure/), we interviewed the officials responsible for collecting the data and performed reliability checks on the data. Although we found that the agency had implemented a number of checks and controls to ensure the data’s reliability, we also noted some limitations in the precision of the estimates. However, we determined that the data were sufficiently reliable to provide a broad indication of the major products seized and the main country from which the seized imports originated. Our review of the reliability of the State Department’s training database is described below as part of our work to review agency coordination. While we requested a comprehensive listing of countries assessed and of GSP benefits removed due to IPR problems, USTR was unable to provide us with such data because this information is not regularly collected. We met with officials from the Departments of State, Commerce, Justice, and Homeland Security; the Office of the U.S. Trade Representative (USTR); the U.S. Patent and Trademark Office (USPTO); the Copyright Office; the Federal Bureau of Investigation (FBI); the U.S. International Trade Commission (USITC); and the U.S. Agency for International Development (USAID). We also met with officials from the following industry groups that address intellectual property issues: the International Intellectual Property Alliance, the International AntiCounterfeiting Coalition, the Motion Picture Association of America, the Recording Industry Association of America, the Entertainment Software Association, the Association of American Publishers, the Software and Information Industry Association, the International Trademark Association, the Pharmaceutical Research and Manufacturers of America, and the National Association of Manufacturers. We reviewed reports and testimonies that such groups had prepared. In addition, we attended a private sector intellectual property rights enforcement conference and a U.S. government training session sponsored by USPTO and the World Intellectual Property Organization (WIPO). 
We met with officials from the World Trade Organization (WTO) and WIPO in Geneva, Switzerland, to discuss their interactions with U.S. agency officials. We reviewed literature modeling trade damages due to intellectual property violations and, in particular, examined the models used to estimate such losses in Ukraine, which has been subject to U.S. trade sanctions since 2002. We met with officials to discuss the methodologies and processes employed in the Ukraine sanction case. To identify the impact of trade sanctions against Ukraine, we studied the U.S. overall imports from Ukraine as well as imports of commodities on the sanction list from Ukraine from 2000 to 2003. Finally, to verify information provided to us by industry and agency officials and obtain detailed examples of U.S. government actions overseas and the results of those actions, we traveled to four countries where serious IPR problems have been identified—Brazil, China, Russia, and Ukraine—and where the U.S. government has taken measures to address these problems. We met with U.S. embassy and foreign government officials and with U.S. companies and industry groups operating in those countries. To choose the case study countries, we evaluated countries according to a number of criteria that we established, including the extent of U.S. government involvement; the economic significance of the country and seriousness of the intellectual property problem; the coverage of key intellectual property areas (patent, copyright, and trademark) and industries (e.g., optical media, pharmaceuticals); and agency and industry association recommendations. We collected and reviewed U.S. government and industry documents in these countries. To describe and assess the coordination mechanisms for U.S. efforts to address intellectual property rights (IPR) overseas, we identified formal coordination efforts (mandated by law, created by executive decision, or occurring and documented on a regular basis) and reviewed documents describing agency participation, mission, and activities. We interviewed officials from agencies participating in the Special 301 subcommittee of the Trade Policy Staff Committee, the National Intellectual Property Law Enforcement Coordination Council, the IPR Training Coordination Group, and the IPR Center. While USTR did provide GAO with a list of agencies that participated in Special 301 subcommittee meetings during the 2004 review, USTR officials requested that we not cite this information in our report on the grounds that this information is sensitive. USTR asked that we instead list all the agencies that are invited to participate in the TPSC process, though agency officials acknowledged that, based upon their own priorities, not all agencies actually participate. We also met with officials from intellectual property industry groups who participate in the IPR Training Coordination Group and who are familiar with the other agency coordination efforts. We attended a meeting of the IPR Training Coordination Group to witness its operations, and we visited the IPR Center. To further examine the coordination of agency training efforts, we conducted a data reliability assessment of the IPR Training Database (www.training.ipr.gov) to determine whether it contained an accurate and complete record of past and planned training events. 
To assess the completeness and reliability of the training data in the database, we spoke with officials at the Department of State about the management of the database and with officials at the agencies about how data are entered in the database. We also conducted basic tests of the data’s reliability, including checking whether agencies had entered information on training events in the database and whether that information appeared accurate. We assessed the reliability of these data to determine how useful they are to the agencies that provide IPR training, rather than to report the data themselves. As noted on pages 34 and 35, we determined that these data had some problems of timeliness and completeness, which limited their usefulness. Finally, we compared the data with documents containing similar information, provided by some of the agencies, to check the data’s consistency. To identify other forms of coordination, we spoke with U.S. agency officials about informal coordination and communication apart from the formal coordination bodies cited above. To identify the challenges that agencies’ activities face in generating their intended impact, we spoke with private sector and embassy personnel in the case study countries about political and economic circumstances relevant to intellectual property protection and the impact of these circumstances on U.S. activities. We also spoke with law enforcement personnel at the Departments of Justice and Homeland Security, the FBI, and foreign law enforcement agencies in Washington, D.C., and our case study countries about the challenges they face in combating intellectual property crime overseas. We visited markets in our case study countries where counterfeit and pirated merchandise is sold to compare local prices for legitimate and counterfeit products and to confirm (at times with industry experts present) that counterfeit goods are widely and easily available. We reviewed embassy cables, agency and industry reports, and congressional testimony provided by agency, industry, and overseas law enforcement officials documenting obstacles to progress in IPR protection around the world. We reviewed studies and gathered information at our interviews on the arguments for and against IPR protection in developing countries. In addition to the general discussion, we chose the optical media sector to illustrate the challenges facing antipiracy efforts. To identify the challenges, we interviewed industry representatives from the optical media sector both in the United States and overseas regarding their experiences in fighting piracy. We reviewed Special 301 reports and industry submissions to study optical media piracy levels over time. In Brazil, Russia, and Ukraine, we recorded the prices of legal and illegal music CDs, movies, and software at local markets. We used data on overall U.S. imports from Ukraine and on U.S. imports from Ukraine of the products on the sanction list. The source of the overall import data is the U.S. Bureau of the Census, and the source of the import data for the products on the sanction list is the Trade Policy Information System (TPIS), a Web site operated by the Department of Commerce. In order to assess the reliability of the overall import data, we (1) reviewed “U.S. Merchandise Trade Statistics: A Quality Profile” by the Bureau of the Census and (2) discussed the data with the Chief Statistician at GAO. We determined the data to be sufficiently reliable for our purpose, which was to track the changes in U.S. 
overall imports from Ukraine from 2000 through 2003. In order to assess the reliability of the data from TPIS, we did internal checks on the data and checked the data against a Bureau of the Census publication. We determined the data to be sufficiently reliable for our purpose, which was to track changes in U.S. imports from Ukraine of the goods on the sanction list. We conducted our work in Washington, D.C.; Geneva, Switzerland; Brasilia, Rio de Janeiro, and Sao Paulo, Brazil; Beijing, China; Moscow, Russia; and Kiev, Ukraine, from June 2003 through July 2004, in accordance with generally accepted government auditing standards. Since the implementation of the WTO Agreement on Trade-Related Aspects of Intellectual Property (TRIPS) in 1996, the United States has brought a total of 12 TRIPS-related cases against 11 countries and the European Community (EC) to the WTO through that organization’s dispute settlement mechanism (see below). Of these, 8 cases were resolved by mutually agreed solutions. In nearly all of these cases, U.S. concerns were addressed via changes in laws or regulations by the other party. Only 2 (involving Canada and India) have resulted in the issuance of a panel report, both of which were favorable rulings for the United States. Consultations are ongoing in one additional case, against Argentina, and this case has been partially settled. One case, involving an EC regulation protecting geographical indications, has gone beyond consultations and is in WTO dispute settlement panel proceedings. 1. Argentina: pharmaceutical patents — Brought by U.S., DS171 and DS196 Case originally brought by the United States in May 1999. Consultations ongoing, although 8 of 10 originally disputed issues have been resolved. 2. Brazil: “local working” of patents and compulsory licensing — Brought by U.S., DS199 Case originally brought by the United States in June 2000. Settled between the parties in July 2001. Brazil agreed to hold talks with the United States prior to using the disputed article against a U.S. company. 3. Canada: term of patent protection — Brought by U.S., DS170 Case originally brought by the United States in May 1999. Panel report issued in May 2000 decided for the United States (WT/DS170/R), later upheld by an Appellate Body report. According to USTR, Canada announced implementation of a revised patent law on July 24, 2001. 4. Denmark: enforcement, provisional measures, civil proceedings — Brought by U.S., DS83 Case originally brought by the United States in May 1997. Settled between the parties in June 2001. In March 2001, Denmark passed legislation granting the relevant judicial authorities the authority to order provisional measures in the context of civil proceedings involving the enforcement of intellectual property rights. 5. EC: trademarks and geographical indications — Brought by U.S., DS174 Case originally brought by the United States in June 1999. WTO panel proceedings are ongoing. 6. Greece and EC: motion pictures, TV, enforcement — Brought by U.S., DS124 and DS125 Case originally brought by the United States in May 1998. Greece passed a law in October 1998 that provided an additional enforcement remedy for copyright holders whose rights were infringed upon by TV stations in Greece. Based on the implementation of this law, the case was settled between the parties in March 2001. 7. 
India: patents, “mailbox,” exclusive marketing — Brought by EC, DS79 — Brought by U.S., DS50 Case originally brought by the United States in July 1996. Panel report issued in September 1997 decided for the United States (WT/DS50/R). 8. Ireland and EC: copyright and neighbouring rights — Brought by U.S., DS82 and DS115 Case originally brought by the United States in May 1997. Settled between the parties in November 2000. Ireland passed a law and amended its copyright law in ways that satisfied U.S. concerns. 9. Japan: sound recordings intellectual property protection — Brought by EC DS42 — Brought by U.S., DS28 Case originally brought by the United States in February 1996. Settled between the parties in January 1997. Japan passed amendments to its copyright law that satisfied U.S. concerns. 10. Pakistan: patents, “mailbox,” exclusive marketing — Brought by U.S., DS36 Case originally brought by the United States in May 1996. Settled between the parties in February 1997. Pakistan issued rulings with respect to the filing and recognition of patents that satisfied U.S. concerns. 11. Portugal: term of patent protection — Brought by U.S., DS37 Case originally brought by the United States in May 1996. Settled between the parties in October 1996. Portugal issued a law addressing terms of patent protection in a way that satisfied U.S. concerns. 12. Sweden: enforcement, provisional measures, civil proceedings — Brought by U.S., DS86 Case originally brought by the United States in June 1997. Settled between the parties in December 1998. In November 1998, Sweden passed legislation granting the relevant judicial authorities the authority to order provisional measures in the context of civil proceedings involving the enforcement of intellectual property rights. Brazil is generally credited with having adequate laws to protect intellectual property, but the enforcement of these laws remains a problem. Officials we interviewed in Brazil identified several reasons for the weak enforcement, including insufficient and poorly trained police and a judiciary hampered by a lack of resources, inefficiencies and, in some cases, corruption. Most broadly, they cited the weak economy and lack of formal sector employment as reasons for the widespread sale and consumption of counterfeit goods. One Brazilian official commented that the current intellectual property protection system has generated large price gaps between legitimate and illegitimate products, making it very difficult to combat illegitimate products. However, private sector officials also pointed to high tax rates on certain goods as a reason for counterfeiting. Regardless, the sale of counterfeit merchandise abounds. One market in Sao Paulo that we visited covered many city blocks and was saturated with counterfeit products. For example, we identified counterfeit U.S. products such as Nike shoes, Calvin Klein perfume, and DVDs of varying quality. The market not only sold counterfeit products to the individual consumer, but many vendors also served as “counterfeit wholesalers” who offered even cheaper prices for purchasing counterfeit sunglasses in bulk, for example. According to industry representatives, this market also has ties to organized crime. Private and public sector officials identified two significant challenges to Brazil’s improving its intellectual property protection: establishing better border protection, particularly from Paraguay—a major source of counterfeit goods—and a better-functioning National Industrial Property Institute (INPI). 
The acting president of INPI acknowledged that, owing to insufficient personnel, money, and space, INPI is not functioning well and has an extremely long backlog of patent and trademark applications. Two private sector representatives commented that U.S. assistance to INPI could be very valuable. It can currently take as long as 9 years to get a patent approved. Patent problems have been exacerbated by an ongoing conflict between INPI and the Ministry of Health over the authority to grant pharmaceutical patents. A pharmaceutical industry association report claims that the current system, which requires the Ministry of Health to approve all pharmaceutical patents, is in violation of TRIPS. The U.S. government has been involved in various activities to promote better enforcement of intellectual property rights in Brazil. Brazil has been cited on the Special 301 Priority Watch List since 2002 and is currently undergoing a review to determine whether it should remain eligible for Generalized System of Preferences (GSP) benefits. In recent years, Brazilian officials have participated in training offered by USPTO in Washington, D.C., and have studied intellectual property issues in depth in the United States as participants in U.S.-sponsored programs. The Departments of State, Justice, and Homeland Security have also sponsored or participated in training events or seminars on different intellectual property issues. The Department of State’s public affairs division has also worked on public awareness events and seminars. Officials from industry associations representing American companies, as well as officials from individual companies we met with, stated that they are generally satisfied with U.S. efforts to promote the protection of IPR in Brazil. Many had regular contact with embassy personnel to discuss intellectual property issues, and several had collaborated with U.S. agencies to develop and present seminars or training events in Brazil that they believed were useful tools for promoting IPR. The private sector officials we spoke with made some suggestions for improving U.S.- sponsored assistance, including consulting with the private sector earlier to identify appropriate candidates for training. However, private and public sector officials commented regularly on the usefulness of training activities provided by the United States, and many expressed a desire for more of these services. In particular, several officials expressed a hope that the United States would provide training and technical assistance to INPI. In February 2004, a senior Department of Commerce official discussed collaboration and technical assistance matters with a Brazilian minister, and USPTO staff recently traveled to Brazil to provide training at INPI. Overall, the direct impact of U.S. efforts was difficult to determine, but U.S. involvement regarding IPR in Brazil was widely recognized. Several industry and Brazilian officials we spoke with were familiar with the Special 301 report; many in the private sector had contributed to it via different mechanisms. One industry official commented that the Special 301 process is helpful in convincing the Brazilian authorities of the importance of intellectual property protection. Others were less certain about whether the report had any impact. A Brazilian minister stated that the United States is the biggest proponent of IPR, although he did not believe that any particular U.S. program had had a direct impact on Brazilian intellectual property laws or enforcement. 
Others, however, believed that pressure from the U.S. government lent more credibility to the private sector’s efforts and may have contributed to changes in Brazilian intellectual property laws. Most private sector officials we spoke with agreed that the government’s interest in combating intellectual property crime has recently increased. They noted that developments have included the work of the Congressional Investigative Commission on Piracy (CPI) in the Brazilian Congress and newly formed special police groups to combat piracy. In addition, President Lula signed a law last year amending the penal code with respect to copyright violations; minimum sentences were increased to 2 years and now include a fine and provide for the seizure and destruction of counterfeit goods. However, these increased sanctions do not apply to software violations. According to an official with the Brazilian special police, the Brazilian government was moved to prosecute piracy more vigorously because government officials realized that the growing informal economy was resulting in the loss of tax revenue and jobs. However, a Brazilian state prosecutor and the CPI cited corruption and the involvement of organized crime in intellectual property violations as challenges to enforcement efforts. China’s protection of IPR has improved in recent years but remains an ongoing concern for the U.S. government and the business community. Upon accession to the WTO in December 2001, China was obligated to adhere to the terms of the Agreement on Trade-Related Aspects of Intellectual Property (TRIPS). According to the U.S. Trade Representative’s (USTR) 2003 review of China’s compliance with its WTO commitments, IPR enforcement was ineffective, and IPR infringement continued to be a serious problem in China. USTR reported that lack of coordination among Chinese government ministries and agencies, local protectionism and corruption, high thresholds for criminal prosecution, lack of training, and weak punishments hampered enforcement of IPR. Piracy rates in China continue to be excessively high and affect products from a wide range of industries. According to a 2003 report by China’s State Council’s Development Research Center, the market value of counterfeit goods in China is between $19 billion and $24 billion. Various U.S. copyright holders also reported that estimated U.S. losses due to the piracy of copyrighted materials have continued to exceed $1.8 billion annually. Pirated products in China include films, music, publishing, software, pharmaceuticals, chemicals, information technology, consumer goods, electric equipment, automotive parts, and industrial products, among many others. According to the International Intellectual Property Alliance, a coalition of U.S. trade associations, piracy levels for optical discs are at 90 percent and higher, almost completely dominating China’s local market. Furthermore, a U.S. trade association reported that the pharmaceutical industry not only loses roughly 10 to 15 percent of annual revenue in China to counterfeit products, but counterfeit pharmaceutical products also pose serious health risks. Since the first annual Special 301 review in 1989, USTR has initiated several Special 301 investigations on China’s IPR protection. However, since the conclusion of a bilateral IPR agreement with China in 1996, China has not been subject to a Special 301 investigation but has instead been subject to monitoring under Section 306. 
In 2004, USTR reviewed China’s implementation under Section 306 and announced that China would be subject to an out-of-cycle review in 2005. In addition to addressing China’s IPR protection through these statutory mechanisms, the U.S. government has been involved in various efforts to protect IPR in China. The U.S. government’s activities in China are part of an interagency effort involving several agencies, including USTR, State, Commerce, Justice, Homeland Security, USPTO, and the Copyright Office. In 2003, U.S. interagency actions in China to protect IPR included (1) engaging the Chinese government at various levels on IPR issues; (2) providing training and technical assistance for Chinese ministries, agencies, and other government entities on various aspects of IPR protection; and (3) providing outreach and assistance to U.S. businesses. Most private sector representatives we met with in China said that they are generally satisfied with the U.S. government’s efforts in China but noted areas for potential improvement. In 2003, U.S. government engagement with China on IPR issues ranged from high-level consultations with Chinese ministries to letters, demarches, and informal meetings between staff-level U.S. officials and their counterparts in the Chinese government. U.S. officials noted that during various visits to China in 2003, the Secretaries of Commerce and Treasury and the U.S. Trade Representative, as well as several subcabinet level officials, urged their Chinese counterparts to develop greater IPR protection. U.S. officials said that these efforts were part of an overall strategy to ensure that IPR protection was receiving attention at the highest levels of China’s government. U.S. officials also noted that the U.S. Ambassador to China has placed significant emphasis on IPR protection. In 2002 and 2003, the U.S. government held an Ambassador’s Roundtable on IPR in China that brought together representatives from key U.S. and Chinese agencies, as well as U.S. and Chinese private sector representatives. U.S. officials said that China Vice Premier Wu’s involvement in the 2003 roundtable was an indication that IPR was receiving attention at high levels of China’s government. One U.S. official stated that addressing pervasive systemic problems in China, such as lack of IPR protection, is “nearly impossible unless it stays on the radar at the highest levels” of the Chinese government. A second key component of U.S. government efforts to ensure greater protection of IPR in China involved providing numerous training programs and technical assistance to Chinese ministries and agencies. U.S. government outreach and capacity-building efforts included sponsoring speakers, seminars, and training on specific technical aspects of IPR protection to raise the profile and increase technical expertise among Chinese officials. The U.S. government targeted other programs to address the lack of criminalization of IPR violations in China. For example, an interagency U.S. government team (Justice, DHS, and Commerce) conducted a three-city capacity-building seminar in October 2003 on criminalization and enforcement. The program was cosponsored by the Chinese Procuratorate, the Chinese government’s prosecutorial arm. U.S. government officials noted that the program was unique because the seminar brought together officials from Chinese criminal enforcement agencies, including customs officials, criminal investigators, and prosecutors, as well as officials from administrative enforcement agencies. 
In March 2004, the Copyright Office hosted a week-long program for a delegation of Chinese copyright officials that provided technical assistance and training on copyright-related issues, including the enforcement of copyright laws, as well as outreach and relationship-building. The U.S. government has also provided outreach regarding IPR protection to U.S. businesses in China, and Commerce has played a lead role in this effort. For example, in late 2002, Commerce established a Trade Facilitation Office in Beijing to, among other things, provide outreach, advocacy, and assistance to U.S. businesses on market access issues, including IPR protection. Additionally, Foreign Commercial Service officers in China work with U.S. firms to identify and resolve cases of IPR infringement. Commerce officials indicated that increasing private sector awareness and involvement in IPR issues are essential to furthering IPR protection in China. GAO’s 2004 analysis of selected companies’ views on China’s implementation of its WTO commitments reported that respondents ranked IPR protection as one of the three most important areas of China’s WTO commitments but that most respondents thought China had implemented IPR reforms only to some or little extent. In general, other industry association and individual company representatives whom we interviewed in China were satisfied with the range of U.S. government efforts to protect IPR in China. Several industry representatives noted that they had regular contact with officials from various U.S. agencies in China and that the staff assigned to IPR issues were generally responsive to their firm’s or industry’s needs. Private sector representatives stated that the U.S. government’s capacity-building efforts were one of the most effective ways to promote IPR protection in China. Some representatives noted that Chinese government entities are generally very receptive to these types of training and information-sharing programs. However, some private sector representatives also said that the U.S. agencies could better target the programs to the appropriate Chinese audiences and follow up more to ensure that China implements the knowledge and practices disseminated through the training programs. Most private sector representatives we met with also said that the U.S. government efforts in China were generally well coordinated, but they indicated that they were not always able to determine which U.S. agency was leading the effort on a specific issue. Although Chinese laws are now, in principle, largely compliant with the strict letter of the TRIPS agreement, U.S. government and other industry groups note that there are significant gaps in the law and enforcement policies that pose serious questions regarding China’s satisfaction of the TRIPS standards of effective and deterrent enforcement. In 2003, USTR found that China’s compliance with the TRIPS agreement had been largely satisfactory, although some improvements still needed to be made. Before its accession to the WTO, China had completed amendments to its patent law, trademark law, and copyright law, along with regulations for the patent law. Within several months after its accession, China issued regulations for the trademark law and copyright law. China also issued various sets of implementing rules, and it issued regulations and implementing rules covering specific subject areas, such as integrated circuits, computer software, and pharmaceuticals. 
China has taken some steps in administrative, criminal, and civil enforcement against IPR violators. According to USTR’s review, the central government promotes periodic anticounterfeiting and antipiracy campaigns as part of its administrative enforcement, and these campaigns result in a high number of seizures of infringing materials. However, USTR notes that the campaigns are largely ineffective; because cases brought by the administrative authorities usually result in extremely low fines, administrative enforcement has virtually no deterrent effect on infringers. China’s authorities have pursued criminal prosecutions in a small number of cases, but the Chinese government lacks the transparency needed to determine the penalties imposed on infringers. Finally, China has seen an increase in civil actions brought for monetary damages or injunctive relief. This suggests an increasing sophistication on the part of China’s IPR courts, as China continues to make efforts to upgrade its judicial system. However, U.S. companies complain that the courts do not always enforce China’s IPR laws and regulations consistently and fairly. Despite the overall lack of IPR enforcement in China, IPR protection is receiving attention at high levels of the Chinese government. Notably, in October 2003, the government created an IPR Leading Group, headed by a vice premier, to address IPR protection in China. Several U.S. government officials and private sector representatives told us that high-level involvement by Vice Premier Wu would be critical to the success of future developments in IPR protection in China. In April 2004, the United States pressed IPR issues with China during a formal, cabinet-level consultative forum, the Joint Commission on Commerce and Trade (JCCT). In describing the results of the April 2004 JCCT meeting, USTR reported that China had agreed to undertake a number of near-term actions to address IPR protection. China’s action plan included increasing penalties for IPR infringement and launching a public awareness campaign on IPR protection. Additionally, China and the United States agreed to form an IPR working group under the JCCT to monitor China’s progress in implementing its action plan. Although the Russian government has demonstrated a growing recognition of the seriousness of IPR problems in the country and has taken some actions, serious problems persist. Counterfeiting and piracy are common (see fig. 4). For example, a Microsoft official told us that approximately 80 percent of business software in Russia is estimated to be pirated and that the Russian government is a “huge” user of pirated software. Further, the pharmaceutical industry estimates that up to 12 percent of drugs on the market in Russia are counterfeit. Of particular note to the U.S. government, piracy of optical media (e.g., CDs and DVDs) in Russia is rampant. According to an official from the Russian Anti-Piracy Organization, as much as 95 percent of optical media products produced in Russia are pirated. U.S. concern focuses on the production of pirated U.S. optical media products by some or all of the 30 optical media production facilities in Russia, 17 of which are located on Russian government-owned former defense sites where it has been difficult for inspection officials to gain access (though, according to an embassy official, access has recently improved). According to a U.S. 
embassy official, Russian demand for optical media products is estimated at 18 million units per year, but Russian production is estimated to be 300 million units. U.S. Embassy and private sector officials believe that the excess pirated products are exported to other countries. Industry estimates losses of over $1 billion annually as a result of this illegal activity. Russia has made many improvements to its IPR legislation, but the U.S. government maintains that more changes are needed. For example, the 2004 Special 301 report states that the Russian government is still working to amend its laws on protection of undisclosed information—in particular, protection for undisclosed test data submitted to obtain marketing approval for pharmaceuticals and agricultural chemicals. Further, U.S. industry and Russian officials view Russia’s IPR enforcement as inadequate and cite this as the largest deterrent to effective IPR protection in Russia. For example, the 2004 Special 301 report emphasizes that border enforcement is considered weak and that Russian courts do not have the authority in criminal cases to order forfeiture and destruction of machinery and materials used to make pirated and counterfeit products. Further, one Russian law enforcement official told us that since IPR crimes are not viewed as posing much of a social threat, IPR enforcement is “pushed to the background” by Russian prosecutors. The U.S. government has taken several actions in Washington, D.C., and Moscow to address its concerns over Russia’s failure to fully protect IPR. Russia has been placed on USTR’s Special 301 Priority Watch List for the past 8 years (1997 through 2004). Further, a review of Russia’s eligibility under the Generalized System of Preferences (GSP) is underway owing to concerns over serious IPR problems in the country. The U.S. government has actively raised IPR issues with the Russian government, including at the highest levels. According to the Department of State, at a United States–Russia summit in September 2003, President Bush raised IPR concerns with Russian President Putin. Further, in Moscow, the U.S. Ambassador to Russia considers IPR an embassy priority and has sent letters to Russian government officials and published articles in the Russian press that outline U.S. government concerns. Many agencies resident in the U.S. Embassy in Moscow are engaged in IPR issues. The Department of State’s Economic Section is the Embassy office with primary responsibility for IPR issues. This office collaborates closely with USTR and holds interagency embassy meetings to coordinate on IPR efforts. In addition to interagency communication through these meetings, each agency is also engaged in separate efforts. For example, the Economic Section has met regularly with Russian government officials to discuss IPR issues. Justice has held two training events on IPR criminal law enforcement in 2004, and has two more events planned for this year, while the Embassy’s Public Affairs Office is involved with IPR enforcement exchange and training grants. Further, the Department of Commerce’s Foreign Commercial Service works with U.S. companies on IPR issues and sponsored a 2003 seminar on pharmaceutical issues, including IPR-related topics. According to a Justice official, U.S. law enforcement agencies are making efforts to build relationships with their Russian counterparts. Industry representatives whom we interviewed in Moscow expressed support for U.S. 
government efforts to improve intellectual property protection, particularly the U.S. Ambassador’s efforts to increase the visibility of IPR problems. An official from one IPR association in Moscow noted, with respect to USTR’s efforts in Russia, “No other country in the world is so protective of its copyright industries.” Industry representatives noted that the U.S. government has played an important role in realizing IPR improvements in Russia, although the Russian government is also clearly motivated to strengthen intellectual property protections as part of its preparation for joining the World Trade Organization. Further, U.S. Embassy staff believe that they have been successful in ensuring that IPR is now firmly on the “radar screen” of the Russian government. According to U.S. sources, numerous IPR laws have been enacted. For example, the Department of State has noted that the Russian government has passed new laws on patents, trademarks, industrial designs, and integrated circuits and has amended its copyright law. Further, U.S. and Russian sources note that Russia has improved its customs and criminal codes. Moreover, in 2002, the Russian government established a high-level commission, chaired by the prime minister, specifically to address intellectual property problems (although, despite a recognized desire to address IPR enforcement, the commission has reportedly not accomplished a great deal in terms of concrete achievements). In addition to these promising improvements, there have been some signs that enforcement is improving, if slowly. For example, the Russian government issued a decree banning the sale of audio and video products by Russian street vendors, and the U.S. Embassy has reported that subsequently several kiosks known to sell pirated goods were closed. Industry associations have reported that law enforcement agencies are generally willing to cooperate on joint raids, and in 2003 several large seizures were made as a result of such raids. Further, in February 2004 the Russian Anti-Piracy Organization reported that police raids involving optical media products took place almost daily all over Russia and were covered widely on national TV channels. In addition, according to the U.S. Embassy, the consumer products industry reports progress in reducing the amount of counterfeit consumer goods on the Russian market, and one major U.S. producer even claims that it has virtually eliminated counterfeiting of all its consumer goods lines. Finally, according to a U.S. Embassy official, the first prison sentence was handed down during the summer of 2004 for an IPR violator who had been manufacturing and distributing pirated DVDs. U.S. and Russian officials have identified several problems that the Russian government faces in implementing effective IPR protection in the future. Issues identified include: (1) the price of legitimate products is too high for the majority of Russians, who have very modest incomes; (2) Russian citizens and government officials are still learning about the concept of private IPR—a Russian Ministry of Press official pointed out that until the dissolution of the Soviet Union, all creations belonged to the state, and the general public and the government didn’t understand the concept of private IPR; and (3) corruption and organized crime make the effective enforcement of IPR laws difficult. Ukraine has been the subject of intense industry and U.S. 
government concern since 1998 owing primarily to the establishment of pirate optical media plants that produced music, video discs, and software for the Ukrainian market and for export to other countries. This followed the crackdown on pirate plants in Bulgaria in 1998 that resulted in many of these manufacturers relocating to Ukraine. Regarding Ukraine, USTR cites U.S. music industry losses of $210 million in revenues in 1999, while the Motion Picture Association reported losses of $40 million. The international recording industry association estimated that in 2000 Ukraine’s optical media production capacity was around 70 million units per year, while domestic demand for legitimate CDs was fewer than 1 million units. Further, the audio and video consumer market in Ukraine has consisted overwhelmingly of pirated media. For example, in 2000, the international recording industry association estimated that 95 percent of products on the market were pirated. Further, USTR and industry cite significant counterfeiting of name brand products, pharmaceuticals, and agricultural chemicals. By 2004, IPR protection in Ukraine had shown improvement in several areas, although the digital media sold in the consumer retail market remain predominantly pirated. However, according to U.S. government and industry officials in Kiev, the production of such digital media in local plants has ended. Further, U.S. officials noted Ukraine’s accession to key WIPO conventions and improvements in intellectual property law that represent progress in fulfilling TRIPS requirements as part of Ukraine’s WTO accession process. Remaining areas of U.S. concern are inadequacies in the existing optical media licensing law and the fact that Ukraine remains a key transit country for pirated products. Other areas of concern are the prevalence of pirated digital media products in the consumer retail markets, the lack of law enforcement actions, and the use of illegal software by government agencies (although this situation has also improved). U.S. industry and government now seek certain amendments to intellectual property laws and better enforcement efforts, including border controls to prevent counterfeit and pirated products from entering the Ukrainian domestic retail market. The U.S. government has undertaken concerted action in Washington and Kiev to address its concerns regarding the state of intellectual property protection in Ukraine. With the emergence of serious music and audio-visual piracy, Ukraine was placed on USTR’s Special 301 Watch List in 1998. Ukraine was elevated to USTR’s Special 301 Priority Watch List for 2 years, in 1999 and 2000. In June 2000, during President Clinton’s state visit to Kiev, he and President Kuchma endorsed a U.S.-Ukrainian joint action plan to combat optical media piracy. However, Ukraine’s slow and insufficient response led to its designation as a Priority Foreign Country in 2001 and to the imposition of punitive economic sanctions (100 percent duties) against Ukrainian exports to the United States valued at $75 million in 2002. The Priority Foreign Country designation remains in place. The sanctions affect a number of Ukrainian exports, including metal products, footwear, and chemicals. In addition, a U.S. government review of Ukraine’s eligibility for preferential tariffs under the GSP program was undertaken, and Ukraine’s benefits under this program were suspended in August 2001. GSP benefits have not been reinstated. 
In Kiev, intellectual property issues remain a priority for the U.S. Embassy, including the U.S. Ambassador. A State Department economic officer has been assigned responsibility as the focal point for such issues and has been supported in this role by the actions of other U.S. agencies. The Commercial Law Center, funded by USAID, and the Commercial Law Development Program of the U.S. Department of Commerce have provided technical advice to Ukraine as it crafted intellectual property laws. A U.S. private sector association reported that it had worked closely with USAID on projects related to commercial law development. Ukrainian legislative officials reported that training opportunities and technical assistance provided by the United States had facilitated the creation of IP legislation. Training is also focused on enforcement, including training of a Ukrainian judicial official by USPTO in Washington, D.C., during 2003. The State Department has trained police and plans further police training in Ukraine during 2004. Further, Department of Commerce officials maintain contact with U.S. firms and collect information on intellectual property issues for State and USTR. Ukraine has made improvements in its legal regime for IPR protection. According to Ukrainian officials, Ukraine passed a new criminal code with criminal liability for IPR violations, as well as a new copyright law. Ukrainian officials report that the laws are now TRIPS compliant. U.S. government documents show that Ukraine implemented an optical disk law in 2002, although it was deemed “unsatisfactory,” and sanctions remain in place based on Ukraine’s failure to enact and enforce adequate optical disk media licensing legislation. In addition, Ukraine has pursued enforcement measures to combat counterfeiting, although enforcement overall is still considered weak. USTR reported that administrative and legal pressure by the Ukrainian government led to the closure of all but one of the major pirate CD plants. Some pirate plants moved to neighboring countries. According to U.S. and private sector officials in Kiev, remaining optical plants have switched to legitimate production. However, pirated optical media are still prevalent in Ukraine, imported from Russia and elsewhere, with little effort to remove them from the market. In a visit to the Petrovska Market in Kiev, we found a well-organized series of buildings where vendors sold movies, music, software, and computer games from open-air stands. The price for a pirated music CD was $1.50, compared to legitimate CDs that were sold for almost $20 in a music store located near the market. According to USTR, Ukraine is a major trans-shipment point and storage location for illegal optical media produced in Russia and elsewhere. A Ukrainian law enforcement official reported that the number of IPR crimes detected has risen from 115 in 2001 to 374 in 2003. He noted that to date, judges have been reluctant to impose jail time, but had used fines that are small compared to the economic damages. A U.S. government official also reported that the fines are too small to be an effective deterrent. While one U.S. company told us about the lack of Ukrainian government actions regarding specific IPR enforcement issues, a large U.S. consumers goods company told us that consumer protection officials and tax police had worked with it to reduce counterfeit levels of one product line from approximately 40 percent in 1999 to close to zero percent 16 months later. 
The company provided 11 laboratory vans as well as personnel that could accompany police to open markets and run on-the-spot tests of products. The following are GAO’s comments on the Department of Commerce’s letter dated August 20, 2004. 1. We have reviewed the report to ensure that the term “counterfeiting” is used to refer to commercial-scale trademark-related infringements of a good or product and the term “piracy” is used to refer to commercial- scale infringements of copyright-protected works. 2. While we do not discuss “advocacy” separately in this report, this type of effort has been addressed in the policy initiatives section of the report, specifically in the discussion entitled “U.S. Officials Undertake Diplomatic Efforts to Protect Intellectual Property” (see p. 18). We note that U.S. government officials overseas, including officials from the Department of Commerce, work with U.S. companies and foreign governments to address specific IPR problems. We have also included a particular example involving Department of Commerce efforts to resolve problematic issues related to proposed Mexican legislation that involved the pharmaceutical industry. We have also added another reference to advocacy efforts on page 27. 3. We chose to emphasize IPR-specific agreements, bilateral trade agreements, and free trade agreements in our report (discussion entitled “U.S. Government Engages in IPR-Related Trade Negotiations”) because USTR officials consistently cited these agreements as central components of their IPR efforts. However, we do note the negotiation of trade and investment framework agreements in footnote 24 of the report. 4. The efforts of the Department of Commerce’s International Trade Administration (ITA) are cited in our report. The report does not specifically list the ITA, as we intentionally kept the discussion for all government entities at the “departmental” level (with a few exceptions for entities that have distinct responsibilities, such as the FBI and USPTO) without mentioning the numerous bureaus and offices involved for each department. This approach was adopted to keep the report as clear as possible for the reader. While the report does not specifically attribute Commerce’s IPR efforts to ITA, several examples of Commerce’s efforts that are listed in the report are, in fact, ITA activities. For example, in addition to the activities cited in point 2 above, Commerce (meaning ITA) is also mentioned as a participant in annual GSP and Special 301 reviews (see pp. 12 and 32), and as a participant in IPR efforts in the report’s China, Russia, and Ukraine appendixes. Further, we have specified that Commerce (meaning ITA), along with USTR, is the administrator for the private sector trade advisory committee system (p. 15). The following are GAO’s comments on the Department of Homeland Security’s letter dated August 24, 2004. 1. We have added a paragraph citing the Department of Homeland Security’s work with the World Customs Organization (see p. 17). 2. We added language on p. 22 of the report that notes that a key component of DHS authority is a “border nexus.” The following are GAO’s comments on the U.S. Agency for International Development’s letter dated August 19, 2004. 1. We agree with USAID’s point that IPR protection and enforcement are not the primary responsibility of the agency. USAID and the other 9 U.S. government entities mentioned in the report have broader missions. Rather, we state that USAID and the other U.S. government entities undertake the primary U.S. 
government activities to improve the protection and enforcement of U.S. intellectual property overseas. 2. As we noted in the report, the decentralized structure of USAID, whereby individual country missions plan and implement training, makes it difficult for Washington-based officials to contribute timely information to the public training database or to inform the Training Coordination Group about USAID's training efforts. Further, several members of the Training Coordination Group are frustrated with the extent of USAID's information sharing. 3. As we note in the report, USAID submits information annually following the conclusion of its own data-gathering exercise. However, this data-gathering exercise, which contributes to the USAID trade capacity building database, does not provide information needed by the Training Coordination Group, such as dates of training or contact information, that would improve coordination. In addition to those named above, Sharla Draemel, Ming Chen, Martin de Alteriis, Matt Helm, Ernie Jackson, Victoria Lin, and Reid Lowe made key contributions to this report.
Although the U.S. government provides broad protection for intellectual property, intellectual property protection in parts of the world is inadequate. As a result, U.S. goods are subject to piracy and counterfeiting in many countries. A number of U.S. agencies are engaged in efforts to improve protection of U.S. intellectual property abroad. This report describes U.S. agencies' efforts, the mechanisms used to coordinate these efforts, the impact of these efforts, and the challenges they face. U.S. agencies undertake policy initiatives, training and assistance activities, and law enforcement actions in an effort to improve protection of U.S. intellectual property abroad. Policy initiatives include assessing global intellectual property challenges and identifying countries with the most significant problems--an annual interagency process known as the "Special 301" review--and negotiating agreements that address intellectual property. In addition, many agencies engage in training and assistance activities, such as providing training for foreign officials. Finally, a small number of agencies carry out law enforcement actions, such as criminal investigations involving foreign parties and seizures of counterfeit merchandise. Agencies use several mechanisms to coordinate their efforts, although the mechanisms' usefulness varies. Formal interagency meetings--part of the U.S. government's annual Special 301 review--allow agencies to discuss intellectual property policy concerns and are seen by government and industry sources as rigorous and effective. In addition, a voluntary interagency training coordination group meets about once a month to discuss and coordinate training activities. However, the National Intellectual Property Law Enforcement Coordination Council, established to coordinate domestic and international intellectual property law enforcement, has struggled to find a clear mission, has undertaken few activities, and is generally viewed as having little impact. U.S. efforts have contributed to strengthened intellectual property legislation overseas, but enforcement in many countries remains weak. The Special 301 review is widely seen as effective, but the impact of actions such as diplomatic efforts and training activities can be hard to measure. U.S. industry has been supportive of U.S. actions. However, future U.S. efforts face significant challenges. For example, competing U.S. policy objectives take precedence over protecting intellectual property in certain regions. Further, other countries' domestic policy objectives can affect their "political will" to address U.S. concerns. Finally, many economic factors, as well as the involvement of organized crime, hinder U.S. and foreign governments' efforts to protect U.S. intellectual property abroad.
Noise is one of the most significant environmental impacts of aviation. Although noise is present around virtually every airport in the country, the problem is greatest near busy commercial airports served by large jet aircraft. According to FAA, the retirement of older, louder aircraft and ground-based noise-mitigation efforts over the past 35 years have reduced by over 90 percent the number of people affected by significant aviation noise levels—defined as a 65-decibel day-night level (DNL 65 dB) or greater—despite nationwide increases in population and air traffic. FAA’s estimates indicate that from 2000 to 2006 alone, the number of people affected by these noise levels dropped by more than a third, from about 780,000 to about 500,000. Nevertheless, these half million people are still exposed to significant aviation noise levels, and as communities expand near airports just outside the highly exposed areas and as air traffic increases, millions more are affected by lower levels of aviation noise. Changes in aircraft flight paths can also affect communities’ exposure to aviation noise, redirecting air traffic over some communities that were not previously exposed and diverting it from others. Both jet aircraft engines and jet airframes produce aviation noise during aircraft operations, particularly during takeoffs and landings. Moreover, certain types of aircraft contribute disproportionately to the level of noise around airports. In our 2000 report on environmental concerns and challenges for airports, we reported that the primary issue of concern identified by officials of the nation’s 50 busiest airports was the noise generated by older jet aircraft. With the implementation of technologies to reduce aircraft engine noise, efforts to reduce noise from airframes will become more important. As technologies for reducing aviation noise have advanced (see our discussion of some of these advances in the next section of this testimony), regulatory standards for jet aircraft noise have become more stringent. The Airport Noise and Capacity Act of 1990 authorized the Secretary of Transportation to reduce aviation noise through a program to phase out older, noisier aircraft—known as Stage 2 aircraft—by December 31, 1999. Aircraft owners could either retire Stage 2 aircraft weighing over 75,000 pounds or modify them with hushkits to sufficiently muffle the noise they generated to meet Stage 3 standards. FAA had adopted the Stage 3 standards in 1977, the year they were established by the International Civil Aviation Organization (ICAO), and all aircraft designed after that time were required to meet the Stage 3 standards, but previously certified aircraft designs were grandfathered until the 1990 act required that they be retired or modified. However, the act exempted aircraft weighing less than 75,000 pounds, a category that includes older business class jets. Stage 2 aircraft that weigh less than 75,000 pounds and Stage 3 aircraft that have been recertified as such after being modified with hushkits are in compliance with current standards, although these aircraft tend to be louder than new aircraft in the same weight range. Bills pending in both the House and the Senate would require, with certain exceptions, that all existing aircraft meet Stage 3 standards, including those aircraft under 75,000 pounds that are currently exempted.
In addition, in July 2005, FAA issued a Federal Aviation Regulation requiring that all new jet aircraft designs be subject to the current, more stringent ICAO noise standards, known as Stage 4. Specifically, any new aircraft whose design was submitted to FAA for approval on or after January 1, 2006, must meet these standards, which are based on the Chapter 4 standards adopted by ICAO in 2001. The Stage 4 standards are 10 decibels lower on a cumulative basis than the Stage 3 standards and represent a significant reduction in noise. Since 2001, substantial progress has been made in retiring older, noisier aircraft. According to FAA, there has been a reduction of about 70 percent in the number of registered aircraft that have been modified with hushkits—mainly Boeing 727s and DC-9s. Today, there are 498 registered hushkitted aircraft, which make up about 8 percent of the U.S. commercial aircraft fleet. The replacement of these older aircraft with new, quieter aircraft has been the most important factor in decreasing noise around airports since the significant noise reductions achieved through the phaseout of Stage 2 commercial aircraft, according to FAA. Figure 1 indicates that the number of people exposed to significant noise levels has decreased even as the number of people flying has increased. Decisions that allow communities to expand near airports may expose residences, schools, hospitals, and other uses to aviation noise. Such decisions are made primarily by local governments, but airports, which cannot control development in the communities that surround them, may nevertheless be held accountable by these communities for the effects of aviation noise. Although the areas around airports exposed to significant noise levels (DNL 65 dB or greater), known as noise contours (see fig. 2), have shrunk with the retirement of older aircraft, the incompatible use of land around airports remains a problem in dealing with the effects of aviation noise. Some stakeholders have said that the gains that have been made in noise attenuation through regulation and technology are being eroded or threatened by incompatible land use. FAA set the DNL 65 dB standard that is used to measure noise contours. This standard reflects the level of noise exposure over time that FAA has determined annoys people by interfering with normal activities such as sleep, relaxation, school, and business operations. FAA has also issued guidelines that identify land uses that would not be compatible with the noise generated by a nearby airport’s operations, as well as land uses that could successfully be located close to an airport without interfering with their activity. Despite this guidance, however, strong pressure exists to develop residential areas around heavily used airports, and despite the steady decline in the number of people exposed to significant noise levels (DNL 65 dB and above), large numbers of people are still exposed to at least some noise around airports. And for FAA, population increases in areas around airports that are exposed to even moderate amounts of aviation noise pose a challenge because, given individuals’ varying sensitivity to noise, even comparatively low levels of exposure can generate community concerns. Population growth near airports also creates challenges for airports when planning expansion projects to meet the growing demand for air travel. Any efforts to limit development have implications for the tax base of local communities. 
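Because the DNL 65 dB threshold anchors much of the discussion above and below, a brief illustration of the metric may be useful. DNL, the day-night average sound level, is a 24-hour energy average of sound levels in which noise occurring between 10 p.m. and 7 a.m. is penalized by 10 decibels to reflect greater sensitivity to nighttime noise. The sketch below is a simplified illustration of that calculation only; it is not FAA's noise modeling methodology, and the hourly sound levels it uses are hypothetical.

```python
# Illustrative sketch of the day-night average sound level (DNL) metric behind
# the DNL 65 dB standard. DNL is a 24-hour energy average in which levels during
# nighttime hours (10 p.m.-7 a.m.) are penalized by 10 dB. The hourly values
# below are hypothetical and chosen only for illustration.
import math

def dnl(hourly_leq_db):
    """Compute DNL from 24 hourly equivalent sound levels (index 0 = midnight)."""
    assert len(hourly_leq_db) == 24
    total_energy = 0.0
    for hour, level in enumerate(hourly_leq_db):
        # Nighttime hours (10 p.m. through 6:59 a.m.) receive a 10 dB penalty.
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total_energy += 10 ** ((level + penalty) / 10.0)
    return 10.0 * math.log10(total_energy / 24.0)

# Hypothetical hourly levels near a busy runway: quieter overnight, louder by day.
example_hours = [55] * 7 + [68] * 15 + [60] * 2   # hours 0-6, 7-21, 22-23
print(f"DNL = {dnl(example_hours):.1f} dB")       # about 67.6 dB for these values
```

Because the averaging is energy based and nighttime levels carry a 10 dB penalty, a relatively small number of loud or late-night operations can dominate the 24-hour result, which is one reason nighttime flights and curfews figure prominently in noise mitigation discussions.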
As a result, as FAA noted in a 2004 report to Congress on aviation and the environment, there is a disconnect between federal aviation policy and local land-use decision-making. Until recently, evidence about trends in land use incompatible with airport activity was mostly anecdotal, but some empirical research is now available. For example, research sponsored by FAA and NASA shows that for 92 commercial airports, between 1990 and 2000, “the effectiveness of existing federal land-use guidelines on reducing total noise exposure and deterring residential development inside the DNL 65 dB contours is mixed.” Moreover, according to the research, “land-use planning has done little to address the increasing population aggregation on lands near existing noise footprints.” Furthermore, according to FAA, incompatible land use is emerging as a problem around reliever airports, which predominantly service general aviation traffic that would otherwise go to nearby busy airports. These airports are located in quieter suburban and rural areas where aviation noise is more noticeable. Local governments with jurisdiction over land-use planning and development continue to permit building near airports, where developable land is comparatively plentiful. As a result, communities that did not exist when some airports were built are now opposing increases in aircraft operations and expansion at these airports. The air traffic environment for the nation’s airspace was designed and implemented in the 1960s and has undergone only minor changes over the years. However, the use of the airspace has changed significantly, with higher overall air traffic volumes and greater use of smaller and regional jet aircraft. As discussed later in this statement, FAA’s airspace redesign initiatives have the potential to improve safety and efficiency by allowing the use of new arrival and departure procedures that can reduce the impact of noise and emissions on nearby communities. At the same time, though, they have led to concerns about aviation noise in some communities that were not previously exposed to it. Airspace redesign projects usually involve changes in aircraft arrival and departure routes from airports. These changes may result in exposing some communities to less noise and others to more noise. FAA has completed over 30 airspace redesign projects, including projects around major airports such as those serving Las Vegas, Dallas-Fort Worth, Minneapolis, and Boston. According to FAA, between 2002 and 2007, airspace redesign projects have produced almost $700 million in customer benefits from reduced delays, more efficient routing, and reduced restrictions attributable to a more balanced air traffic control workload. Until recently, most airspace redesign projects have involved changes in flight paths above 10,000 feet and have therefore not had a significant impact on noise levels in communities near airports. However, FAA has approved the most ambitious airspace redesign project to date, which involves flight path changes in the New York/New Jersey/Philadelphia airspace, including changes at levels below 10,000 feet. According to FAA, this airspace is some of the most complex and congested anywhere in the world, with about one-third of the nation’s commercial air traffic passing through it. Delays and congestion in this airspace or at area airports tend to ripple throughout the system.
Airspace redesign projects have the potential to alleviate some of these problems at this critical chokepoint in the national airspace system. Because the airspace redesign for the New York/New Jersey/Philadelphia area will make changes to arrival and departure routes, the noise contours in the area will also change, exposing some communities to less noise and others to more. According to FAA’s analysis of the effect of the redesign, fewer people would be exposed to moderate to significant noise levels than is currently the case, but some people who live under the new flight paths would be exposed to higher though moderate levels of noise. On the basis of this analysis, the environmental impact statement prepared for the redesign project concludes that the project will not have a significant environmental impact with respect to noise. However, the possible shift in noise contours has led to significant expressions of concern, including litigation in many of the communities that could experience higher though moderate levels of aviation noise. One of these communities, which has a large minority population, contends that the redesign would disproportionately affect minority neighborhoods. This contention could raise concerns about environmental justice. We are currently reviewing the New York/New Jersey/Philadelphia airspace redesign at the request of this Subcommittee. To reduce the impact of aviation noise, FAA, in conjunction with NASA, aircraft and aircraft engine manufacturers, airlines, airports, and communities, follows what the International Civil Aviation Organization refers to as its “balanced approach.” This approach recognizes that short-term opportunities to mitigate the impact of aviation noise on communities should be combined with longer-term efforts to reduce aviation noise. Efforts include reducing noise at the source through more stringent standards; implementing noise abatement programs in communities near airports; supporting research and development programs for new technologies to make aircraft quieter; developing and implementing NextGen technologies and procedures; and restricting aircraft operations. In addition, many airports address aviation noise issues through studies, supplemental analyses, and community outreach. As aircraft whose design was approved on or after January 1, 2006, are integrated into the fleet, the new Stage 4 noise standards will be implemented. While these standards are more stringent than the prior Stage 3 standards and have been adopted internationally as well as domestically, their implementation may not have a significant impact on aviation noise levels. According to the Airports Council International-North America, which represents many of the nation’s airports and other stakeholders, the Stage 4 standards were already being met by a significant proportion of the aircraft in production when ICAO adopted its identical Chapter 4 standards in 2001. Additionally, aircraft manufacturers’ sales forecasts indicate that most of the new aircraft coming into service in the near future will be for the international market rather than for the U.S. market. During the discussions leading up to the adoption of the ICAO Chapter 4 standards, the European Union argued that more stringent noise limits would push technology toward quieter aircraft.
However, under the current ICAO system, a key criterion for the adoption of new standards is that they must be found to be “technologically feasible”—that is, demonstrably capable of being introduced across a sufficient range of the fleet, as shown by the commercial deployment or deployability of technologies that can meet the specified noise reductions. Aviation industry representatives indicated that they considered the ICAO process rational for several reasons, including “not pushing the technology envelope,” which could lead to a potential trade-off with aircraft performance. Additionally, industry representatives have stated that new product development programs are already complex and pose many business and schedule risks. As a result, they believe it is inadvisable to force more aggressive standards because they could lead to delays in new programs. More recently, ICAO has formed independent review committees under its Long Term Technology Goals initiatives to begin discussions with stakeholders on technologies that might be available 10 to 20 years from now. These committees are not charged with developing standards, but rather with involving stakeholders in these early discussions and preparing a report based on these efforts that is designed to stimulate further development of the most promising technologies and better inform ICAO when new standards may need to be considered. Most airports are owned and operated by state governments and local municipalities. Therefore, the primary responsibility for addressing community concerns about noise resides with these entities. Nevertheless, airports can reduce the impact of noise on surrounding communities by undertaking measures to mitigate incompatible land use, such as acquiring noise-sensitive properties, relocating people, modifying structures to reduce noise, encouraging compatible zoning, and assisting in the sale of affected properties. FAA supports airports’ efforts to mitigate aviation noise through its voluntary noise compatibility program, known as the Part 150 Noise Compatibility Program, which provides guidance to airports on the types of land uses that are incompatible with certain levels of airport noise and encourages them to develop a noise compatibility program to reduce and prevent such uses. As part of the process, airports map the area affected by the noise and estimate the affected population. According to FAA, mitigation measures, such as soundproofing homes, have brought relief to tens of thousands of people in neighborhoods near long-established airports since the early 1980s. Airports that participate in the Part 150 program can receive noise set-aside funds from the Airport Improvement Program (AIP), which they must match to varying degrees, depending on their size. According to FAA, nearly 300 airports have participated in the program. These funds can be used to, among other things, soundproof buildings and support relocation by acquiring homes in areas with significant noise. Thirty-five percent of AIP discretionary funds are reserved for planning and implementing noise compatibility programs. In fiscal year 2006, FAA issued 90 noise-related AIP grants totaling $305 million. Since the early 1980s, the federal government has issued grants or allowed airports to impose charges to mitigate noise around many airports. According to FAA, it has provided about $5 billion in AIP grants and airports have used about $2.8 billion in passenger facilities charges (PFC) for Part 150 noise mitigation studies and projects.
In total, this funding amounts to nearly $8 billion (see table 1). FAA officials further noted that while the vast majority of airport noise mitigation projects use some AIP or PFC funding, airports may undertake projects with other financing. Although all airports are eligible to participate in the Part 150 program, some of the busiest commercial airports do not. Among these are New York’s JFK International and La Guardia, Newark International, Houston’s George Bush Intercontinental, Dallas-Fort Worth International, Boston- Logan International, Dulles International, O’Hare International, and Miami International (see app. I for a list of those airports among the 50 busiest that do not participate in the Part 150 program). According to FAA, some airports have chosen not to participate in the Part 150 program for a variety of reasons. Some airport operators view the program as too complicated, costly, and difficult to implement. FAA officials note that some larger airports that have chosen not to participate in the program may have such a significant number of incompatible land uses that it would be financially prohibitive to implement mitigation measures in all areas significantly affected by noise and that the projects that were undertaken could take decades to complete. In addition, in some cases, neighborhoods are so clustered together that mitigation measures would have to be applied to a substantial number of homes outside significant noise contours in order to establish equitable neighborhood boundaries. FAA officials further note that an airport’s nonparticipation in the Part 150 program does not mean that the airport does not have an airport noise mitigation program. For example, Boston Logan Airport has a noise program that predates the Part 150 program and qualifies for federal noise mitigation funding under the program through a grandfathering provision. Airports can also use AIP discretionary grant and PFC funds for noise mitigation without joining the Part 150 program. In addition, some soundproofing of schools and healthcare facilities is eligible for federal funding even if an airport does not participate in the Part 150 program. Besides providing funding for airports’ noise mitigation efforts through the Part 150 program, FAA published draft guidance in June 2007 on the acquisition, management and disposal under AIP of noise land—that is, land that is exposed to significant noise levels. The guidance initiative was in part a response to the findings of an audit by the Department of Transportation Inspector General of 11 airports that disposed of land acquired for noise mitigation purposes. The audit found that each of the 11 airports had noise land acquired with AIP funds, ranging from nominal acreage at several airports to hundreds of acres at others, that either was no longer required for noise compatibility purposes or did not have a documented need for airport development. The Inspector General concluded that with improved oversight of noise land and its disposal, FAA could recover an estimated $242 million for the Airport and Airways Trust Fund, which provides most of the funding for aviation programs, or for other airport noise mitigation projects. This finding was particularly important in light of the constrained resources that are available for all aviation programs. 
The final FAA guidance, which is scheduled for issuance by the end of calendar year 2007, explains the current options for reinvesting or transferring the proceeds from the sale of noise land acquired under AIP, giving preference to investment in airport noise compatibility projects. Provisions in the House and Senate reauthorization proposals would authorize these options. These provisions have the potential to help airports further mitigate the adverse effects of the incompatible land uses around airports and could provide additional resources for noise mitigation and other AIP-eligible investments. The House reauthorization bill (H.R. 2881) also contains other provisions that, if enacted, could enhance FAA’s and airports’ efforts to mitigate the impact of noise on communities. Section 503 would allow FAA to accept funds from airport sponsors to conduct special environmental studies to support approved noise compatibility measures for federally funded airport projects. In addition, Section 504 would allow FAA to accept funds, including AIP grants and PFC funds, from a sponsor in order to hire staff or obtain services to provide environmental reviews for new flight procedures that have been approved for airport noise compatibility purposes. Finally, Section 507 would authorize a new pilot program to allow FAA to fund six environmental mitigation demonstration projects at public-use airports to take previously laboratory-tested environmental research concepts into the airport environment in order to determine if they can measurably reduce or mitigate the environmental impacts of aviation noise or emissions. Research and development of technologies for reducing aviation noise has led to advancements that have significantly reduced the amount of noise produced by aircraft, and this research continues, although further advancements will be challenging. NASA, FAA, academic institutions, and the aircraft and manufacturing industry are all involved in research and development projects aimed at reducing aviation noise and its impacts. NASA, in partnership with the aircraft and aircraft engine manufacturing industry, has contributed to a number of advancements in aircraft engine and airframe technology that have substantially reduced the amount of noise produced by aircraft and may lead to further reductions in the future, depending on the extent to which current research leads to noise- reducing aircraft engine and airframe designs. For example, through partnerships with industry, NASA has conducted research on engine noise reduction technologies that have significantly reduced aviation noise. Research on the use of composites has also enabled reductions in the weight of aircraft, which affects the amount of noise the airframe produces. As a result of these and other advancements, the newest aircraft currently in production will produce substantially less noise than the models they will replace. For example, Boeing estimates that the 787 aircraft will produce 60 percent less noise than the 767 and the noise from the 747-800 will be 30 percent less than the 747-400 it is replacing. Similarly, Airbus says that its new A-380 jumbo jet will produce 46 percent less noise than the 747-400. However, industry representatives have indicated that returns are diminishing from these types of improvements. 
FAA conducts a significant amount of its research on aviation noise issues, much of it through the Partnership for Air Transportation Noise and Emission Reduction (PARTNER), the Department of Transportation’s Volpe National Transportation Systems Center, and other entities. PARTNER is a Center of Excellence that brings together experts from government, academia, and industry. Sponsored by FAA, NASA and Transport Canada, PARTNER includes 11 collaborating universities and approximately 50 advisory board members who represent aerospace manufacturers, airlines, airports, state and local governments, and professional and community groups. The collaborating universities and organizations represented on the advisory board provide equal matches for federal funds for research and other activities. PARTNER projects related to aviation noise involve testing alternative descent patterns; identifying a means to reduce aircraft landing noise, fuel consumption, and emissions; assessing the human health and welfare risks of aviation noise; and developing online resources to better inform the public about aviation noise issues. According to FAA, in the last 10 years, it has spent about $42 million on research to characterize noise and improve prediction methods, including developing a capability to determine the trade-offs between noise and emissions and quantifying the costs and benefits of various mitigation strategies. Federal funding for aviation noise research has declined over the past decade, particularly for NASA, which provides most of the federal funding for aeronautics research. NASA’s budget for aeronautics research has dropped by about half over the past decade and is about $717 million for fiscal year 2007. Partly to address this overall funding reduction, NASA has reorganized its aeronautical research portfolio to focus on what it calls “fundamental” research—a relatively early stage in the research and development process that is less costly than the later stages. According to FAA, the combination of a dramatic decrease in NASA’s funding and the reorganization of its aeronautical research portfolio to focus on fundamental research has left a gap in the near- and mid-term applied research and development that could produce technological solutions within the NextGen time frame. According to FAA, most of the federal funding available for mitigating aviation noise is targeted to sound insulation projects for buildings around airports and relocation or acquisition programs. In a 2002 report on reducing the environmental impacts of aviation, the National Research Council’s Committee on Aeronautics Research and Technology for Environmental Compatibility noted that the vast majority of federal expenditures on aviation noise are allocated to noise abatement at individual airports rather than to research on quieter aircraft and engines, which would ultimately reduce aviation noise nationally and internationally. The report concluded that the funding for federal research programs was too low to remove noise as an impediment to the growth of aviation—a conclusion that FAA reiterated in its 2004 report to Congress on aviation and the environment. An analysis prepared by the Aerospace Industries Association indicates that NASA’s aeronautics budget, which includes funding for noise reduction research, has been declining in constant dollars since the mid-1990s (see fig. 3). 
FAA officials told us that both the Senate and the House reauthorization proposals for FAA include several provisions for funding programs that the authorizers believe will be critical to address the research gap. For example, the CLEEN Engine and Airframe Technology Partnership would create a program for the development, maturation, and certification of engine and airframe technologies for aircraft over the next 10 years to reduce aviation noise and emissions. FAA said that the program is intended to provide some short-term advancement while NASA focuses on longer-term research on noise and emissions. NASA officials told us the agency has become more effective in targeting its research resources to areas that have the most potential for success. In particular, these officials cited work on significant noise-reducing technologies that could be implemented in aircraft and engine designs as early as 2015, depending on whether manufacturers take over responsibility for integrating the new technologies into production-ready aircraft. NASA has set goals for developing technologies that could reduce what is known as effective perceived noise (EPN) by 42 EPN dB below Stage 3 standards and that could be implemented in the next generation of aircraft, which NASA refers to as N+1, by 2015 (N is the current generation of advanced twin-engine aircraft). For the longer term (2020), NASA is focusing on the development of tools and technologies that can be used in the design of advanced hybrid wing body aircraft (N+2) and that would achieve even greater noise reductions, in the range of 52 EPN dB below Stage 3 standards. According to NASA, both of these research efforts are also aimed at reducing emissions and fuel burn, which in combination with noise reductions would help mitigate the environmental effects of future increases in air traffic. NASA officials stress that because NASA’s research ends at a relatively early stage of development, aircraft and engine manufacturers would need to take over responsibility for integrating the noise reduction improvements into aircraft and engine designs, and their assumption of this responsibility is not guaranteed. NASA and others in the aeronautics research community are working on similar advanced designs, such as the “silent aircraft” concept that involves researchers from Cambridge University in Great Britain and the Massachusetts Institute of Technology (see fig. 4). Part of the planning for NextGen includes reducing the environmental impact of aviation because concerns about aviation noise and emissions, which will increase with the expected growth in air traffic, are strong constraints on system capacity. A preliminary JPDO analysis shows that noise and emissions could increase between 140 and 200 percent over the next 20 years as a result of increased flights, which would become a significant constraint on planned capacity improvements. Technologies and procedures that are being developed as part of NextGen to improve the efficiency of flight operations are also expected to help reduce the impact of noise. One such technology, considered a centerpiece of the NextGen system, is the Automatic Dependent Surveillance–Broadcast (ADS-B) satellite aircraft navigational system. ADS-B is designed, along with other navigation technologies, to provide for more precise control of aircraft during approach and descent. 
This improved control will facilitate the use of various air traffic control procedures that will reduce communities’ exposure to aviation noise and emissions. For example, the Continuous Descent Arrivals (CDA) procedure (see fig. 5) is expected to allow aircraft to remain at cruise altitudes longer as they approach destination airports, use lower power levels, and thereby lower noise and emissions during landings. Under current landing procedures, aircraft make step-down approaches that alternate short descents and forward thrusts, which produce more noise than a continuous descent. The PARTNER Center of Excellence has designed and flight-tested a nighttime CDA procedure for the Louisville International Airport, which United Parcel Service plans to begin using for its hub operations in the near future. Similarly, Area Navigation/Required Navigation Performance (RNP) procedures will permit aircraft to descend on a precise route that will allow them to avoid populated areas. FAA notes, however, that the new procedures will not always be usable when traffic is heavy at busy airports (see fig. 6). Airports can seek restrictions on the operations of certain types of aircraft to reduce the impact of noise on surrounding communities. FAA implements a national program for reviewing airport noise and access restrictions, known as Part 161. Through this program, FAA reviews airports’ requests to limit the operations of louder aircraft. According to FAA, the Part 161 process has rarely been used since 2000. Only a few airports have drafted Part 161 studies to support requests for restrictions, and only one—Naples Airport in Florida—has fully completed the Part 161 process. Los Angeles International Airport and Bob Hope Airport in Burbank, California, have indicated to FAA that they will be submitting Part 161 studies to FAA to restrict the operations of certain aircraft that meet the Stage 3 noise standards. FAA’s approval will be required for the restrictions these airports are seeking. Because the Part 161 process demands that airports submit studies showing, among other things, the benefits of restricting aircraft operations, airport operators generally choose to negotiate informal agreements with airlines rather than seek mandatory restrictions. Airports have also imposed curfews on aircraft operations in order to reduce the impact of noise in the early morning and late evening. For example, at Reagan National Airport and San Diego International Airport, louder aircraft are not allowed to land or take off in the late evening and early morning. According to FAA, communities are increasingly aware of efforts to plan for and mitigate aviation noise, and complaints about noise are coming increasingly from outside the DNL contours, along with demands for action to address noise in areas outside significant noise contours. Some community groups and the Environmental Protection Agency (EPA) have questioned whether the DNL standard adequately captures the impact of noise on people. FAA officials note that the Federal Interagency Committee on Aviation Noise supports the use of the DNL measure and that the use of the metric to measure noise near airports has been upheld in court decisions. However, a number of airports have undertaken additional measures, such as special noise studies, to respond to community concerns about aviation noise. 
According to some noise experts, the typical airport noise study presents results only in terms of DNL contours on a background map, but very rarely quantifies noise exposure with DNL or any other metric at specified geographic locations in the study area. While DNL contours are used effectively to establish land-use guidelines and define noise mitigation program boundaries, they do not provide residents with practical information about the aviation noise they will experience in their homes. By contrast, the special noise studies not only enable residents to locate their homes on a map that is overlaid with DNL contours, but they also indicate how often airplanes fly overhead, at what time of day flights occur, or how those flights may interfere with activities such as sleeping, speaking, or watching television. According to the experts we spoke with, the public has responded very positively to receiving this detailed information about noise exposure. With growing complaints about noise from outside the DNL contours, airports are also contracting for analyses based on alternative noise metrics to supplement the DNL noise analysis. Although the Federal Interagency Committee on Noise in 1992 recommended continuing the use of the DNL noise metric as the principal means of describing airport noise exposure, it also recommended supplementing this description with noise analyses based on alternative metrics. According to a leading engineering firm that specializes in performing noise analyses, two supplemental metrics are thought to define exposure in ways that the general public can understand more readily than the DNL metric. One of these metrics, the Number Above—which counts how many times noise exceeds a selected threshold level in a given time period—has emerged as the most useful supplemental metric, while another metric, Time Above— the total time that noise exceeds the threshold during the time period—is also being used with increasing frequency. According to FAA officials, FAA supports the use of supplemental metrics, noting that they may be useful in evaluating some specific noise impacts, such as interference with speech, sleep, and learning (see fig. 7). Besides additional studies and supplemental noise metrics, airports are using community outreach and education to address some of the impacts of aviation noise. Representatives of airports and local governments we spoke with emphasized that effective community outreach programs are essential for addressing noise issues that arise when airports are planning to expand or change their operations. One of these representatives noted that early and continuous open communication between the airport, local governments, and the affected communities is a key to gaining support for projects to increase airport capacity. They pointed out that airports should have ongoing efforts to seek stakeholder involvement on airport-related issues and not wait until potential noise problems arise, such as when airport expansion projects are being planned. For example, the San Francisco International Airport has been bringing community representatives and aviation officials together since 1981 to discuss and attempt to resolve airport-related issues through the San Francisco Roundtable—a voluntary body created by the airport that includes representatives from 45 Bay Area jurisdictions, FAA officials, airline advisers, air traffic managers, and the airport director. 
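To make the supplemental metrics described above more concrete, the sketch below computes a Number Above count and a Time Above total from a uniformly sampled noise record. It is a minimal illustration under simplifying assumptions: the function names and sample values are hypothetical, and actual noise studies derive these metrics from measured or modeled aircraft noise events rather than a toy time series.

```python
# Minimal sketch of the two supplemental noise metrics discussed above, computed
# from a uniformly sampled noise time series. Function names and sample data are
# hypothetical; real analyses would use measured or modeled single-event levels.

def number_above(samples_db, threshold_db):
    """Count distinct excursions above the threshold (the Number Above metric)."""
    count = 0
    above = False
    for level in samples_db:
        if level > threshold_db and not above:
            count += 1            # a new excursion above the threshold begins
        above = level > threshold_db
    return count

def time_above(samples_db, threshold_db, seconds_per_sample):
    """Total time, in seconds, that the level exceeds the threshold (Time Above)."""
    return sum(seconds_per_sample for level in samples_db if level > threshold_db)

# One-second samples spanning two overflights that exceed a 65 dB threshold.
samples = [55, 58, 66, 72, 70, 64, 57, 55, 68, 71, 66, 59]
print(number_above(samples, 65))       # 2 excursions above 65 dB
print(time_above(samples, 65, 1.0))    # 6.0 seconds above 65 dB
```

Counting each continuous excursion above the threshold as a single event keeps the Number Above result aligned with how residents experience individual overflights, which is part of why these metrics are easier for the public to interpret than an energy-averaged value such as DNL.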
In addition, according to a San Francisco International Airport official, the airport reaches out to the community through its Managed Noise Mitigation program, which encourages communities affected by airport noise to determine their noise mitigation priorities and manage their distribution of noise mitigation funds in accordance with their priorities. Other airports have also made community outreach an important component of their efforts to deal with the impacts of aviation noise. For instance, Chicago established the O’Hare Noise Compatibility Commission in 1996 to begin constructive dialogue on aircraft noise issues with the 40 communities surrounding O’Hare International Airport. The commission’s community outreach efforts include a Web site on aircraft noise issues; a community outreach vehicle that travels to schools, libraries, and community events and provides aircraft noise and noise-monitoring demonstrations; and a quarterly newsletter that highlights the work of the commission and its work to reduce noise at O’Hare. To support airports’ community outreach efforts, the Transportation Research Board (TRB) is undertaking a project that is intended to result in guidance for airports on best practices in community outreach. According to TRB, the project will identify the jurisdictions with authority over various aspects of aviation noise and the obstacles to airport operations and development that can occur because of surrounding communities’ negative perceptions about local aviation noise. The study will result in a guidebook about local aviation noise that will allow airport decision makers to manage expectations related to aviation noise within the community. The study also includes alternative ways to communicate noise issues and suggests other improvements that can help ease concerns about aviation noise issues. Reducing aviation noise requires technological advances, substantial funding from government and the aviation industry, and cooperation among stakeholders and communities on land-use issues. Fulfilling these requirements will be challenging because the pace of improvement in existing technologies may have slowed, government and industry resources are constrained, and land use involves strong competing interests. While most of these challenges will take years to fully address, steps can be taken now to help mitigate the impact of noise on communities and reduce the constraints that noise can have on transforming the air traffic system. The first challenge will be to continue reducing the amount of noise from aircraft engines and airframes. NASA’s, FAA’s, and manufacturers’ past research and development efforts have led to advances that have significantly lowered aviation noise, but the timing of the next leaps in technologies is uncertain. While NASA is conducting work on technologies that it believes could, with industry support, lead to significant noise reductions by 2015, FAA and aircraft industry representatives maintain that, for some time, reductions in aircraft noise are likely to be incremental. In addition, it may be technologically challenging to improve the environment by reducing aviation noise without adversely affecting the environment in other ways. As we reported in 2003, designing aircraft engines to minimize noise could increase fuel burn, which would release more carbon dioxide and other greenhouse gases into the atmosphere. Funding noise reduction research and development programs poses a challenge for federal agencies. 
Given the federal government’s long-term structural fiscal imbalance, additional funding for such programs may not be available without shifting funds from other aviation noise reduction efforts, such as programs to mitigate the impact of noise on communities. Currently, most of the federal funding for reducing aviation noise goes to soundproofing programs. Although funding for noise mitigation programs may not generate the highest return on investments, reducing such funding could make it more difficult to obtain community approval of airport expansion projects necessary to increase system safety and efficiency. Provisions in the Senate and House reauthorization bills, such as the CLEEN proposal, could help to address the challenges in this area, and industry funding will continue to play an important role. Implementing new noise reduction technologies, whether by integrating new, quieter aircraft into the fleet or by retrofitting aircraft, poses financial challenges for the aviation industry. Aircraft have an average lifespan of about 30 years, and it can take almost that entire period for airlines to pay for an aircraft. The current fleet is, on average, about half as many years old—11 years for wide-body aircraft and 14 years for narrow-body aircraft—and is therefore expected to be in operation for many years to come. Additionally, the financial pressures facing many airlines make it difficult for them to upgrade their fleets with new, quieter aircraft. Currently, for example, U.S. carriers have placed only 40, or less than 6 percent, of the more than 700 orders that Boeing officials say the company has received for its new state-of-the-art 787. These financial pressures also have implications for airlines’ ability to equip new and existing aircraft with NextGen technologies such as ADS-B that can enable more efficient, quieter approaches and descents. FAA estimates that it will cost the industry about $14 billion to equip aircraft to take full advantage of NextGen. Congress and FAA may want to consider how to incentivize the airlines to train their pilots and to equip and retrofit the fleet with the technologies necessary to operate in NextGen as soon as possible. Even with the introduction of quieter aircraft and the implementation of NextGen technologies and procedures that will enable quieter aircraft approaches and landings, there will still be some noise around airports. Additionally, these reductions in aviation noise are likely to be eroded by the public’s increasing awareness of and sensitivity to even moderate amounts of aviation noise and by predicted increases in the number of aircraft flying overhead. Hence, incompatible land use will continue to present obstacles to airport expansion projects. However, since most airports are owned and managed by state or local authorities, it is incumbent upon those authorities to work in good faith with FAA to minimize incompatible land use in their jurisdictions (see fig. 8). State and local authorities can take action, through land-use planning and development, zoning, and housing regulation, to limit the use of land near airports to purposes compatible with airport operations. State and local governments could require, for example, that appropriate notice of airport noise exposure be provided to purchasers of real estate and to prospective residents near airports to ensure awareness of aviation noise issues.
In addition, FAA can make it easier for airports to dispose of AIP noise land by completing and issuing its draft guidance on this process. Passing the related provisions in the Senate and House FAA reauthorization bills will also be an important step. Thank you, Mr. Chairman and Members of the Subcommittee. This concludes my prepared statement. I will be glad to answer any questions that you may have at this time. For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this testimony include Ed Laughlin, Lauren Calhoun, Bess Eisenstadt, Jim Geibel, David Hooper, Rosa Leung, Maureen Luna-Long, Josh Ormond, Jena Sinkfield, and Larry Thomas.
Gillespie Field (San Diego, CA); John F. Kennedy International (New York, NY); Van Nuys (Van Nuys, CA)
Airport Finance: Observations on Planned Airport Development Costs and Funding Levels and the Administration’s Proposed Changes in the Airport Improvement Program. GAO-07-885. Washington, D.C.: June 29, 2007.
Reagan National Airport: Update on Capacity to Handle Additional Flights and Impact on Other Area Airports. GAO-07-352. Washington, D.C.: February 28, 2007.
Aviation and the Environment: Strategic Framework Needed to Address Challenges Posed by Aircraft Emissions. GAO-03-252. Washington, D.C.: February 28, 2003.
Aviation Infrastructure: Challenges Related to Building Runways and Actions to Address Them. GAO-03-164. Washington, D.C.: January 30, 2003.
Aviation and the Environment: Airport Operations and Future Growth Present Environmental Challenges. GAO/RCED-00-153. Washington, D.C.: August 30, 2000.
Aviation and the Environment: Results from a Survey of the Nation’s 50 Busiest Commercial Service Airports. GAO/RCED-00-222. Washington, D.C.: August 30, 2000.
Aviation and the Environment: FAA’s Role in Major Airport Noise Programs. GAO/RCED-00-98. Washington, D.C.: April 28, 2000.
Reagan National Airport: Limited Opportunities to Improve Airlines’ Compliance with Noise Abatement Procedures. GAO/RCED-00-74. Washington, D.C.: June 29, 2000.
To address projected increases in air traffic and current problems with aviation congestion and delays, the Joint Planning and Development Office (JPDO), an interagency organization within the Federal Aviation Administration (FAA), is working to plan and implement a new air traffic management system, known as the Next Generation Air Transportation System (NextGen). This effort involves implementing new technologies and air traffic control procedures, airspace redesign, and infrastructure developments, including new or expanded runways and airports. Community opposition is, however, a major challenge, largely because of concerns about aviation noise. As a result, according to JPDO, aviation noise will be a primary constraint on NextGen unless its effects can be managed and mitigated. GAO's requested testimony addresses (1) the key factors that affect communities' level of exposure to aviation noise, (2) the status of efforts to address the impact of aviation noise, and (3) major challenges and next steps for reducing and mitigating the effects of aviation noise. The testimony is based on prior GAO work (including a 2000 survey of the nation's 50 largest airports), updated with reviews of recent literature, FAA data and forecasts, and interviews with officials from FAA and the National Aeronautics and Space Administration (NASA), industry and community representatives, and aviation experts. Key factors affecting the level of aviation noise that communities are exposed to include jet aircraft operations, land uses around airports, and aircraft flight paths. With more stringent regulatory standards for aviation noise, enabled by advances in technology, aircraft operations have become quieter, but aviation noise is still a problem when communities allow incompatible land uses, such as residences, schools, and hospitals, near airports. Aircraft flight paths also expose communities to aviation noise, and airspace redesign efforts, which are intended to improve aviation system safety and efficiency, may expose some previously unaffected communities to noise, raising concerns in those communities about higher noise levels. A number of efforts are underway or planned to address the impact of aviation noise on communities. More stringent noise standards for aircraft have been implemented, billions of federal dollars have been spent to soundproof buildings around airports, federal and private funding for research and development has advanced technologies to reduce aviation noise, NextGen technologies and procedures are being planned and will contribute to reducing communities' exposure to noise, some airports have imposed restrictions on the operation of certain aircraft, and airports are reaching out to communities to address their concerns about aviation noise and gain support for projects to increase airports' safety and efficiency. Major challenges for reducing or mitigating the effects of aviation noise include continuing to make technological advances; obtaining substantial funding--from the federal government for NextGen in particular and from industry for equipping aircraft with new technologies--and cooperating on land-use issues. Next steps could include state and local actions to limit incompatible development, FAA's issuance of guidance related to the disposal of land acquired with federal funding for noise mitigation purposes, and the passage of legislative proposals that would address environmental issues, including the reduction of aviation noise. 
FAA and NASA officials generally agreed with the information presented in this testimony and provided technical clarifications that GAO incorporated.
State and local governments generally have the principal responsibility for meeting mass care and other needs in responding to a disaster; however, governments largely carry out this responsibility by relying on the services provided by voluntary organizations. Voluntary organizations provide sheltering, feeding, and other services, such as case management, to disaster victims and have long supported local, state, and federal government responses to disasters. Voluntary organizations have historically played a critical role in providing services to disaster victims, both on a routine basis—in response to house fires and local flooding, for example—and in response to far rarer disasters such as devastating hurricanes or earthquakes. Their assistance can vary from providing immediate services to being involved in long-term recovery efforts, including fund-raising. Some are equipped to arrive at a disaster scene and provide immediate mass care, such as food, shelter, and clothing. Other charities address short-term needs, such as providing case management services to help disaster victims obtain unemployment or medical benefits. Other voluntary organizations provide long-term disaster assistance, such as job training or temporary housing assistance for low-income families. In addition, local organizations that do not typically provide disaster services may step in to address specific needs, as occurred when churches and other community organizations began providing sheltering after the Gulf Coast hurricanes. The American Red Cross, a nongovernmental organization founded in 1881, is the largest of the nation’s mass care service providers. Operating under a congressional charter since 1900, the Red Cross provides volunteer humanitarian assistance to the armed forces, serves as a medium of communication between the people of the United States and the armed forces, and provides direct services to disaster victims, including feeding, sheltering, financial assistance, and emergency first aid. An additional key player in the voluntary sector is the National Voluntary Organizations Active in Disaster (NVOAD), an umbrella organization of nonprofits that are considered national in their scope. Established in 1970, NVOAD is not itself a service delivery organization but rather coordinates planning efforts by many voluntary organizations responding to disaster, including the five organizations in this review. In addition to its 49 member organizations, NVOAD also coordinates with chartered state Voluntary Organizations Active in Disaster (VOAD) and their local affiliates. In 2005, Hurricanes Katrina and Rita revealed many weaknesses in the federal disaster response that were subsequently enumerated by numerous public and private entities—including GAO, the White House, and the American Red Cross. These weaknesses included a lack of clarity in roles and responsibilities among and between voluntary organizations and FEMA and a need for the government to include voluntary organizations in national and local disaster planning. According to several post-Katrina reports, the contributions of voluntary organizations, especially faith-based groups, had not been effectively integrated into the earlier federal plan for disaster response—the 2004 National Response Plan. These reports called for better coordination among government agencies and voluntary organizations through cooperative relationships and joint planning and exercises.
Under the Homeland Security Act, which President Bush signed in 2002, as amended by the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act), FEMA has been charged with responsibility for leading and supporting a national, risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. In support of this mission, FEMA is required to partner with the private sector and nongovernmental organizations, as well as state, local, tribal governments, emergency responders, and other federal agencies. Under the act, FEMA is specifically directed, among other things, to build a comprehensive national incident management system; consolidate existing federal government emergency response plans into a single, coordinated national response plan; administer and ensure the implementation of that plan, including coordinating and ensuring the readiness of each emergency support function under the plan; and update a national preparedness goal and develop a national preparedness system to enable the nation to meet that goal. As part of its preparedness responsibilities, FEMA is required to develop guidelines to define risk-based target capabilities for federal, state, local, and tribal preparedness and establish a comprehensive assessment system to assess, on an ongoing basis, the nation’s prevention capabilities and overall preparedness. FEMA is also required to submit annual reports which describe, among other things, the results of the comprehensive assessment and state and local catastrophic incident preparedness. FEMA may also use planning scenarios to reflect the relative risk requirements presented by all kinds of hazards. As we noted in previous reports and testimony, the preparation for a large-scale disaster requires an overall national preparedness effort designed to integrate what needs to be done (roles and responsibilities), how it should be done, and how well it should be done. The principal national documents designed to address each of these questions are the National Response Framework, the National Incident Management System, and the National Preparedness Guidelines. A core tenet of these documents is that governments at all levels, the private sector, and nongovernmental organizations, such as the Red Cross and other voluntary organizations, coordinate during disasters that require federal intervention. (See fig. 1.) DHS’s National Response Framework, which became effective in March 2008, delineates roles for federal, state, local, and tribal governments; the private sector; and voluntary organizations in responding to disasters. The new framework revises the National Response Plan, which was originally signed by major federal government agencies, the Red Cross, and NVOAD in 2004. Under the National Response Framework, voluntary organizations are expected to contribute to these response efforts through partnerships at each level of government. In addition, FEMA, in conjunction with its voluntary agency liaisons, acts as the interface between these organizations and the federal government. (See fig. 2.) The Framework also creates a flexible and scalable coordinating structure for mobilizing national resources in a large-scale disaster. Under the Framework, local jurisdictions and states have lead responsibility for responding to a disaster and can request additional support from the federal government as needed. 
In addition, for catastrophic incidents that almost immediately overwhelm local and state resources and result in extraordinary levels of mass casualties or damage, the Framework—through its Catastrophic Incident Supplement—specifies the conditions under which the federal government can proactively accelerate the national response to such disasters without waiting for formal requests from state governments. The Supplement was published in 2006 after Hurricane Katrina. The National Response Framework organizes the specific needs that arise in disaster response into 15 emergency support functions, or ESFs. Each ESF comprises a coordinator, a primary agency, and support agencies—usually governmental agencies—that plan and support response activities. Typically, support agencies have expertise in the respective function, such as in mass care, transportation, communication, or firefighting. In a disaster, FEMA is responsible for activating the ESF working groups of key federal agencies and other designated organizations that are needed. For the voluntary organizations in our review, Emergency Support Function 6 (ESF-6) is important because it outlines the organizational structure used to provide mass care and related services in a disaster. These services are mass care (e.g., sheltering, feeding, and bulk distribution of emergency items), emergency assistance (e.g., evacuation and the safety and well-being of pets), disaster housing (e.g., roof repair, rental assistance), and human services (e.g., crisis counseling, individual case management). Under ESF-6, FEMA is designated as the primary federal agency responsible for coordinating and leading the federal response for mass care and related human services, in close coordination with states and others such as voluntary organizations—a role change made in 2008 in response to issues that arose during Katrina. FEMA carries out this responsibility by convening federal ESF-6 support agencies during disasters and coordinating with states to augment their mass care capabilities as needed. Under ESF-6, the Red Cross and NVOAD are each named as support agencies to FEMA, along with numerous federal departments, such as the Department of Health and Human Services. FEMA’s voluntary agency liaisons, located in FEMA regions, are largely responsible for carrying out these coordinating duties with voluntary organizations. As private service providers fulfilling their humanitarian missions, the voluntary organizations in our review have historically served as significant sources of mass care and other services in large-scale disasters and play key roles in national response—in coordination with local, state, and federal governments—under the National Response Framework. While their response structures differ in key ways—with some having more centralized operations than others, for example—these voluntary organizations coordinate their services through formal written agreements and through informal working relationships with other organizations. In recognition of their long-standing leadership in providing services to disaster victims, these organizations, especially the American Red Cross and NVOAD, have considerable roles in supporting FEMA under the nation’s National Response Framework. While this new Framework shifted the Red Cross from a primary agency for mass care to a support agency, largely because the Red Cross cannot direct federal resources, the 2006 Catastrophic Incident Supplement has not been updated to reflect this change.
FEMA does not currently have a timetable for revising the Supplement, as required under the Post-Katrina Act, and while FEMA and Red Cross officials told us that they have a mutual understanding of the Red Cross’s role as a support agency in a catastrophic disaster, this understanding is not currently documented. While the major national voluntary organizations in our review differ in their types of services and response structures, they have all played important roles in providing mass care and other services, some for over a century. According to government officials and post-Katrina reports, the Red Cross and the other voluntary organizations we reviewed are a major source of mass care and other disaster services, as was evident in the response to Hurricane Katrina. The five voluntary organizations we reviewed differ in the extent to which they focus on providing disaster services and in the types of services they provide. Four of the five organizations directly provide a variety of mass care and other services, such as feeding and case management, while the fifth—the United Way—focuses on fund-raising for other organizations. As the nation’s largest disaster response organization, the Red Cross is the only one of the five organizations in our review whose core mission is to provide disaster response services. In providing its services, the Red Cross typically coordinates with state and local governments to support their response and has formal agreements with state or local emergency management agencies to provide mass care and other disaster services. For example, the Red Cross serves as a support agency in the Washington, D.C., disaster response plan for mass care, feeding, and donations and volunteer management. In contrast to the Red Cross, The Salvation Army, the Southern Baptist Convention, and Catholic Charities are faith-based organizations that provide varying types and degrees of disaster services—some for decades—as an extension of their social and community service missions. The United Way raises funds for other charities and provides resources to local United Way operations, but does not directly provide services to survivors in response to disasters. (See table 1.) While voluntary organizations have traditionally played an important role in large-scale disasters, their role in the response to Hurricane Katrina, the largest natural disaster in U.S. history, was even more significant, especially for the three mass care service providers in our study—the Red Cross, The Salvation Army, and the Southern Baptist Convention. For example, after Katrina, the Red Cross provided more than 52.6 million meals and snacks and opened more than 1,300 shelters across 27 states, while the Southern Baptist Convention provided more than 14.6 million meals and The Salvation Army provided 3.8 million articles of clothing. While Catholic Charities USA and its affiliates do not generally provide mass care services, they assisted with feeding during Katrina by donating food. (See table 2.) The four direct service providers in our study—the Red Cross, The Salvation Army, the Southern Baptist Convention, and Catholic Charities—each have distinct disaster response structures, with their national offices having different levels of authority over the organization’s affiliates and resources, reflecting a continuum from more centralized operations, such as the Red Cross, to more decentralized operations, such as Catholic Charities USA.
For example, in a large-scale disaster, the national office of the Red Cross directly sends headquarters-based trained staff, volunteers, and equipment to the affected disaster site, while Catholic Charities USA’s disaster response office provides technical assistance to the affected member dioceses but does not direct resources. (See table 3.) Similarly, to facilitate its ability to direct a nationwide response from headquarters, the Red Cross has a national headquarters and service area staff of about 1,600 as of May 2008, maintains a 24/7 disaster operations center at its headquarters, and has a specially trained cadre of over 71,000 volunteers who are nationally deployable, according to the Red Cross. In contrast, the Southern Baptist Convention and Catholic Charities each have 1 or 2 staff at their national offices who are responsible for disaster response coordination for their organizations. These differences in the national offices’ roles within the voluntary organizations mean that when voluntary organizations respond to disasters of increasing magnitude by “ramping up”—a process similar to the scalable response described in the National Response Framework—they do so in different ways and to different extents. While the voluntary organizations in our review coordinate with one another and with the government, their disaster response structures are not necessarily congruent with the response structures of other voluntary organizations or aligned geographically or jurisdictionally with those of government. In essence, the voluntary organizations’ response structures do not necessarily correspond to the local, state, and federal structures of response described in the National Response Framework. For example, The Salvation Army and Catholic Charities are not aligned geographically with states, while the Southern Baptist Convention is organized roughly along state lines into units called state conventions, and the Red Cross’s organizational structure supports regional chapter groupings, which are also aligned generally by state. Furthermore, while the Red Cross and The Salvation Army have regional or larger territorial units, these are not necessarily congruent with FEMA’s 10 regions. (See table 4.) In a similar vein, these service providers do not necessarily follow the command and control structure typical of the federal incident command system set forth in the National Incident Management System (NIMS) for unifying disaster response. These organizations vary in the extent to which they have adopted this command system, according to officials we spoke with. For example, organization officials told us that the Red Cross, The Salvation Army, and the Southern Baptist Convention use this command system, while Catholic Charities does not. The voluntary organizations in our review coordinate and enhance their service delivery through formal written agreements at the national level. While not all of the voluntary organizations have such agreements with each other, the Red Cross maintains mutual aid agreements with the national offices of The Salvation Army, the Southern Baptist Convention, and Catholic Charities USA, as well as 39 other organizations with responsibilities under ESF-6. For example, under a 2000 agreement between the Red Cross and the Southern Baptist Convention, a feeding unit addendum describes operations and financial responsibilities when the two organizations provide mass feeding services cooperatively.
According to Southern Baptist Convention officials, the general premise of this agreement is that the Convention will prepare meals in its mobile feeding units, while the Red Cross will distribute these meals using its emergency response vehicles. According to many of the voluntary organization officials we interviewed, another essential ingredient for response is to have active, informal working relationships with leaders of other organizations that are well established before disasters strike. These relationships are especially important when organizations do not have formal written agreements or when the agreements do not necessarily represent the current relationship between two organizations. Regular local VOAD meetings and joint training exercises with local and state governments facilitate these working relationships by providing an opportunity for relationship building and informal communication. For example, a Florida catastrophic planning exercise in 2006-2007 brought together 300 emergency management professionals and members of the Florida VOAD to develop plans for two types of catastrophic scenarios. According to disaster officials, relationships built through this type of interaction allow participants to establish connections that can be drawn upon during a disaster. The National Response Plan that was instituted after September 11, and the 2008 National Response Framework, which superseded it, both recognized the key role of the Red Cross and NVOAD member organizations in providing mass care and other services by giving the Red Cross and NVOAD responsibilities under the ESF-6 section of the Framework. The 2008 National Response Framework, which revised the National Response Plan, clarified some aspects of the Red Cross’s role that had been problematic during the Katrina response. Under the 2008 ESF-6 section of the Framework, the Red Cross has a unique federally designated role as a support agency to FEMA for mass care. As noted in our recent report, the Red Cross was previously designated as the primary agency for mass care under ESF-6 in the 2004 National Response Plan, but the Red Cross’s role was changed under the 2008 Framework to that of a support agency. This role change was made in large part because FEMA and the Red Cross agreed—in response to issues that arose during Katrina—that the primary agency responsible for coordinating mass care nationwide needs to be able to direct federal resources. As a support agency under ESF-6, the Red Cross helps FEMA and the states coordinate mass care activities in disasters. In particular the Red Cross is charged with providing staff and specially trained liaisons to work at FEMA’s regional offices and other locations, and providing subject matter expertise on mass care planning, preparedness, and response. In addition, the Red Cross is expected to take the lead in promoting cooperation and coordination among government and national voluntary organizations that provide mass care during a disaster, although it does not direct other voluntary organizations in this role. (See fig. 3.) ESF-6 also acknowledges the Red Cross’s separate role as the nation’s largest mass care service provider, which is distinct from its role under the Framework. When providing mass care services, the Red Cross acts on its own behalf and not on behalf of the federal government, according to the ESF-6. 
In recent months, the Red Cross has reported a significant budget deficit that has led it to substantially reduce its staff, including those assigned to FEMA and its regional offices, and to seek federal funding for its ESF-6 responsibilities—a major policy shift for the organization. According to Red Cross officials, the Red Cross has experienced major declines in revenues in recent years, and the organization reported a projected operating budget deficit of about $150 million for fiscal year 2008. To address this shortfall, in early 2008 the Red Cross reduced the number of its staff by about 1,000, with most of these staffing cuts made at its national headquarters and in service areas, in departments that support all Red Cross functions, such as information technology, human resources, and communications. These cuts included eliminating its full-time staff at FEMA’s 10 regional offices and reducing staff that supported state emergency management agencies from 14 to 5. While it is too soon to tell the impact of these changes, Red Cross officials we spoke with told us these staffing cutbacks will not affect the organization’s ability to provide mass care services. For example, several positions were added to its Disaster Services unit to support local chapters’ service delivery, according to Red Cross data, including area directors and state disaster officers—a new position at the Red Cross. However, with regard to ESF-6, Red Cross officials also said that while the organization will continue to fulfill its responsibilities, it is changing the way it staffs FEMA’s regional offices during disasters by assigning these responsibilities, among others, to state disaster officers and using trained volunteers to assist in this role. According to the Red Cross, its annual cost for employing a full-time staff person in each FEMA regional office and for staffing its headquarters to support federal agencies during disasters is $7 million, for an operation that the Red Cross says is no longer sustainable. Consequently, in May 2008 testimony before the Senate Committee on Homeland Security and Governmental Affairs, the Red Cross requested that Congress authorize and appropriate funding to cover these positions and responsibilities under the ESF-6. In addition, the Red Cross requested $3 million to assist it in funding its role of integrating the mass care services provided by the nongovernmental sector, for a total of $10 million requested. In addition to the Red Cross, NVOAD is also designated as a support agency under the 2008 ESF-6 section of the Framework, as it was in the previous national plan. In its role as a support agency for mass care, NVOAD is expected to serve as a forum enabling its member organizations to share information, knowledge, and resources throughout a disaster; it is also expected to send representatives to FEMA’s national response center to represent the voluntary organizations and assist in disaster coordination. A new element in the 2008 ESF-6 is that voluntary organizations that are members of NVOAD are also specifically cited in ESF-6 under NVOAD, along with descriptions of their services or functions in disaster response.
According to NVOAD and FEMA officials, listing the individual NVOAD members and their services in the ESF-6 does not change organizations’ expected roles or create any governmental obligations for these organizations to respond in disasters, but rather recognizes that NVOAD represents significant resources available through the membership of the voluntary organizations. While the Red Cross’s role for ESF-6 has been changed from that of a primary agency under the National Response Plan to that of a support agency under the new Framework, the Catastrophic Incident Supplement still reflects its earlier role, requiring the Red Cross to direct federal mass care resources. The Supplement provides the specific operational framework for responding to a catastrophic incident, in accordance with federal strategy. When the Supplement was issued, in 2006, the Red Cross was the primary agency for coordinating federal mass care assistance and support for the mass care section of ESF-6 under the National Response Plan. As previously mentioned, in January 2008 the Red Cross’s role under ESF-6 changed from that of a primary agency to that of a support agency, partly because the Red Cross lacks the authority to direct federal resources. The Supplement has not yet been updated to reflect this recent change in the Red Cross’s role. However, FEMA and Red Cross officials agreed that in a catastrophic incident, the Red Cross would serve as a support agency for mass care—not as the lead agency—and therefore would not be responsible for directing federal resources. According to FEMA, in a catastrophic incident, the management, control, dispensation, and coordination of federal resources will change, shifting this responsibility from the Red Cross to FEMA, so as to be consistent with the National Response Framework and the ESF-6. In addition to describing its ESF-6 support agency responsibilities in a catastrophic disaster, the Supplement lays out the mass care services the Red Cross would provide in a catastrophic disaster—acting as a private organization—and FEMA and Red Cross officials agreed that the Red Cross would continue to provide these services as part of its private mission, regardless of the change to its role in the ESF-6 or any future revisions to the Supplement. The Red Cross’s services and actions as a private service provider are integrated into the Supplement for responding to catastrophic disasters. In an event of catastrophic magnitude, the Red Cross is expected to directly provide mass care services to disaster victims, such as meals and immediate sheltering services to people who are denied access to their homes. The Supplement also includes the Red Cross in a schedule of actions that agencies are expected to automatically take in response to a no-notice disaster, such as a terrorist attack or devastating earthquake. For example, within 2 hours after the Supplement is implemented, the Red Cross is expected to inventory shelter space in a 250-mile radius of the disaster using the National Shelter System, dispatch specially trained staff to assess needs and initiate the Red Cross’s national response, coordinate with its national voluntary organization partners to provide personnel and equipment, and deploy Red Cross kitchens and other mobile feeding units. 
However, according to the ESF-6, in providing these mass care services, the Red Cross is acting on its own behalf and not on behalf of the federal government or other governmental entity, and the Supplement similarly states that the Red Cross independently provides mass care services as part of its broad program of disaster relief. According to Red Cross officials, if the Supplement were implemented, the Red Cross would continue providing the same mass care services that it has always provided as a private organization. FEMA officials agreed that the agency’s expectations of the services the Red Cross would provide in a catastrophic event have not changed and that the Red Cross’s role as a service provider has not been affected by the changes to the ESF-6. According to FEMA officials, the agency will augment the Red Cross’s resources in a catastrophic disaster, and the two organizations are working together to develop a memorandum of agreement to ensure that the Red Cross is provided with adequate federal support for logistics, human resources, and travel in a catastrophic event. Although FEMA is charged with revising the Supplement under the Post-Katrina Act, agency officials told us that the agency does not currently have a time frame for updating the Supplement and does not have an interim agreement documenting FEMA’s and the Red Cross’s understanding of the Red Cross’s role as a support agency under the Supplement. FEMA officials told us that the agency was revising the 2004 Catastrophic Incident Annex—a brief document that establishes the overarching strategy for a national response to this type of incident—but that it does not yet have a time frame for updating the more detailed Supplement, which provides the framework for implementing this strategy, although the agency told us that it is in the process of establishing a review timeline. According to FEMA, future revisions to the Supplement will shift responsibility for directing federal mass care resources from the Red Cross to FEMA, in order to remain consistent with the National Response Framework and ESF-6. Furthermore, FEMA and the Red Cross told us that they have a mutual understanding of the Red Cross’s role as a support agency in a catastrophic disaster. However, this understanding is not currently documented. As the experience in responding to Hurricane Katrina demonstrated, it is important to have a clear agreement on roles and responsibilities. Crafting such agreements in writing ahead of time—before the need to respond to a catastrophic event—would help identify and resolve potential sources of misunderstanding and communicate this understanding not just to FEMA and the Red Cross, but also to FEMA’s many support agencies for ESF-6 and the Red Cross’s partner organizations in the voluntary sector. There is also precedent for having an interim agreement on changed roles: In 2007, while the National Response Plan was being revised, FEMA and the Red Cross developed an interim agreement on roles and responsibilities that set forth the Red Cross’s shift from primary to support agency. In response to weaknesses in service delivery that became evident during Hurricane Katrina, the American Red Cross, The Salvation Army, the Southern Baptist Convention, and Catholic Charities have acted to expand their service coverage and strengthen key aspects of their structures. The Red Cross has reorganized its chapters and established new partnerships with local community and faith-based organizations, particularly in rural areas with hard-to-reach populations.
While Red Cross officials did not expect these improvements to be undermined by the organization’s budget deficit, the effect of recent staff reductions at headquarters and elsewhere remains to be seen. Meanwhile, all four organizations, to varying degrees, have made changes to strengthen their ability to coordinate services by collaborating more on feeding and case management and improving their logistical and communications systems. Recognizing that its service coverage had been inadequate during the 2005 Gulf Coast hurricanes, the Red Cross subsequently reorganized its service delivery structure and initiated or strengthened partnerships with local community organizations—a process that is still ongoing. During Katrina, when approximately 770,000 people were displaced, the Red Cross was widely viewed as not being prepared to meet the disaster’s unprecedented sheltering needs, in part because some areas—particularly rural areas—lacked local chapters or were not offering services; furthermore, the Red Cross had weak relationships with faith-based and other community groups that stepped in during this crisis to assist disaster victims. To address these problems, the Red Cross is implementing two main initiatives. Under the first initiative, to expand and strengthen its service delivery, including its capacity to respond to catastrophic disasters, the Red Cross is reorganizing its field structure in two ways. One is establishing a more flexible approach to service delivery to accommodate the varying needs of diverse communities within the same jurisdiction. According to the Red Cross, the jurisdiction of many chapters consisted of urban, suburban, and rural counties. Previously, chapter services were based on an urban model, but this one-size-fits-all approach, according to the Red Cross, was not well suited to the needs and capacities of suburban and rural areas. The Red Cross now differentiates among three service levels, and each chapter can match service levels to the communities within its jurisdiction according to the community’s population density and vulnerability to disasters. As part of this differentiated approach, the chapters also use a mix of methods for providing services—from teams of disaster-trained volunteers to toll-free numbers and the Internet to formal partnerships—depending on the service level needed. The other is realigning its regional chapter groupings—each consisting of three to eight local chapters—to cover larger geographic areas and additional populations and to better support their local chapters. Regional chapters were established based on factors such as population density, total geographic area, and community economic indicators. According to the Red Cross, streamlining administrative back-office functions, such as human resources and financial reporting, through an organization-wide initiative to reduce duplication will free up chapter resources for service delivery. With this realignment, regional chapters are now expected to provide their local chapters with technical assistance, evaluate local chapters’ overall service delivery capacity, and identify strategies to maximize service delivery, according to the Red Cross. Under the second initiative, the Red Cross is working to strengthen its local chapters’ relationships with local faith- and community-based organizations to help better serve diverse and hard-to-reach populations.
During Katrina, the Red Cross lacked such relationships in certain parts of the country, including hurricane-prone areas, and did not consistently serve the needs of many elderly, African-American, Latino, and Asian-American disaster victims and people with disabilities. To remedy this, the Red Cross initiated a new community partnership strategy under which local chapters identify key community organizations as possible disaster response partners and enter into agreements with them on resources to be provided, including reimbursements for costs associated with sheltering disaster victims. The partnership strategy’s goals include improving service to specific communities by overcoming linguistic and cultural barriers; increasing the number of possible facilities for use as shelters, service centers, and warehouses; and enlisting the support of organizations that have relationships with the disabled community. According to Red Cross officials, local chapters around the country have initiated thousands of new partnerships with faith-based and local community organizations. However, because these partnerships are formed at the local chapter level, the national office does not track the exact number of new agreements signed, according to the Red Cross. In addition, the Red Cross has taken some actions to better address the mass care needs of disaster victims with disabilities—a particular concern during Katrina—although concerns still remain about the nation’s overall preparations for mass care for people with disabilities. For example, the Red Cross developed a shelter intake form to help volunteers determine whether a particular shelter can meet an individual’s needs, as well as new training programs for staff and volunteers that specifically focus on serving the disabled, as we previously reported. It has also prepositioned in warehouses items such as cots that can be used in conjunction with wheelchairs, to improve the accessibility of shelters. However, as we reported in February 2008, Red Cross headquarters officials told us that some local chapters were not fully prepared to serve people with disabilities and that it was difficult to encourage local chapters to implement accessibility policies. In the report we also noted that FEMA had hired a disability coordinator to improve mass care services for the disabled, but it had not yet coordinated with the National Council on Disability, as required under the Post-Katrina Act. More specifically, we recommended that FEMA develop a set of measurable action steps, in consultation with the disability council, for coordinating with the council. According to the National Council on Disability, while FEMA and the council have met on several occasions to discuss their joint responsibilities under the Post-Katrina Act, FEMA has not yet developed action steps for coordination in consultation with the council. FEMA officials told us they are preparing an update for us on their response to the recommendation. Although the Red Cross has recently reduced its staffing levels significantly, the staffing cutbacks were designed to uphold the organization’s delivery of disaster services, according to the Red Cross. Red Cross national officials told us that overall, these and other staffing cuts were designed to leave service delivery intact and that the Red Cross plans to maintain the reorganization of its chapter and service level structure as well as its community partnership initiative.
However, since these changes are so recent, it remains to be seen how or whether the cuts and realignment of responsibilities will affect the organization’s post-Katrina efforts to expand and strengthen its service delivery. On the basis of their experiences with large-scale disasters, including Katrina, the national offices of the direct service providers in our study, and to some extent their local offices, reported increasing their coordination with one another to varying degrees. In particular, they collaborated more on feeding operations and information sharing and made logistical and communications improvements to prevent future problems, according to organization officials. With regard to mass care services, officials from the national offices of the Red Cross, The Salvation Army, and the Southern Baptist Convention—the three mass care providers in our review—reported increasing their collaboration on delivering mass feeding services. During Katrina, mass care services were duplicated in some locations and lacking in others, partly because voluntary organizations were unable to communicate and coordinate effectively. One reason for this confusion, according to the Southern Baptist Convention, was that many locally based volunteers were unaware that the national offices of the Red Cross and the Southern Baptist Convention had a mutual aid agreement to work with each other on feeding operations and as a result did not coordinate effectively. Since Katrina, the Southern Baptist Convention and the Red Cross have developed a plan to cross-train their kitchen volunteers and combine their core curricula for kitchen training. Similarly, The Salvation Army and the Southern Baptist Convention—which also collaborate on mass feeding services—created a joint training module that cross-trains Southern Baptist Convention volunteers to work in Salvation Army canteens and large Salvation Army mobile kitchens. The two organizations also agreed to continue liaison development. In addition, the voluntary organizations in our study told us that they shared case management information on the services they provide to disaster survivors through the Coordinated Assistance Network (CAN), a partnership among several national disaster relief nonprofit organizations. After September 11, CAN developed a Web-based case management database system that allows participating organizations to reduce duplication of benefits by sharing data about clients and resources with each other following disasters. This system was used in Katrina and subsequent disasters. The Red Cross, The Salvation Army, and the United Way were among the seven original partners that developed and implemented CAN. According to officials from the Red Cross’s national headquarters office, CAN has served as a tool for improving coordination and maintaining consistency across organizations and has also fostered collaboration at the national level among organization executives. An official from Catholic Charities USA told us that the organization has seen a reduction in the duplication of services to clients since it began participating in CAN. Two of the local areas we visited—New York City and Washington, D.C.—participated in CAN, and officials from some local voluntary organizations and VOADs in these two cities said they participate in it. In New York City, Red Cross officials said CAN was used to support the Katrina victims who were evacuated to the area.
Catholic Charities officials told us that following September 11, CAN helped ease the transition between the Red Cross’s initial case management services and longer-term services provided by other organizations. In addition, an official from the local VOAD said using CAN is a best practice for the sector. The three voluntary organizations that provide mass care services have taken steps to improve their supply chains by coordinating more with each other and FEMA to prevent the breakdown in logistics that had occurred during Hurricane Katrina, according to officials we spoke with. In responding to Hurricane Katrina, the Red Cross, FEMA, and others experienced difficulties determining what resources were needed, what was available, and where resources were at any point in time, as we and others reported. Since then, the Red Cross and FEMA’s logistics department have communicated and coordinated more on mass care capacity, such as the inventory and deployment of cots, blankets, and volunteers, according to Red Cross national office officials. Red Cross officials also said that the two organizations’ logistics departments meet regularly and that they are working on a formal agreement and systematically reviewing certain areas, such as sharing information on supplies and warehousing. In addition to the Red Cross, the Southern Baptist Convention and The Salvation Army made changes to improve their supply chain management systems. During Katrina, the Southern Baptist Convention experienced a breakdown in its supply chain that prevented it from replenishing its depleted mobile kitchen stock, according to officials from the organization. While FEMA ultimately helped with supplies, the Southern Baptist Convention has since collaborated with the Red Cross and The Salvation Army to develop a supply chain management system to minimize logistical problems that could interfere with its ability to provide feeding services, according to national office officials from the Southern Baptist Convention. To ensure that disaster staff and volunteers can receive and share information during a disaster, the voluntary organizations in our review told us they had, to varying degrees, strengthened their communications systems since Katrina. Hurricane Katrina destroyed core communications systems throughout the Gulf Coast, leaving emergency responders and citizens without a reliable network needed for coordination. Since then, to prevent potential loss of communication during disasters, the Red Cross increased its stock of disaster response communications equipment and prepositioned emergency communications response vehicles equipped with Global Positioning Systems. According to organization officials, the Red Cross prepositioned communications equipment in 51 cities across the country, with special attention to hurricane-prone areas. The Red Cross also provided some communications equipment to the Southern Baptist Convention for its mobile kitchens and trucks. According to Red Cross national office officials, the organization’s long-term goal for communications is to achieve interoperability among different systems such as landline, cellular, and radio networks.
Furthermore, the Red Cross reported that it can communicate with FEMA and other federal agencies during a disaster through its participation in the national warning system and its use of a high-frequency radio program also used by federal agencies; in contrast, communication with nonfederal organizations is through liaisons in a facility or by e-mail or telephone. In addition to these Red Cross efforts, the Southern Baptist Convention enabled its ham radio operators throughout the country to directly access its national disaster operations center through a licensed radio address, began including a communications officer in each of its incident command teams, and established a standard communications skill set for all of its local affiliates, among other improvements. Local Salvation Army units also reported upgrading their communications systems since Katrina. In Washington, D.C., The Salvation Army began developing an in-house communications system in the event that cellular and satellite communications networks are down, and in Miami, The Salvation Army equipped its canteens with Global Positioning Systems to help disaster relief teams pinpoint locations if street signs are missing due to a disaster. In addition, Catholic Charities in Miami purchased new communications trailers with portable laptop computer stations, Internet access, a generator, and satellite access, according to a Catholic Charities official. Although initial assessments do not yet fully capture the collective capabilities of major voluntary organizations, the evidence suggests that without government and other assistance, a worst-case large-scale disaster would overwhelm voluntary organizations’ current mass care capabilities in the metropolitan areas we visited. The federal government and voluntary organizations have started to identify sheltering and feeding capabilities. However, at this point most existing assessments are locally or regionally based and do not provide a full picture of the nationwide capabilities of these organizations that could augment local capabilities. Furthermore, attempts to develop comprehensive assessments are hindered by the lack of standard terms and measures in the field of mass care. In the four metro areas we visited, the American Red Cross, The Salvation Army, and the Southern Baptist Convention were able to provide information on their local sheltering and feeding resources, and in large-scale disasters their substantial nationwide resources could be brought to bear in an affected area. Nevertheless, the estimated need for sheltering and feeding in a worst-case large-scale disaster—such as a Katrina-level event—would overwhelm these voluntary organizations. We also found, however, that many local and state governments in the areas we visited, as well as the federal government, are planning to use government employees and private sector resources to help address such extensive needs. Red Cross and FEMA officials also told us that in a catastrophic situation, assistance will likely be provided from many sources, including the general public, as well as the private and nonprofit sectors, that is not part of any prepared or planned response. Because the assessment of capabilities among multiple organizations nationwide is an emerging effort—largely post-Katrina—it does not yet allow for a systematic understanding of the mass care capabilities that voluntary organizations can bring to bear to address large-scale disasters in the four metropolitan areas in our review.
Assessments help organizations identify the resources and capabilities they have as well as potential gaps. To assess capabilities in such disasters in any metro area, it is necessary to have information not only on an organization’s local capabilities but also on its regional and nationwide capabilities. Under this scalable approach—which is a cornerstone of the Framework and the Catastrophic Supplement as well—local voluntary organizations generally ramp up their capabilities to respond to large-scale disasters, a process that is shown in figure 4. Voluntary organizations are generally able to handle smaller disasters using locally or regionally based capabilities, but in a large-scale disaster their nationwide capabilities can be brought to bear in an affected area. While our focus in this review is on voluntary organizations’ resources and capabilities, governments at all levels also play a role in addressing mass care needs in large-scale disasters. In anticipation of potential disasters, the federal government and the Red Cross have separately started to assess sheltering and feeding capabilities, but these assessments involve data with different purposes, geographic scopes, and disaster scenarios. Consequently, they do not yet generate detailed information for a comprehensive picture of the capabilities of the voluntary organizations in our review. (See table 5.) FEMA is currently spearheading two initiatives that to some extent address the mass care capabilities of voluntary organizations in our review. FEMA’s Gap Analysis Program, which has so far looked at state capabilities in 21 hurricane-prone states and territories, has begun to take stock of some voluntary organizations’ capabilities. According to FEMA officials, states incorporated sheltering data from organizations with which they have formal agreements. In the four metro areas we visited, however, we found that—unlike the Red Cross—The Salvation Army and the Southern Baptist Convention did not generally have formal agreements with state or local governments. For this reason, it is unlikely that their resources have been included in this first phase, according to FEMA officials. Also, this initial phase of analysis did not assess feeding capabilities outside of those available in shelters, a key facet of mass care for which voluntary organizations have significant resources. Another form of assessment under way through FEMA and the Red Cross is the National Shelter System database, which collects information on shelter facilities and capacities nationwide. The database largely consists of shelters operated by the Red Cross, although states have recently entered new data on non-Red Cross shelters as well. While officials from The Salvation Army and other voluntary organizations told us they have shelters at recreation centers and other sites that are not listed in this database, FEMA officials told us the accuracy of the shelter data is contingent upon states reporting information into the system and updating it frequently. FEMA has offered to have its staff help states include non-Red Cross shelter data in the database and has also provided or facilitated National Shelter System training in 26 states and 3 territories. As of July 2008, shelters operated by the Red Cross account for about 90 percent of the shelters listed, and according to FEMA officials, 47 states and 3 territories have entered non-Red Cross shelter data into the database.
In commenting on the draft report, FEMA noted that in addition to these assessments, the agency is conducting catastrophic planning efforts to help some states develop sheltering plans for responding to certain disaster scenarios. For example, the states involved in planning efforts for the New Madrid earthquake are developing plans to protect and assist their affected populations and identifying ways to augment the resources provided by voluntary organizations and the federal government. Of the voluntary organizations in our review, the Red Cross is the only one that has, to date, undertaken self-assessments of its capabilities. First, its annual readiness assessments of individual local chapters provide an overview of locally based capabilities for disasters of various scales and identify shortfalls in equipment and personnel for each chapter. Second, the Red Cross has conducted comprehensive assessments of its sheltering and feeding capabilities in six high-risk areas of the country as part of its capacity-building initiative for those areas. Focusing on the most likely worst-case catastrophic disaster scenario for each area, this initiative reflects the Red Cross’s primary means of addressing its responsibilities under the federal Catastrophic Supplement. Red Cross officials said that while they incorporated data from The Salvation Army and the Southern Baptist Convention into this assessment, many of their other partner organizations were unable to provide the Red Cross with such information. The Salvation Army and Southern Baptist Convention officials with whom we spoke said they have not yet assessed their organizations’ nationwide feeding capabilities, although they were able to provide us with data on the total number of mobile kitchens and other types of equipment they have across the country. Also underlying the problem of limited data on voluntary organizations is the lack of standard terminology and measures for characterizing mass care resources. For example, voluntary organizations do not uniformly use standard classifications for their mobile kitchens. This makes it difficult to quickly assess total capacity when dozens of mobile kitchens from different organizations arrive at a disaster site or when assessing overall capabilities. While DHS requires all federal departments and agencies to adopt standard descriptions and measures—a process defined in NIMS as resource typing—voluntary organizations are not generally required to inventory their assets according to these standards. Red Cross officials report that their organization does follow these standards, but The Salvation Army and Southern Baptist Convention officials said their organizations currently do not, although the latter has taken steps to do so. Specifically, national Southern Baptist officials said they are working with the Red Cross and The Salvation Army to standardize their mobile kitchen classifications using NIMS resource definitions. We also found indications of change at the local level in California with regard to The Salvation Army. Officials there told us they used NIMS resource typing to categorize the organization’s mobile kitchens in the state and that they have provided these data to California state officials. Meanwhile, FEMA is also working with NVOAD to standardize more ESF-6 service terms, in accordance with its responsibilities under the Post-Katrina Act.
This initiative currently includes terms and definitions for some mass care services such as shelter management and mobile kitchens. However, FEMA officials said it may be several years before additional standard terms and measures are fully integrated into disaster operations. Although systematic assessments of mass care capabilities are limited, it is evident that in large-scale, especially worst-case, catastrophic disasters, the three mass care voluntary organizations would not likely be able to fulfill the need for sheltering and feeding in the four metropolitan areas in our review without government and other assistance, according to voluntary organization officials we interviewed as well as our review of federal and other data. Red Cross officials, as well as some officials from other organizations we visited, generally agreed that they do not have sufficient capabilities to single-handedly meet all of the potential sheltering and feeding needs in some catastrophic disasters. While the mass care resources of these voluntary organizations are substantial, both locally and nationally, our analysis indicates a likely shortage of both personnel and assets. Anticipating such shortages, the voluntary organizations we spoke with are making efforts to train additional personnel. Local, state, and federal government agencies—which play key roles in disaster response—told us that they were planning to use government employees and private sector resources in such disasters in addition to the resources of voluntary organizations. Red Cross and FEMA officials also told us that in a catastrophic situation, assistance will likely be provided from many sources, including the general public, as well as the private and nonprofit sectors, that are not part of any prepared or planned response. Within the past few years, DHS, the Red Cross, and others have developed estimates of the magnitude of mass care services that might be needed to respond to worst-case catastrophic disasters, such as various kinds of terrorist attacks or a hurricane on the scale of Katrina or greater. The estimates vary according to the type, magnitude, and location of such disasters and are necessarily characterized by uncertainties. (See table 6.) Although sheltering resources are substantial, in a worst-case large-scale disaster, the need for sheltering would likely exceed voluntary organizations’ current sheltering capabilities in most metro areas in our study, according to government and Red Cross estimates of needs. Most of the shelters for which data are available are operated by the Red Cross in schools, churches, community centers, and other facilities that meet structural standards, but The Salvation Army and other organizations also operate a small number of sheltering facilities. The Red Cross does not own these shelter facilities, but it either manages the shelters with its own personnel and supplies under agreement with the owners or works with its partner organizations and others to help them manage shelters. At the national level, the Red Cross has identified 50,000 potential shelter facilities across the country, as noted in the National Shelter System database. In addition, the Red Cross has enough sheltering supplies, such as cots and blankets, to support up to 500,000 people in shelters nationwide.
However, while disaster victims can be evacuated to shelters across the country if necessary, as happened after Katrina, Red Cross officials told us they prefer to shelter people locally. In the four metro areas we visited, the Red Cross has identified shelter facilities and their maximum or potential capacities, as shown in table 7. Despite local and nationally available resources, the kinds of large-scale disasters for which estimates of need exist would greatly tax and exceed the Red Cross’s ability to provide sheltering. For example, for a major earthquake in a metropolitan area, DHS estimates that 313,000 people would need shelter, but in Los Angeles—a city prone to earthquakes—Red Cross officials told us they are capable of sheltering 84,000 people locally under optimal conditions. The Red Cross’s own analyses of other types of worst-case disaster scenarios also identified shortages in sheltering capacity in New York and Washington, D.C. For example, for a nuclear terrorist attack in Washington, D.C., the Red Cross estimates that 150,000 people would need sheltering in the National Capital Region and identified a gap of over 100,000 shelter spaces after accounting for existing capabilities. The ability to build or strengthen sheltering capabilities depends on several elements, including the availability of trained personnel and supplies, the condition of shelter facilities, and the particular disaster scenario and location, among other things. Chief among these constraints, according to national and local Red Cross officials, is the shortage of trained volunteers. Red Cross officials said that, as of May 2008, there were 17,000 volunteers and staff in the Red Cross’s national disaster services human resources program who had received extensive training in sheltering, as well as an additional 16,000 Red Cross workers trained in mass care who can be deployed across the country. However, local chapters are still expected to be self-sufficient for up to 5 days after a large-scale disaster occurs, while staff and volunteers are being mobilized nationwide. According to the Red Cross’s annual chapter assessments, personnel shortages limit the ability of all four chapters we visited to manage the local response beyond certain levels. In New York City, Red Cross officials noted that the organization has identified enough shelters to optimally accommodate more than 300,000 people but that it has only enough personnel locally to simultaneously operate 25 shelters, for a total sheltering capability of 12,500 people. The Red Cross is working with its local chapters to develop action plans to address personnel shortages. For example, in New York, the Red Cross has set a goal of recruiting 10,000 volunteers to operate shelters—in addition to the 2,000 it had as of December 2007—and plans to attract 850 new volunteers each quarter. In addition, supply chain and warehousing challenges affect the ability to maximize sheltering capabilities. According to Red Cross officials, it is not necessary to maintain large inventories of some supplies, such as blankets, if they can be quickly and easily purchased. However, obtaining other supplies such as cots requires a long lead time since they may need to be shipped from as far away as China, a fact that can be particularly problematic in no-notice events such as major earthquakes.
While purchasing supplies as needed can reduce warehousing costs, this approach can also be affected by potential disruptions in the global supply chain, according to officials we spoke with. In DHS’s Catastrophic Incident Supplement, an underlying assumption is that substantial numbers of trained mass care specialists and managers will be required for an extended period of time to sustain mass care sheltering and feeding activities after a catastrophic disaster. Recognizing the need for more trained personnel to staff existing shelters, state and local governments in the four metropolitan areas we visited told us they are planning to train government employees and use them to staff shelters in such large-scale disasters. For example, in New York City, the Office of Emergency Management is preparing to use trained city government employees and supplies to provide basic sheltering care for up to 600,000 residents in evacuation shelters. The city-run evacuation shelters would be located at schools for the first few days before and after a catastrophic hurricane. After this initial emergency plan is implemented, the city expects the Red Cross to step in and provide more comprehensive sheltering services to people who cannot return to their homes. As Red Cross officials told us, the New York City government is the only local organization with the potential manpower to staff all the available shelters, but the Red Cross will also provide additional personnel to help operate some of the city’s evacuation shelters and special medical needs shelters. As of November 2007, 22,000 New York City employees had received shelter training through a local university, with some additional training from the Red Cross. Similarly, in Los Angeles, approximately 1,400 county employees had been trained in shelter management as of January 2008, and the Red Cross has set a goal to train 60,000 of the county’s 90,000 employees. In addition, state governments have resources, equipment, and trained personnel that can be mobilized to provide mass care, according to state and FEMA officials. States can also request additional resources from neighboring states through their mutual aid agreements. According to Red Cross and FEMA officials, in a catastrophic disaster, sheltering assistance would likely be provided from many sources, such as churches and other community organizations, as occurred in the aftermath of the Gulf Coast hurricanes, and they also noted that such assistance was not part of any prepared or planned response. Although voluntary organizations’ feeding resources are also substantial, the feeding needs in a worst-case large-scale disaster would likely exceed the voluntary organizations’ current feeding capabilities for most metro areas in our review, according to government and Red Cross estimates of needs. In their feeding operations, voluntary organizations make use of mobile kitchens or canteens to offer hot meals and sandwiches, prepackaged meals known as meals-ready-to-eat (MREs), and hot and cold meals prepared by contracted private vendors. The Red Cross, The Salvation Army, and the Southern Baptist Convention have locally based resources for feeding disaster victims in the four metro areas we visited. For example, The Salvation Army and the Southern Baptist Convention have mobile kitchens stationed in close proximity to each of these areas. Some of these mobile kitchens are capable of producing up to 25,000 meals per day.
The Red Cross also has feeding resources in these metro areas, including prepackaged meals, vehicles equipped to deliver food, and contracts with local vendors to prepare meals. In addition, by mobilizing nationwide resources, such as mobile kitchens and prepackaged meals, the Red Cross reports that it currently has the capability, together with the Southern Baptist Convention, to provide about 1 million meals per day—about the maximum number of meals served per day during Katrina. Across the country, The Salvation Army has 697 mobile kitchens and other specialized vehicles and the Southern Baptist Convention has 117 mobile kitchens that can be dispatched to disaster sites, according to organization officials. Furthermore, Red Cross officials said they have 6 million prepackaged meals stockpiled in warehouses across the country that can be quickly distributed in the first few days after a disaster, before mobile kitchens are fully deployed to the affected area. They also said that they can tap into additional food sources, such as catering contracts with food service providers, during prolonged response efforts. Despite these substantial resources nationwide, in a worst-case large-scale disaster, feeding needs would still greatly exceed the current capabilities of these voluntary organizations, according to government and Red Cross estimates of needs under different scenarios. For example, DHS estimates that feeding victims of a major earthquake would require approximately 1.5 million meals per day, about 500,000 meals per day more than the 1 million meals per day these organizations can currently provide. According to state government estimates, the gap is even larger for other types of disaster scenarios. For example, according to Florida state estimates, a Category IV hurricane could produce the need for 3 million meals per day, which is considerably greater than the 1 million meals per day that the Red Cross can provide. In addition, a nuclear terrorist attack in Washington, D.C., would require 300,000 meals per day more than the Red Cross's current capabilities allow, according to the Red Cross's internal assessments. The ability to build or strengthen feeding capabilities depends on the availability of trained personnel, equipment, and supplies. As with sheltering, some voluntary organization officials told us that the key constraint is the limited availability of trained personnel. Feeding is a labor-intensive process. For example, Southern Baptist Convention officials said it takes a team of 50 trained people to operate a large mobile kitchen, and an additional 50 people are needed every 4 days because teams are rotated in and out of disaster sites. Southern Baptist Convention officials said that although they have 75,000 trained volunteers in their organization, there are still not enough trained volunteers, especially experienced team leaders. They said the shortage of experienced team leaders is particularly challenging because mobile kitchens cannot be deployed without a team leader. The voluntary organizations are addressing these personnel shortages by promoting training programs for new staff and volunteers and by using additional unaffiliated, untrained volunteers who join during response efforts. For example, according to The Salvation Army, its national disaster training program has trained more than 16,000 personnel throughout the United States since 2005.
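The feeding shortfalls described above follow directly from the cited daily figures. The minimal sketch below is illustrative only, using the rounded numbers in this testimony rather than any official data set, and simply works through the arithmetic.

```python
# Illustrative only: rounded daily-meal figures cited in this testimony.
combined_capability = 1_000_000  # meals/day reported by the Red Cross and Southern Baptist Convention

daily_need_estimates = {
    "Major metropolitan earthquake (DHS estimate)": 1_500_000,
    "Florida Category IV hurricane (state estimate)": 3_000_000,
}

for scenario, need in daily_need_estimates.items():
    shortfall = max(need - combined_capability, 0)
    print(f"{scenario}: need {need:,} meals/day, shortfall about {shortfall:,} meals/day")
```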
In addition, supply disruptions are a major concern in large-scale disasters because mobile kitchens and other feeding units need to be restocked with food and supplies in order to continue providing meals. Red Cross officials told us they are in the process of expanding their food supply by contracting with national vendors to provide additional meals during disasters. In addition, as previously mentioned, the Southern Baptist Convention faced problems resupplying its mobile kitchens during the response to Hurricane Katrina and has since taken steps to develop a supply chain management system with the Red Cross and The Salvation Army to minimize future logistical problems. In the four metro areas we visited, some state and local government officials we met with told us they are planning to fill these gaps in feeding services by contracting with private sector providers. In Florida, for example, the state plans to use private sector contractors for this purpose in preparation for a catastrophic hurricane. A Florida state official said obtaining and distributing the estimated 3 million meals per day that would be needed is a huge logistical challenge that would require the state to use 20 to 40 private vendors. In Washington, D.C., emergency management officials said they are also establishing open contracts with private sector providers for additional prepackaged meals and other food supplies. As a result of FEMA's new responsibilities under the Post-Katrina Act and its new role as the primary agency for mass care under the National Framework, FEMA officials told us that the agency is working to identify additional resources for situations in which the mass care capabilities of government and voluntary organizations are exceeded. FEMA officials said that FEMA has developed contracts with private companies for mass care resources for situations in which the needs exceed federal capabilities. After Katrina, FEMA made four noncompetitive awards to companies for housing services. Since then, contracts for housing services have been let through a competitive process and broadened in scope so that, if a disaster struck now, they could also include facility assessment for shelters, facility rehabilitation (including making facilities accessible), feeding, security, and staffing of shelters. According to the FEMA official in charge of these contracts, the contracts give the federal government the option of purchasing the resources it needs when responding to disasters. FEMA officials said, however, that they prefer using federal resources whenever possible because private sector contract services are more expensive than federal resources. FEMA also has a mass care unit that is responsible for coordinating ESF-6 partner agency activities and assessing state and local government shelter shortfalls. According to FEMA, the members of the mass care unit based in Washington, D.C., are subject matter experts trained in various mass care operations, including sheltering. Mass care teams have been deployed to assist with sheltering operations during disasters such as the 2007 California wildfires and the 2008 Iowa floods. FEMA regional offices have also begun to hire staff dedicated to mass care. Shortages of trained personnel, difficulty identifying and dedicating financial resources for preparedness activities, and the need to strengthen connections with government agencies continue to challenge the voluntary organizations in our study.
Voluntary organizations in our review continue to face shortages of trained staff to work on preparing for future disasters, among other things, and of volunteers to help provide mass care services, even though the voluntary organizations and government agencies we met with have made efforts to train additional personnel. Identifying and dedicating financial resources for disaster planning and preparedness become increasingly difficult as voluntary organizations also strive to meet competing demands. In addition, the level of involvement and interaction of voluntary organizations in disaster planning and coordination with government agencies is an ongoing challenge, even for the American Red Cross, which has recently changed the way it works with FEMA and state governments. The most commonly cited concern that voluntary organizations have about their capabilities is the shortage of trained staff or volunteers, particularly for disaster planning and preparedness, according to voluntary organization officials. State and local governments are primarily responsible for preparing their communities to manage disasters locally—through planning and coordination with other government agencies, voluntary organizations, and the private sector. However, voluntary organization officials we met with told us it was difficult for them to devote staff to disaster planning, preparedness activities, and coordination. At the national level, the Southern Baptist Convention and Catholic Charities USA maintain small staffs of one or two people who work on disaster preparedness and coordination, which they said makes preparedness and coordination for large-scale disasters challenging. At the local level, we also heard that staff who were responsible for disaster planning for their organization had multiple roles and responsibilities, including coordinating with others involved in disaster response as well as daily responsibilities in other areas. This was particularly an issue for the faith-based organizations, such as The Salvation Army and the Southern Baptist Convention, for whom disaster response, while important, is generally ancillary to their primary mission. For example, according to an official from the Florida Southern Baptist Convention, the Florida state convention has a designated staff member focused solely on disaster relief and recovery, but other state conventions expect disaster staff to split their time among other responsibilities, such as managing the men's ministry, and those staff generally do not have the time or ability to interact with the state emergency management agency. Similarly, a Salvation Army official in Miami commented that The Salvation Army could do more if it had a dedicated liaison employee to help with its local government responsibilities, including coordinating the provision of mass care services, which the organization provides under an agreement with the local government. According to a national official from Catholic Charities USA, local Catholic Charities that provide disaster services usually have one employee to handle the disaster training and response operation, in addition to other responsibilities. While it would be ideal for all local Catholic Charities to have at least two or three employees trained in disaster response, she said, the organization currently does not have the resources for this training.
In New York and Los Angeles, officials from Catholic Charities confirmed that the lack of personnel capable of responding to disasters is an ongoing challenge for their organization. These shortages in trained staff affected the ability of some local voluntary organizations and VOADs we met with to develop and update business continuity and disaster response plans, according to officials from these organizations. In Los Angeles, an official from Catholic Charities told us that the organization does not have a disaster or continuity-of-operations plan tailored to its needs because it does not have dedicated disaster staff to develop such plans. Voluntary organization officials in Miami emphasized the importance of having such continuity plans because, after Hurricanes Katrina and Wilma struck Florida in 2005, most of the local voluntary organizations in the area were unable to provide services due to damage from the storms. In addition, organizations and VOADs we visited said that they struggle to update their disaster response plans. For instance, in Los Angeles, an official from the local VOAD told us that the organization's disaster response plan needed to be updated, but that the VOAD has not addressed this need because of staffing limitations. This official also told us the VOAD was planning to hire two full-time staff sometime in 2008 using federal pandemic influenza funds received through the county public health department. In addition, as mentioned earlier, voluntary organization officials both nationally and locally told us that they face a shortage of trained volunteers, which limits their ability to provide sheltering and feeding in large-scale, and especially catastrophic, disasters. This remains a concern despite the efforts of voluntary organizations and government agencies to build a cadre of trained personnel. Identifying and dedicating funding for disaster preparedness is a challenge for voluntary organizations in light of competing priorities, such as meeting the immediate needs of disaster survivors. Officials from voluntary organizations in our review told us that they typically raised funds immediately following a disaster to directly provide services, rather than for disaster preparedness or, for that matter, longer-term recovery efforts. Although the Red Cross raised more than $2 billion to shelter, feed, and provide aid to disaster survivors following Katrina, it recently acknowledged that it is less realistic to expect public donations to fund its nationwide disaster capacity-building initiatives. Similarly, the biggest challenge for Catholic Charities USA is identifying funds for essential disaster training, a key aspect of preparedness, according to an official. At the local level, an official from Catholic Charities in New York also noted that incoming donations tend to focus on funding the initial disaster response. As we previously reported, vague language and narrowly focused definitions used by some voluntary organizations in their appeals for public donations following the September 11 attacks contributed to debates over how funds should be distributed, particularly whether to provide immediate cash assistance to survivors or services to meet short- and long-term needs.
An indication of this continuing challenge is that officials from Catholic Charities in Washington, D.C., and New York reported that they are still working with September 11 disaster victims and communities, and that they struggle to raise funds for long-term recovery work in general. While federal grant programs could provide another potential source of preparedness funding beyond public donations, local voluntary organization officials told us it was difficult to secure funding through these programs without support from the local government. Local voluntary organization officials we met with said that federal funding for disaster preparedness, such as through the Urban Areas Security Initiative grant program, could be useful in helping their organizations strengthen their capabilities. For example, such grants could be used to coordinate preparedness activities with FEMA and other disaster responders, to help voluntary organizations develop continuity of operations plans, and to train staff and volunteers. However, although voluntary organizations are among those that play a role in the National Response Framework, especially in relation to ESF-6, these organizations received little to no federal funding through programs such as the Homeland Security Grant Program, according to some local voluntary organization and VOAD officials we visited. Under most of these grants, states or local governments are the grant recipients, and other organizations such as police and fire departments can receive funds through the state or local governments. Of the local voluntary organizations and VOADs in our study, two Red Cross chapters received DHS funding in recent years, according to the Red Cross. In Los Angeles, Red Cross officials told us that the chapter had to be sponsored and supported by the local government in order to receive DHS funding for shelter equipment and supplies. While the director of FEMA's grant office told us that FEMA considered voluntary organizations to be among the eligible subgrantees for several preparedness grants under the Homeland Security Grant Program, the grant guidance does not state this explicitly. According to fiscal year 2008 grant guidance, a state-designated administering agency is the only entity eligible to formally apply for these DHS funds. The state agency is required to obligate funds to local units of government and other designated recipients, but the grant guidance does not define what it means by "other designated recipient." In addition, FEMA strongly encourages the timely obligation of funds from local units of government to other subgrantees, as appropriate, but possible subgrantees are not identified. State agencies have considerable latitude in determining how to spend funds received through the grant program and which organizations to provide funds to, according to the FEMA grant director. However, for fiscal year 2005, approximately two-thirds of Homeland Security Grant Program funds were dedicated to equipment, such as personal protective gear, chemical and biological detection kits, and satellite phones, while 18 percent were dedicated to planning activities, according to DHS. An official from FEMA's grants office told us that following the September 11 attacks, the grant program focused on prevention and protection from terrorism incidents, but it has evolved since Katrina.
According to this official, the fiscal year 2008 grant guidance encourages states to work with voluntary organizations, particularly for evacuations and catastrophic preparedness. Furthermore, this official said it is possible that DHS grant funding has not yet trickled down to local voluntary organizations. The tendency of DHS funding programs to focus on equipment for prevention and protection rather than on preparedness and planning activities could also shift as states and localities put equipment and systems into place and turn to other aspects of preparedness. Local VOADs can play a key role in disaster preparation and response through their interactions with local governments' emergency management agencies, although the local VOADs in the areas we visited varied in their ability and approach to working with local governments on disasters. Like NVOAD at the national level, local VOADs are not service providers; instead, they play an important role in coordinating response and facilitating relationship building in the voluntary sector at the local level, according to government officials. Generally, most of the voluntary organizations in the locations we visited were members of their local VOADs. Several local government emergency managers told us they relied on the local VOADs as a focal point to help them coordinate with many voluntary organizations during disasters. Some local VOADs in our review met regularly and were closely connected to the local governmental emergency management agency, including having seats at the local emergency operations centers. More specifically, the Red Cross was a member of the local VOADs in the areas we visited. It also directly coordinated with government agencies during a disaster and had a seat at the local emergency operations center in all four locations. In New York and Miami, Salvation Army units were VOAD members and had seats as well. Other VOADs were less active and experienced and were not as closely linked to governmental response. In Washington, D.C., the local VOAD has struggled since its inception to maintain a network and convene regularly, according to the current VOAD chair. In Miami, a local VOAD member told us that the VOAD had little experience with large-scale disasters because it re-formed after Hurricane Katrina and the area has not experienced major hurricanes since then. In addition, one of the local VOADs was tied to a local ESF-6 mass care operating unit, while others were more closely connected to an emergency function that managed unaffiliated volunteers and donations. The local VOAD in Los Angeles worked with the local government on ESF-6 issues, while the VOADs in Miami and Washington, D.C., coordinated with government agencies by managing volunteers and donations during disasters. Currently, NVOAD has few resources to support state and local VOADs. NVOAD's executive director told us that NVOAD plans to provide state and local VOADs with more support using Web-based tools and guidance, but these plans are hindered by a lack of funding. As we recently reported, NVOAD is limited in its ability to support its national voluntary organization members and also lacks the staff or resources to support its affiliated state and local VOADs. Because of these limitations, we recommended that NVOAD assess members' information needs, improve its communication strategies after disasters, and consider approaches for increasing staff support after disasters.
NVOAD agreed with this recommendation and reported that the organization is looking to develop communications systems that take better advantage of current technologies. Since our previous report was issued, NVOAD has expanded its staff from two to four members, some of whom are working to build the collective capacity of state and local VOADs and to provide training and technical assistance to state VOADs. At the federal level, although FEMA plays a central role in coordinating with voluntary organizations on mass care and other human services, this earlier report also noted that staffing limitations made it difficult for the agency to coordinate activities with the voluntary sector. At the time of our report, FEMA had only one full-time employee in each region—a voluntary agency liaison—to coordinate activities between voluntary organizations and the agency, and these liaisons did not have training to assist them in fully preparing for their duties. In light of FEMA's responsibilities for coordinating the activities of voluntary organizations in disasters under the National Framework, we recommended that FEMA take additional actions to enhance the capabilities of these liaisons so that they can fulfill this role. FEMA agreed with our recommendation; however, it is too early to assess the impact of any changes to enhance liaisons' capabilities. Last, because of its current budget deficit, the Red Cross faces new challenges in fulfilling its ESF-6 role as a support agency. The Red Cross noted that it is working closely with its government partners' leadership to manage the transition following its staffing reductions at FEMA's regional offices and elsewhere and the subsequent realignment of staff responsibilities. The Red Cross reported that it will monitor the impact of these changes and make adjustments as needed. At the same time, as was previously mentioned, the Red Cross has also requested $10 million in federal funding to cover its staffing and other responsibilities under ESF-6. According to FEMA officials, FEMA funded 10 regional positions to replace the Red Cross mass care planner positions that were terminated. FEMA also said that while it is too early to assess the long-term impact of these Red Cross staffing changes, FEMA was experiencing some hindrances to effective communication and limits on the Red Cross's participation in planning at FEMA headquarters, regional offices, and field offices. Regarding the Red Cross strategy of relying on shared resources and volunteers instead of full-time dedicated staff in FEMA regional offices, FEMA officials noted that dedicated staff have proven to be a more reliable basis for an ongoing relationship and interaction between the agencies. They expressed concern that the lack of dedicated staff, frequent rotations, and the inconsistent skill levels of volunteers—used instead of full-time Red Cross staff—will hamper communications and may impede coordination efforts. These concerns are similar to the difficulties Red Cross ESF-6 staff faced during Katrina, as we noted in a previous review. Because the American Red Cross and other major voluntary organizations play such a vital role in providing mass care services during large-scale disasters, the importance of having a realistic understanding of their capabilities cannot be overstated. FEMA has taken initial steps by having states assess their own capabilities and gaps in several critical areas and has completed an initial phase of this analysis.
However, this broad assessment effort has yet to fully include the sheltering capabilities of many voluntary organizations and has not yet begun to address feeding capabilities outside of shelters. We understand that when a large-scale disaster strikes, some portion of mass care services will be provided by local voluntary organizations that did not specifically plan or prepare to do so, and that their capabilities cannot be assessed in advance. However, without more comprehensive data from voluntary sector organizations that expect to play a role, the federal government will have an incomplete picture of the mass care resources it could draw upon as well as of the gaps that it must be prepared to fill in large-scale and catastrophic disasters. Unless national assessments more fully capture the mass care capabilities of key providers, questions will remain about the nation's ability to shelter and feed survivors, especially in another disaster on the scale of Katrina. To the extent that local, state, and federal governments rely on voluntary organizations to step in and care for massive numbers of affected people, the challenges these organizations face in preparing for and responding to rare but potentially catastrophic disasters are of national concern. Reliant on volunteers and donations, many of the organizations we visited said that federal grant funding could help them better prepare for and build capacity for large-scale disasters, because they struggle to raise private donations for this purpose. Federal grants, while finite, are available to assist in capacity building, and voluntary organizations can be among those who receive federal grant funds from states and localities, according to FEMA officials. However, most of the voluntary organizations in our review have not received such funding, although they told us it would be beneficial. While there are many competing demands and priorities for such funds, clearer grant guidance could at least ensure that those making grant decisions consider voluntary organizations and VOADs among those eligible to be subgrantees under these grants. Unless voluntary organizations are able to strengthen their capabilities and address planning and coordination challenges, the nation as a whole will likely be less prepared to provide mass care services during a large-scale disaster. An additional area of concern is the expected role of the Red Cross in a catastrophic disaster of a scale that invokes the federal government's Catastrophic Incident Supplement. As the experience with responding to Katrina showed, it is important to agree on roles and responsibilities, as well as to have a clear understanding of operating procedures, in the event of a catastrophic disaster. However, FEMA officials said they have not yet revised or updated the Supplement, as required under the Post-Katrina Act, with the result that the mass care section of the Supplement still reflects the Red Cross's previous role as primary agency for mass care and not its current role as a support agency under ESF-6. While both FEMA and the Red Cross told us they expected the Red Cross to play a support agency role in a catastrophic event, consistent with ESF-6, unless this understanding is confirmed in writing and incorporated into federal planning documents for responding to a catastrophic event, the nature of that understanding cannot be transparent to the many parties involved in supporting mass care.
Finally, while it is too early to assess the impact of the changes in how the American Red Cross expects to coordinate with FEMA in fulfilling its responsibilities under ESF-6, its capacity to coordinate with FEMA is critical to the nation's mass care response in large-scale disasters. As a result, the continued implementation, evolution, and effect of these changes bear watching. In our recently released report (GAO-08-823), we made three recommendations to FEMA. First, to help ensure that the Catastrophic Incident Supplement reflects the American Red Cross's current role under ESF-6 as a support agency for mass care, we recommended that the Secretary of Homeland Security direct the Administrator of FEMA to establish a time frame for updating the mass care section of the Supplement so that it is consistent with the changes in ESF-6 under the new Framework and no longer requires the Red Cross to direct federal government resources. In the meantime, FEMA should develop an interim agreement with the Red Cross to document the understanding they have on the Red Cross's role and responsibilities in a catastrophic event. Second, to more fully capture the disaster capabilities of major voluntary organizations that provide mass care services, we recommended that the Secretary of Homeland Security direct the Administrator of FEMA to take steps to better incorporate these organizations' capabilities into assessments of mass care capabilities, such as FEMA's GAP Analysis, and to broaden its assessment to include feeding capabilities outside of shelters. Such steps might include soliciting the input of voluntary organizations, such as through NVOAD; integrating voluntary organization data on capabilities into FEMA's analyses; and encouraging state governments to include voluntary mass care organization data in their studies. Finally, to help these voluntary organizations better prepare for providing mass care in major and catastrophic disasters, we recommended that the Secretary of Homeland Security direct the Administrator of FEMA to clarify the Homeland Security Grant Program funding guidance for states so it is clear that voluntary organizations and local VOADs are among those eligible to be subgrantees under the program. In commenting on a draft of GAO-08-823, FEMA agreed with our recommendations on establishing a time frame for updating the role of the American Red Cross in the Catastrophic Incident Supplement and clarifying federal guidance to states on potential recipients of preparedness grants. However, FEMA criticized certain aspects of our methodology, asserting that the draft did not address the role of states in coordinating mass care. As stated in our objectives, the focus of the report, by design, was on voluntary organizations' roles and capabilities in disaster response. While focusing on voluntary organizations, the report also acknowledges the disaster response roles and responsibilities of governments—local, state, and federal—under the National Response Framework. Accordingly, we interviewed local, state, and federal government emergency management officials, as described in more detail in our report's methodology. FEMA also raised concerns about whether the voluntary organizations discussed in our report provided a comprehensive picture of mass care capabilities. However, our report does not attempt to address all the services and capabilities of the voluntary sector but acknowledges that other voluntary organizations also provide mass care and other services.
It also includes the caveat that we do not attempt to assess the total disaster response capabilities in any single location we visited. FEMA also disagreed with our recommendation to better incorporate voluntary organizations' capabilities in assessments because the government cannot command and control private sector resources. However, FEMA is required under the Post-Katrina Act to establish a comprehensive system for assessing the nation's prevention capabilities and overall preparedness. A comprehensive assessment of the nation's capabilities should account as fully as possible for voluntary organizations' capabilities in mass care. Assessing capabilities more fully does not require controlling these resources but rather cooperatively obtaining and sharing information. Without such an assessment, the government will have an incomplete picture of the mass care resources it can draw upon in large-scale disasters. In its comments, FEMA also asserted that our report incorrectly assumes that if funding were made available, it would enable voluntary organizations to shelter and care for people in catastrophic events. However, we discuss potential federal funding in relation to voluntary organizations' preparedness and planning activities, not direct services. As noted in the report, such funding could be used to strengthen voluntary organizations' disaster preparedness, such as coordinating with FEMA, training personnel, and developing continuity of operations plans. FEMA also provided some technical clarifications, which we incorporated as appropriate. The American Red Cross, in comments on a draft of GAO-08-823, further explained its role in providing post-evacuation sheltering under New York City's coastal storm plan and provided technical clarifications. We added information as appropriate to further clarify the American Red Cross's role in providing sheltering in New York City. We also provided excerpts of the draft report, as appropriate, to The Salvation Army, the Southern Baptist Convention, Catholic Charities USA, and NVOAD. The American Red Cross, The Salvation Army, and NVOAD all provided us with technical comments, which we incorporated as appropriate. Madam Chair, this concludes my remarks. I would be happy to answer any questions that you or other members of the subcommittee may have. For further information, please contact Cynthia M. Fagnoni, Managing Director, at (202) 512-7215 or fagnonic@gao.gov. Also contributing to this statement were Gale C. Harris, Deborah A. Signer, and William W. Colvin. We designed our study to provide information on (1) the roles of major national voluntary organizations in providing mass care and other human services in response to large-scale disasters requiring federal assistance, (2) the steps these organizations have taken since Katrina to strengthen their capacity for service delivery, (3) what is known about these organizations' current capabilities for responding to mass care needs in such a large-scale disaster, and (4) the challenges that remain for voluntary organizations in preparing for such large-scale disasters. We focused our review on the following five major voluntary organizations based on their contributions during Hurricane Katrina and congressional interest: the American Red Cross, The Salvation Army, the Southern Baptist Convention, Catholic Charities USA, and the United Way of America.
Since the United Way of America does not provide direct services in disasters, we did not include it in our analysis of recent improvements to service delivery, response capabilities, and remaining challenges. For our review of voluntary organizations' response capabilities, we limited our focus to the three organizations in our study that provide mass care services: the Red Cross, The Salvation Army, and the Southern Baptist Convention. To obtain information for all of the objectives, we used several methodologies: we reviewed federal and voluntary organization documents; reviewed relevant laws; interviewed local, state, and federal government and voluntary agency officials; conducted site visits to four selected metropolitan areas; and collected data on the voluntary organizations' capabilities. We reviewed governmental and voluntary organization documents to obtain information on the role of voluntary organizations, recent improvements to service delivery, response capabilities, and remaining challenges. To obtain an understanding of the federal disaster management framework, we reviewed key documents, such as the 2008 National Response Framework; the Emergency Support Function 6—Mass Care, Emergency Assistance, Housing, and Human Services Annex (ESF-6); the 2006 Catastrophic Incident Supplement; and the 2007 National Preparedness Guidelines, which collectively describe the federal coordination of mass care and other human services. We also reviewed pertinent laws, including the Post-Katrina Emergency Management Reform Act of 2006. In addition, we reviewed documents for each of the five voluntary organizations in our review, which describe their roles in disasters and explain their organizational response structures. These documents included mission statements, disaster response plans, and statements of understanding with government agencies and other voluntary organizations. We also reviewed key reports written by federal agencies, Congress, voluntary organizations, policy institutes, and GAO to identify lessons learned from the response to Hurricane Katrina and steps voluntary organizations have taken since then to improve service delivery. We interviewed federal government and national voluntary organization officials to obtain information on the role of voluntary organizations, recent improvements to service delivery, response capabilities, and remaining challenges. At the federal level, we interviewed officials from the Federal Emergency Management Agency (FEMA) in the ESF-6 Mass Care Unit, the FEMA Grants Office, and the Disaster Operations Directorate. We also interviewed the executive director of the National Voluntary Organizations Active in Disaster (NVOAD). We interviewed these officials regarding the role of the voluntary organizations in disaster response, grants and funding offered to voluntary organizations, voluntary organization and government logistics in disasters, assessments of capabilities, and the types of interactions each of them has with the organizations in our review. We also interviewed national voluntary organization officials from the five organizations in our review about the roles of their organizations in disaster response, improvements the organizations had made to coordination and service delivery since Hurricane Katrina, their organizations' capabilities to respond to disasters, and what remaining challenges exist for the organizations in disaster response.
We visited four metropolitan areas—Washington, D.C.; New York, New York; Miami, Florida; and Los Angeles, California—to review the roles, response structures, improvements to service delivery, response capabilities, and challenges that remain for the selected voluntary organizations in these local areas. We selected these metropolitan areas based on their recent experiences with disaster, such as September 11; their potential risk for large-scale disasters; and the size of their allotments through the federal Urban Areas Security Initiative grant program. The metropolitan areas that we selected also represent four of the six urban areas of the country considered most at risk for terrorism under the 2007 Urban Areas Security Initiative. During our visits to the four metropolitan areas, we interviewed officials from the five voluntary organizations, local and state government emergency management agency officials, the heads of the local Voluntary Organizations Active in Disaster (VOADs), and FEMA's regionally based liaisons to the voluntary sector, known as voluntary agency liaisons (VALs). During our interviews, we asked about the roles and response structures of voluntary organizations in disaster response, improvements the organizations had made to coordination and service delivery since Hurricane Katrina, the organizations' capabilities to respond to disasters, and what challenges exist for the organizations in disaster response. To review voluntary organizations' sheltering and feeding capabilities, we collected data through interviews and written responses from the three organizations in our study that provide mass care: the Red Cross, The Salvation Army, and the Southern Baptist Convention. By capabilities, we mean the means to accomplish a mission or function under specified conditions to target levels of performance, as defined in the federal government's National Preparedness Guidelines. We collected data on both their nationwide capabilities and their locally based capabilities in each of the four metropolitan areas we visited. To obtain capabilities data in a uniform manner, we requested written responses to questions about sheltering and feeding capabilities from these organizations in the localities we visited, and in many of these responses, voluntary organizations described how they derived their data. For example, to collect data on feeding capabilities, we asked voluntary organization officials how many mobile kitchens they had and how many meals per day they were capable of providing. To assess the reliability of the capability data provided by the voluntary organizations, we reviewed relevant documents and interviewed officials knowledgeable about the data. However, we did not directly test the reliability of these data because the gaps between capabilities and estimated needs were so large that greater precision would not change this underlying finding. It was also not within the scope of our work to review the voluntary organizations' systems of internal controls for data on their resources and capabilities. To identify potential needs for mass care services, we used available estimates for catastrophic disaster scenarios in each of the selected metropolitan areas: Washington, D.C.—terrorism; New York, New York—hurricane; Miami, Florida—hurricane; and Los Angeles, California—earthquake.
We reviewed federal, state, and Red Cross estimates of sheltering and feeding needs resulting from these potential catastrophic disasters:
Federal catastrophic estimates—We reviewed the earthquake estimates from the Target Capabilities List that were developed by the Department of Homeland Security (DHS) after an in-depth analysis of the Major Earthquake scenario in the National Planning Scenarios. The National Planning Scenarios were developed by the Homeland Security Council in partnership with the Department of Homeland Security, other federal departments and agencies, and state and local homeland security agencies. The scenario assumes that a 7.2 magnitude earthquake, followed by an 8.0 magnitude earthquake, occurs along a fault zone in a major metropolitan area with a population of approximately 10 million people, which is roughly the population of Los Angeles County.
State catastrophic estimates—We reviewed catastrophic hurricane estimates from the Florida Division of Emergency Management's Hurricane Ono planning project. The project assumes a Category V hurricane making landfall in South Florida, which has a population of nearly 7 million people.
Red Cross catastrophic estimates—We reviewed catastrophic estimates from the Red Cross's risk-based capacity building initiative. To develop these estimates, the Red Cross worked with state and local officials and other disaster experts to construct "worst case" disaster scenarios in six high-risk areas of the country, including the four metropolitan areas in our study. The scenarios for these four metropolitan areas were a 7.2 to 7.5 magnitude earthquake in Southern California; a chemical, biological, radiological, nuclear, or major explosion terrorist attack in the Washington, D.C., region; a Category III/IV hurricane in the New York metropolitan area; and a Category V hurricane on the Gulf Coast.
To identify general findings about nationwide preparedness, we compared the capabilities data provided by the voluntary organizations to these catastrophic disaster estimates. We did not attempt to assess the total disaster response capabilities in any single location that we visited or the efficacy of any responses to particular scenarios, such as major earthquakes versus hurricanes. We conducted this performance audit from August 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Voluntary Organizations: FEMA Should More Fully Assess Organizations' Mass Care Capabilities and Update the Red Cross Role in Catastrophic Events. GAO-08-823. Washington, D.C.: September 18, 2008.
Emergency Management: Observations on DHS's Preparedness for Catastrophic Disasters. GAO-08-868T. Washington, D.C.: July 11, 2008.
Homeland Security: DHS Improved Its Risk-Based Grant Programs' Allocation and Management Methods, but Measuring Programs' Impact on National Capabilities Remains a Challenge. GAO-08-488T. Washington, D.C.: March 11, 2008.
National Disaster Response: FEMA Should Take Action to Improve Capacity and Coordination between Government and Voluntary Sectors. GAO-08-369. Washington, D.C.: February 27, 2008.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007.
Emergency Management: Most School Districts Have Developed Emergency Management Plans, but Would Benefit from Additional Federal Guidance. GAO-07-609. Washington, D.C.: June 12, 2007.
Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007.
Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. Washington, D.C.: February 2007.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery Systems. GAO-06-618. Washington, D.C.: September 2006.
Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. Washington, D.C.: June 8, 2006.
Homeland Security Assistance for Nonprofits: Department of Homeland Security Delegated Selection of Nonprofits to Selected States and States Used a Variety of Approaches to Determine Awards. GAO-06-663R. Washington, D.C.: May 23, 2006.
Hurricane Katrina: GAO's Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006.
Statement by Comptroller General David M. Walker on GAO's Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006.
Hurricanes Katrina and Rita: Provision of Charitable Assistance. GAO-06-297T. Washington, D.C.: December 13, 2005.
September 11: More Effective Collaboration Could Enhance Charitable Organizations' Contributions in Disasters. GAO-03-259. Washington, D.C.: December 19, 2002.
Voluntary organizations have traditionally played a major role in the nation's response to disasters, but the response to Hurricane Katrina raised concerns about their ability to handle large-scale disasters. This testimony examines (1) the roles of five voluntary organizations in providing mass care and other services, (2) the steps they have taken to improve service delivery, (3) their current capabilities for responding to mass care needs, and (4) the challenges they face in preparing for large-scale disasters. This testimony is based on GAO's previous report (GAO-08-823), which reviewed the American Red Cross, The Salvation Army, the Southern Baptist Convention, Catholic Charities USA, and United Way of America; interviewed officials from these organizations and the Federal Emergency Management Agency (FEMA); reviewed data and laws; and visited four high-risk metro areas—Los Angeles, Miami, New York, and Washington, D.C. The five voluntary organizations we reviewed are highly diverse in their focus and response structures. They also constitute a major source of the nation's mass care and related disaster services and are integrated into the 2008 National Response Framework. The Red Cross in particular—the only one whose core mission is disaster response—has a federally designated support role to government under the mass care provision of this Framework. While the Red Cross no longer serves as the primary agency for coordinating government mass care services—as it did under the earlier 2004 National Response Plan—it is expected to support FEMA by providing staff and expertise, among other things. FEMA and the Red Cross agree on the Red Cross's role in a catastrophic disaster, but that role is not clearly documented. While FEMA recognized the need to update the 2006 Catastrophic Incident Supplement to conform with the Framework, it does not yet have a time frame for doing so. Since Katrina, the organizations we studied have taken steps to strengthen their service delivery by expanding coverage and upgrading their logistical and communications systems. The Red Cross, in particular, is realigning its regional chapters to better support its local chapters and improve efficiency, and it is establishing new partnerships with local community-based organizations. Most recently, however, a budget shortfall has prompted the organization to reduce staff and alter its approach to supporting FEMA and state emergency management agencies. While Red Cross officials maintain that these changes will not affect improvements to its mass care service infrastructure, the organization has also recently requested federal funding for its governmental responsibilities. Capabilities assessments are preliminary, but current evidence suggests that in a worst-case large-scale disaster, the projected need for mass care services would far exceed the capabilities of these voluntary organizations without government and other assistance, despite the organizations' substantial local and national resources. Voluntary organizations also faced shortages of trained volunteers, as well as other limitations that affected their mass care capabilities. Meanwhile, FEMA's initial assessment does not necessarily include the sheltering capabilities of many voluntary organizations and does not yet address feeding capabilities outside of shelters.
In addition, the ability to assess mass care capabilities and coordinate in disasters is currently hindered by a lack of standard terminology and measures for mass care resources, although efforts are under way to develop such standards. Finding and training more personnel, dedicating more resources to preparedness, and working more closely with local governments are ongoing challenges for voluntary organizations. A shortage of staff and volunteers was the most commonly cited challenge, but we also found that these organizations had difficulty identifying and dedicating funds for preparedness, in part because of competing priorities. However, the guidance for FEMA preparedness grants to states and localities was also not sufficiently explicit with regard to using such funds to support the efforts of voluntary organizations.
To enable DOD to close unneeded bases and realign others, Congress enacted legislation that instituted BRAC rounds in 1988, 1991, 1993, and 1995. A special commission established for the 1988 round made realignment and closure recommendations to the Senate and House Committees on Armed Services. For the 1991, 1993, and 1995 rounds, special BRAC Commissions were set up, as required by legislation, to make specific recommendations to the President, who in turn sent the commissions' recommendations and his approval to Congress. The four commissions generated 499 recommendations—97 major closures and hundreds of smaller base realignments, closures, and other actions. Of the 499 recommendations, 451 required action; the other 48 were modified in some way by a later commission. DOD was required to complete BRAC realignment and closure actions for the 1988 round by September 30, 1995, and for the 1991, 1993, and 1995 rounds within 6 years from the date the President forwarded the recommended actions to Congress. DOD reported that as of September 30, 2001, it had taken all necessary actions to implement the recommendations of the BRAC Commissions for the four rounds. As a result, DOD estimated that it had reduced its domestic infrastructure by about 20 percent. While DOD has closed or realigned bases as recommended by the various BRAC Commissions, other actions, such as the cleanup of environmentally contaminated property and the subsequent transfer of unneeded property to other users, were allowed to continue beyond the 6-year implementation period for each round. Once DOD no longer needs BRAC property, the property is considered excess and is offered to other federal agencies. As shown in figure 1, any property that is not taken by other federal agencies is then considered surplus and is disposed of through a variety of means to state and local governments, local redevelopment authorities, or private parties. The various methods noted in figure 1 for conveying unneeded property to parties outside the U.S. government are targeted, in many cases, to a particular end use for the property. For example, under a public benefit conveyance, state and local governments and local redevelopment authorities acquire surplus DOD property for such purposes as schools, parks, and airports for little or no cost. Under an economic development conveyance, property is transferred for uses that promote economic recovery and job creation. Conservation conveyances, which were recently introduced in the Bob Stump National Defense Authorization Act for Fiscal Year 2003, provide for the transfer of property to qualified not-for-profit groups for natural resource and conservation purposes. In other cases, property can be conveyed to nonfederal parties through the other methods shown in figure 1 without regard to a particular end use. For example, property can be sold, or special congressional legislation can direct its transfer to a particular entity. In the early years of BRAC, DOD was projecting higher revenue from land sales than it subsequently experienced. DOD had originally projected about $4.7 billion in revenue from such sales for the four closure rounds; however, according to the fiscal year 2005 budget, total land sales and related revenue were about $595 million for those rounds. The decrease in expected sales is attributable primarily to national policy changes and legislation that emphasize assisting communities that are losing bases.
Nonetheless, in recent years the Navy has shown renewed interest in selling BRAC property, as demonstrated by the sale of some unneeded property at the former Tustin Marine Corps Air Station in California for $208.5 million. Moreover, the Navy has also indicated that it intends to sell portions of the former Naval Station Roosevelt Roads in Puerto Rico. To what extent sales will play more of a role in disposing of unneeded property arising from the 2005 BRAC round remains to be seen. Reducing excess infrastructure and generating savings for the department were the key reasons for conducting the prior BRAC rounds. The net savings from implementing BRAC actions are calculated by deducting the costs necessary to implement those actions from the estimated savings generated by the resulting reduction in excess infrastructure. These savings are most often cost avoidances—costs that DOD might have incurred if BRAC actions had not taken place. Some of the savings are one-time (e.g., canceled military construction projects), but most often the savings represent an avoidance of recurring spending (e.g., personnel reductions). In this respect, eliminating or reducing recurring base support costs (e.g., physical security, fire protection, utilities, property maintenance, accounting, payroll, and a variety of other services) at closed and realigned bases is a major component of BRAC savings. The value of these recurring savings has become the largest and most important portion of BRAC's overall estimated savings. DOD must comply with cleanup standards and processes under applicable laws, regulations, and executive orders in conducting assessments and cleanup of its unneeded base property. The time needed to accomplish cleanup activities can extend many years beyond the 6 years allowed under BRAC legislation for ceasing military operations and closing bases. The status of cleanup efforts can also affect the transfer of title from DOD to other users. The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) provides the framework for responding to most contamination problems resulting from hazardous waste disposal practices, leaks, spills, or other activity that has created a public health or environmental risk. DOD performs its cleanups in coordination with regulatory agencies and, as appropriate, with other potentially responsible parties, including current property owners. While CERCLA had originally authorized property transfers only after cleanup actions had been taken, the act was amended in 1996 to expedite the transfer of contaminated property under certain conditions through a so-called early transfer authority. While use of this authority does allow for the possible concurrent cleanup and reuse of the property, the requirement remains that contaminated sites be cleaned up to ensure that transferred BRAC property is not harmful to human health or the environment and can support new use. We have reported on base closure issues from the prior BRAC rounds on several occasions (see app. VI). Although some of our reports have focused on concerns about implementation actions at a specific location, in December 1998 and April 2002 we issued two broader BRAC status reports addressing DOD-wide closure issues. These reports discussed the magnitude and precision of cost and savings estimates, the progress of environmental cleanup and property transfer, and the impact on communities and their recovery.
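The net savings calculation described above reduces to a simple relationship: gross savings, which consist mostly of recurring cost avoidances plus any one-time savings, minus the costs of implementing the closure and realignment actions. The sketch below is illustrative only; the dollar amounts are hypothetical and are not DOD's actual estimates.

```python
def brac_net_savings(recurring_savings_per_year, years, one_time_savings, implementation_costs):
    """Net savings = recurring cost avoidances over the period, plus one-time savings,
    minus the costs of implementing the closure and realignment actions."""
    gross_savings = recurring_savings_per_year * years + one_time_savings
    return gross_savings - implementation_costs

# Hypothetical figures, in billions of dollars: $2.0B per year in avoided base support
# costs over 6 years, plus $0.5B in canceled construction, against $8.0B in implementation costs.
print(brac_net_savings(2.0, 6, 0.5, 8.0))  # 4.5 (billions of dollars in net savings)
```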
We also issued reports in July and August 2001 that updated closure-related implementation data and reaffirmed the primary results of our prior work. A brief summary of these reports is as follows: In our December 1998 report, we concluded that BRAC actions were on track. Cost and savings estimates were substantial but not precise because the services had not routinely updated their savings estimates, as they had their cost estimates. Environmental cleanup was progressing, but it was costly and time-consuming. Property disposal was progressing slowly because of factors that were not completely under DOD's control and that were difficult to manage, such as identifying recipients for the property, planning the associated transfers, and addressing environmental concerns. Most communities where bases had closed were recovering, and a majority was faring well economically relative to key national economic indicators. In our July 2001 report, we concluded that estimated BRAC net savings had reportedly increased to $15.5 billion from the $14 billion we reported in our December 1998 report. Accumulated savings began to surpass accumulated costs in fiscal year 1998. We observed that BRAC savings were real and substantial, but limitations existed in DOD's effort to track costs and savings that affected the precision of its estimates. In our August 2001 report, we concluded that BRAC closing and realignment actions were essentially completed, but the subsequent transfer of unneeded base property was only partially completed. Environmental cleanup was progressing but would require many years to fully complete. Most communities were recovering from the economic impacts of base closures because of several factors, such as a strong national or regional economy and federal assistance programs. In our April 2002 report, we concluded that most (about 58 percent) of the unneeded former base property had not yet been transferred to other users, the closure process was generating substantial savings (about $16.7 billion, although the savings estimates were imprecise), the total expected environmental cleanup costs were still within range of the cost estimates made in 1996, and most communities surrounding closed bases were faring well economically in relation to key national economic indicators. As of September 30, 2004, nearly 72 percent (364,000 acres) of the approximately 504,000 acres of unneeded BRAC property from the prior rounds had been transferred to other federal or nonfederal entities. When leased land is added to this acreage, the amount of unneeded BRAC property that is in reuse increases to 90 percent. The remaining property (140,000 acres) has not been transferred primarily because of environmental cleanup issues. DOD has used and continues to use several methods to transfer property and expedite its reuse. Of the approximately 504,000 unneeded acres available for disposal external to DOD, 72 percent had been transferred to either federal or nonfederal entities, while 28 percent, including leased acreage, remains in DOD's inventory. DOD has made progress in transferring property in the aggregate since our 2002 report, having increased the transfer rate from 42 percent to 72 percent (see fig. 2). The transfers of property at the Naval Air Facility in Adak, Alaska, and the Sierra Army Depot, California, are the largest transfers since our April 2002 report, accounting for a combined total of nearly 129,000 acres.
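The transfer and reuse percentages cited above follow directly from the acreage counts. The following is a minimal arithmetic sketch in Python, using the rounded acreage figures reported in this section (the roughly 91,000 leased acres are detailed in the discussion that follows); it is illustrative only and is not an official DOD calculation.

```python
# Illustrative check of the transfer and reuse rates using the rounded
# acreage figures cited in this report (all values are approximate).
total_unneeded = 504_000   # unneeded BRAC acreage from the prior four rounds
transferred = 364_000      # acreage transferred to federal or nonfederal entities
leased = 91_000            # untransferred acreage under lease (discussed below)

transfer_rate = transferred / total_unneeded             # roughly 0.72
reuse_rate = (transferred + leased) / total_unneeded     # roughly 0.90
untransferred = total_unneeded - transferred             # roughly 140,000 acres

print(f"Transferred:   {transfer_rate:.0%}")      # ~72 percent
print(f"In reuse:      {reuse_rate:.0%}")         # ~90 percent
print(f"Untransferred: {untransferred:,} acres")  # ~140,000 acres
```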
A breakdown of the current status of unneeded BRAC property shows that (1) 52 percent had been transferred to nonfederal entities, (2) 20 percent had been transferred to other federal agencies, (3) 18 percent had been leased but not transferred, and (4) 10 percent remained untransferred and was awaiting future disposition (see fig. 3). Even though DOD has 140,000 acres of its BRAC property remaining to be transferred, much of this land is in long-term lease with other users. Altogether, the services have nearly 91,000 acres (65 percent) of their untransferred property under lease, leaving 49,000 acres (35 percent) that have neither been transferred nor put into reuse. The department expects that this property will eventually be transferred to nonfederal users. Leased property, while not transferred to the user, can afford the user and DOD some benefits. Communities, for example, can opt for leasing, while awaiting final environmental cleanup, as an interim measure to promote property reuse and job creation. DOD can also benefit when the communities assume responsibility and pay for protecting and maintaining the property. By adding leased acres to the number of transferred acres, the amount of unneeded BRAC property in reuse rises to 90 percent. As we have reported in the past, environmental cleanup constraints have delayed, and continue to delay, the services from rapidly transferring unneeded BRAC property. Army data show that about 82 percent of the Army's approximately 101,000 untransferred acres has some kind of environmental impediment, such as unexploded ordnance (UXO) or some level of chemical contamination that requires cleanup before transfer can take place. Navy data show that about 65 percent of the Navy's almost 13,000 untransferred acres could not be transferred for environmental reasons. Likewise, about 98 percent of the Air Force's approximately 24,000 untransferred acres remains untransferred because of environmental cleanup issues. Table 1 shows those BRAC installations with untransferred acreage that had substantial estimated costs for fiscal year 2004 and beyond for completing environmental cleanup actions. The estimated completion costs for these BRAC installations account for nearly 60 percent of DOD's future BRAC environmental cleanup estimates for the previous rounds. Further detail on environmental costs for BRAC property is included in the next section of this report. As previously discussed, DOD has several options available to expedite the transfer of its unneeded property for further reuse by other entities. The following provides a brief summary of the various methods that have been used to transfer BRAC property to nonfederal users: Public benefit conveyances: As noted earlier, this method is used to transfer property primarily to state and local governments specifically for an exclusive and protected public use, usually at little or no cost. This type of conveyance is sponsored by a federal agency that is closely aligned with the property's intended use. For example, the Federal Aviation Administration handles public benefit conveyances of BRAC airfields and facilities, and the National Park Service sponsors public benefit conveyances for new public parks and recreation facilities. Nearly 18 percent of the BRAC acreage transferred to nonfederal users in the prior rounds was accomplished through this method.
Economic development conveyances: As noted earlier, this method is used to transfer property to local redevelopment authorities for the purpose of creating jobs and promoting economic activity within the local community. Under this transfer method, many communities could receive property at or below fair market value, and communities in rural areas could receive it at no cost. The National Defense Authorization Act for Fiscal Year 2000 required all future economic development conveyances to be at no cost and permitted those currently in force to be converted to no-cost conveyances if certain conditions were met. According to DOD and community officials, this method gained in popularity with the adoption of the no-cost provision, which, in addition to saving money for the new user, virtually eliminated the delays resulting from prolonged negotiations over the fair market value of the property and accelerated economic development and job creation. We note, however, that the National Defense Authorization Act for Fiscal Year 2002 included a provision stipulating that DOD is to seek to obtain fair market value for BRAC-related transfers of property in the upcoming 2005 round. Although the BRAC law still allows DOD to transfer properties for economic development at no cost under certain circumstances, the general requirement for the 2005 round to seek fair market value may affect the use of this method of conveyance. Nearly 32 percent of the BRAC acreage transferred to nonfederal users in the prior rounds was accomplished through economic development conveyances. Conservation conveyances: This method was used by DOD for the first time in September 2003 to transfer property for natural resource and conservation purposes. Under this method, the Army transferred almost 58,000 acres from the Sierra Army Depot, California, to the Honey Lake Conservation Team, which is made up of two nonprofit organizations—the Center for Urban Watershed Renewal and the Trust for Public Lands—and two private-sector companies. This is the largest single transfer of surplus BRAC property that the Army has undertaken. Nearly 22 percent of the BRAC acreage transferred to nonfederal users in the prior rounds was accomplished through this method. Other conveyances: Unneeded BRAC property can also be transferred through special legislation, reversion, lease termination/expiration, or sales. Congress can, through special legislation, determine the terms and conditions for transferring specific BRAC properties. For example, through special congressional legislation, the Navy transferred over 47,000 acres of its 71,000-acre Adak, Alaska, Naval Air Facility to a local redevelopment authority in March 2004 through the Department of the Interior in exchange for other land that the Navy needed. Almost 19 percent of BRAC acreage was transferred to nonfederal users through special legislation. DOD data show that only 3 percent of the nonfederal conveyances were reversions. Additionally, the termination or expiration of a lease on BRAC property for nonfederal users accounted for about 4 percent of the transfers, while negotiated and public sales accounted for only 4 percent of the property transfers. Figure 4 summarizes the acreage transfers by the various conveyance methods. In most cases, unneeded property on a BRAC base is divided into parcels and transferred in this manner according to intended reuse plans. Thus, most individual transfers are for less than 2,000 acres. However, in some cases, the amounts can be larger.
For example, the transfers of Naval Air Facility Adak, Alaska (about 71,000 acres), and Sierra Army Depot, California (about 58,000 acres), are two large transfers that have occurred since our April 2002 report. Table 2 shows the transfer methods used to convey the 5 largest tracts of BRAC property for each service across the prior rounds to date. DOD can transfer unneeded BRAC property, even if all environmental cleanup actions have not been completed, through a special authority granted by Congress called early transfer authority. The authority must be used in conjunction with one of the conveyance methods, such as an economic development conveyance, authorized to transfer BRAC property. The department credits early transfer authority for allowing it to put BRAC property into reuse much faster by conveying the property through one of its transfer authorities while concurrently meeting cleanup obligations. We initially reported in 2002 that several factors were working against the widespread application of this authority, including community aversion to taking risks, the absence of ready-to-implement reuse plans, and a lack of support from local and state regulators. Furthermore, we noted that exercising the authority might require DOD to commit more funds in the short term than are available to meet environmental cleanup requirements. Regardless of when or how BRAC property is transferred, liability for cleanup in compliance with applicable federal and state regulatory requirements remains with DOD. Cleanup of property subject to the early transfer authority does not necessarily have to be conducted exclusively by DOD. DOD can share cleanup actions with the transferee, or the transferee can conduct and pay for cleanup actions. DOD can also enter into agreements with a transferee, usually a local redevelopment authority, for the privatization of cleanup efforts. In either case, the department funds the cleanup and generally retains liability for future costs stemming from the discovery of additional environmental contamination associated with prior DOD activities. As the early transfer process has evolved over its short history, the use of the authority has increased. The Army has transferred almost 8,300 acres; the Navy has transferred over 9,500 acres; and the Air Force has transferred over 700 acres using early transfer authority. These figures represent more than twice the combined acreage (about 8,225 acres) that we reported in 2002 as being transferred under this authority. According to DOD financial data, the four prior BRAC rounds generated an estimated $28.9 billion in net savings through fiscal year 2003. Moreover, DOD expects to accrue additional annual recurring savings or cost avoidances of about $7 billion in fiscal year 2004 and thereafter. As we have previously reported, however, the cost and savings projections that DOD uses to estimate net savings are imprecise because the military services have not regularly updated their savings projections and DOD's accounting systems do not track estimated savings. Moreover, DOD has not incorporated all base closure-related costs in its estimates, thus tending to overestimate savings. On the other hand, the estimated net savings could be greater than DOD has reported because some costs attributed to the closures, such as environmental cleanup, might have been incurred even if the bases had remained open. DOD has a legal obligation to conduct environmental cleanup irrespective of closing or realigning an installation.
Our analysis of DOD data shows that the department had accrued an estimated $28.9 billion in net savings or cost avoidances through fiscal year 2003 for the four prior BRAC rounds. This amount, which includes costs and estimated recurring savings from fiscal years 2002 and 2003, represents an increase over the $16.7 billion in net savings accrued as of fiscal year 2001 that we cited in our 2002 report. In calculating net savings, DOD deducts the costs of implementing BRAC actions for the four closure rounds from the estimated savings. As figure 5 shows, the cumulative estimated savings surpassed the cumulative costs to implement BRAC actions in 1998, and the net savings have grown and will continue to grow from that point, even though some costs (e.g., environmental cleanup) have been incurred after that time and some costs will continue well beyond 2003. Our analysis shows that the rate of net savings accumulation increased because the cumulative BRAC costs flattened out just before the 6-year implementation period for the last round ending in fiscal year 2001. Most expenses associated with closures and realignments were incurred through fiscal year 2001; most of the expenses beyond fiscal year 2001 were primarily for environmental cleanup. Through fiscal year 2003, the cumulative costs to implement the four prior round actions amounted to about $23.3 billion (see fig. 5). As shown in figure 6, approximately one-third ($7.8 billion) of this amount was spent for operations and maintenance, such as the maintenance and repair to keep facilities and equipment in good working order, as well as civilian severance and relocation costs. A little more than one-third ($8.3 billion) was spent on environmental cleanup and compliance activities, for example, to reduce, remove, and recycle hazardous wastes and remove unsafe buildings and debris from closed bases. Finally, a little less than one-third ($6.7 billion) was used for military construction, including renovating existing facilities and constructing new buildings at military bases that were not closed to accommodate relocating military units and various functions. According to DOD data, BRAC cumulative savings or cost avoidances will rise steadily for an indefinite period as BRAC actions are completed. As figure 7 shows, DOD estimates that it accrued BRAC savings of $52.2 billion through fiscal year 2003 as a result of eliminating or reducing operation and maintenance costs, including base support costs, and eliminating or reducing military and civilian personnel costs. Of this amount, about half ($26.8 billion) can be attributed to savings from operation and maintenance activities, such as terminating or reducing physical security, fire protection, utilities, property maintenance, accounting, civilian payroll, and a variety of other services that have associated costs. An additional $14.7 billion in estimated savings resulted from military personnel reductions. Moreover, DOD expects to accrue an estimated $7 billion in annual recurring savings in fiscal year 2004 and beyond for the four BRAC rounds. This amount represents an increase of approximately $486 million from our prior reporting in 2002 and is attributable to inflation over that time period. The savings and cost estimates used by DOD to calculate the net savings at its BRAC-affected bases are imprecise, primarily because the military services have not periodically updated their savings estimates and DOD does not include all costs associated with BRAC closures in its estimates. 
Further, net savings may be larger than DOD estimates because some environmental and construction costs associated with ongoing environmental and facility recapitalization programs at BRAC-affected bases would have at least partially offset future costs at those locations even if the bases had not been closed or realigned. The results of our prior work showed that the military services, despite DOD guidance that directs them to update savings estimates in their annual budget submissions, had not periodically updated these estimates, thereby contributing to imprecision in overall BRAC estimated net savings figures. Moreover, a fundamental limitation exists in DOD's accounting systems, which, like other accounting systems, are not oriented toward identifying and tracking savings. Other reasons cited by service officials are that updating savings has not been a high priority and that it is a labor-intensive process that could be costly. Nonetheless, the periodic updating of estimates is important, especially in view of the upcoming 2005 BRAC round, in order to increase their accuracy for DOD and congressional decision makers. As early as 1998, DOD reported it had plans to improve its savings estimates for the implementation of future BRAC rounds. In addition, in our April 2002 report, we recommended that DOD develop a DOD-wide systematic approach for the periodic updating of initial closure savings estimates, along with an oversight mechanism to ensure these updates are accomplished for the upcoming 2005 BRAC round. We continue to believe this recommendation remains valid. DOD has not yet acted on our recommendation, but DOD officials told us that they intend to implement a system to better track savings for implementing the upcoming round actions. DOD's reported costs for the prior BRAC rounds are not comprehensive because they do not include certain BRAC-related costs that are incurred either by DOD or by other governmental agencies. For example, DOD's calculation of one-time estimated net savings does not include BRAC-related economic assistance costs, most of which are incurred by federal agencies other than DOD. As of September 30, 2004, federal agencies reported that they had spent about $1.9 billion (an increase from the $1.5 billion in our 2002 report) to assist BRAC-affected communities and individuals for such purposes as base reuse planning, airport planning, job training, infrastructure improvements, and community economic development. These activities include the following: About $611 million was provided by the Department of Commerce's Economic Development Administration to assist communities with infrastructure improvements, building demolition, and revolving fund loans. About $760 million was provided by the Federal Aviation Administration to assist with converting former military airfields to civilian use. About $223 million was provided by the Department of Labor to help communities retrain workers who lost their jobs. The Department of Labor has not provided additional funding since we last reported in 2002. About $280 million was provided by DOD's Office of Economic Adjustment to help communities plan and implement the reuse of BRAC bases. While these costs represent a relatively small percentage (about 7 percent) of the overall net savings estimate through 2003, they do demonstrate the imprecision of the overall BRAC savings estimate. However, our analysis of DOD and other federal agencies' data shows that this percentage will most likely diminish over time as the net savings continue to grow.
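A minimal sketch, using the rounded dollar figures cited above, shows how the net savings estimate and the economic assistance share relate; the values are approximations drawn from this report, not a recalculation of DOD's accounts.

```python
# Rounded figures, in billions of dollars, as cited in this report.
estimated_savings = 52.2      # cumulative estimated BRAC savings through FY2003
implementation_costs = 23.3   # cumulative BRAC implementation costs through FY2003
net_savings = estimated_savings - implementation_costs
print(f"Net savings through FY2003: ${net_savings:.1f} billion")  # ~$28.9 billion

# Federal economic assistance to BRAC-affected communities (billions of dollars),
# which DOD does not include in its net savings calculation.
assistance = 0.611 + 0.760 + 0.223 + 0.280  # Commerce/EDA, FAA, Labor, DOD/OEA
print(f"Economic assistance: ${assistance:.1f} billion")        # ~$1.9 billion
print(f"Share of net savings: {assistance / net_savings:.1%}")  # ~6.5 percent
```

The computed share comes to roughly 6.5 percent, which is consistent with the approximately 7 percent cited above once rounding is taken into account.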
While the noninclusion of certain costs, as noted above, has the tendency of overstating savings or cost avoidances, DOD’s difficulty in providing precise estimates is further complicated by the fact that some BRAC actions could produce savings that are not captured in its net savings estimates. For example, the inclusion of BRAC environmental cleanup costs in calculating net savings has the effect of overstating costs and understating net savings for DOD because the department has a legal obligation to conduct environmental cleanup irrespective of closing or realigning an installation. A similar case can be made for military construction projects in the BRAC program. While DOD had expended significant BRAC funds (about $6.7 billion through fiscal year 2003) on military construction at its receiving bases, it would have likely incurred many of these costs over time under its facilities capital improvement initiatives if the closing bases had remained open. Our analyses of DOD data show that although environmental cleanup cost estimates at BRAC sites are within the range of prior projections, they may fluctuate because of unknown or undetermined future environmental cleanup responsibilities or improved cleanup techniques. DOD expected to spend an estimated $3.6 billion in fiscal year 2004 and beyond to complete environmental cleanup on BRAC properties, bringing the total BRAC environmental costs to $11.9 billion, which is still within prior estimates. The estimates of future projected liabilities have decreased since last year as a result of reported focused management oversight and review of restoration costs and schedules, completion of more cleanup actions, and reevaluation of some sites. However, the estimated liabilities may change due to unforeseen or undetermined environmental liabilities, such as the discovery of additional UXO or contaminants, which may exist on BRAC properties. Moreover, revisions to cleanup standards or the intended reuse of the land not yet transferred could prompt the need to change cleanup requirements, which would in turn affect costs. Our analysis shows that the total estimated environmental cleanup cost of about $11.9 billion for the prior BRAC rounds is within the range of prior program estimates. The cost estimate is slightly higher than DOD’s previous estimate of $10.5 billion in 2002 and $11.3 billion in 1996. DOD had obligated approximately $8.3 billion in BRAC environmental cleanup and compliance costs through fiscal year 2003, and it estimates that future costs for fiscal year 2004 and beyond will now amount to $3.6 billion. The $3.6 billion estimate for future BRAC environmental liabilities is about $1 billion less than DOD had previously projected for fiscal year 2003 and beyond. The decrease is attributable primarily to about $761 million that DOD spent on environmental cleanup and compliance in fiscal year 2003 and to a number of actions taken by the services. For example, the Air Force reportedly applied more focused management oversight and review of estimated restoration costs and schedules to the Air Force Restoration Information Management System, accounting for a $174.7 million decrease; the Navy reduced its estimates based largely on conservative project execution rates, accounting for a $137.4 million decrease; and the Army recharacterized some of its cleanup sites, accounting for a $56.5 million reduction. 
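A minimal arithmetic sketch, again using the rounded figures reported above, shows how the total environmental cleanup estimate and the roughly $1 billion drop in the future-cost estimate are composed; the values are taken from this report and are approximate.

```python
# BRAC environmental cleanup figures, in billions of dollars, as cited above.
obligated_through_fy2003 = 8.3  # cleanup and compliance obligations through FY2003
estimated_future_costs = 3.6    # estimated costs for FY2004 and beyond
total_cleanup_estimate = obligated_through_fy2003 + estimated_future_costs
print(f"Total cleanup estimate: ${total_cleanup_estimate:.1f} billion")  # ~$11.9 billion

# Components of the roughly $1 billion decrease in the future-cost estimate,
# in millions of dollars: FY2003 spending plus the service-reported adjustments.
decrease = 761 + 174.7 + 137.4 + 56.5
print(f"Explained decrease: about ${decrease:,.0f} million")  # ~$1,130 million
```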
However, DOD acknowledged in its 2003 Performance and Accountability Report that the total future environmental liability estimates for remaining BRAC sites may need to be adjusted because the DOD Inspector General questioned the reliability of DOD environmental cost estimates, primarily citing incidents of a lack of supporting documentation for the estimates and incomplete audit trails. Estimating the costs of future environmental cleanup on BRAC properties is complicated by the possibility that these properties might contain unknown or emerging environmental hazards, which could change cleanup costs. For example, costs could change as the result of the discovery of additional UXO or of previously unregulated chemical contaminants or waste in the ground or groundwater. Estimates of future liabilities may also change if certain federal environmental standards change, the intended use of yet-to-be-transferred BRAC property is revised, or cleanup techniques are improved. As of the end of fiscal year 2003, DOD stated that about 78 percent of cleanup activities on BRAC sites with identified hazardous waste were reportedly complete and met the CERCLA standards. However, there are questions about the extent of additional potential cleanup costs associated with UXO and perchlorate contamination on various DOD sites, including BRAC installations. The following provides an update on DOD’s activities concerning these particular hazards: UXO: While clearing BRAC property of UXO for further reuse has presented a difficult and costly challenge for the department, DOD is making progress through its Military Munitions Response Program. This program is designed to address UXO hazards not only on BRAC property but all DOD property, with the exception of operational ranges. Through fiscal year 2003, the department had addressed UXO problems on 148 of the 196 BRAC sites (76 percent) on 32 BRAC installations where UXO was identified. It completed UXO cleanup on 126 of the total sites (64 percent), and it is currently working on the other 22 sites that were addressed. While all sites were identified prior to fiscal year 2001, DOD had not yet completed establishing program goals or developing metrics to track projects, assess risks, and prioritize the remaining cleanup sites. The Navy estimates that its BRAC UXO cleanup costs for fiscal year 2004 and beyond will be about $32.3 million and will involve 2,353 acres. Similarly, the Army estimates that its remaining UXO cleanup costs will approach $496 million on 21,000 acres, with the largest costs (about $266 million on 4,500 acres) forecasted at the former Fort Ord base in California. The Air Force estimates that it will spend nearly $2.3 million on UXO cleanup costs affecting 180 BRAC acres, of which $2 million will likely be spent on the cleanup of the former Carswell Air Force Base, Texas. Perchlorate: Perchlorate is a chemical munitions constituent that is present on some BRAC bases and which may cause adverse health effects by contaminating drinking water. Health experts have not conclusively determined what amount of perchlorate poses a health risk for humans, and no federal standard exists for allowable levels of perchlorate in drinking water. Nonetheless, the existence of perchlorate does pose a potential future liability for DOD, but that liability would depend on the standard that may be set in the future as well as the extent of its presence on BRAC installations and the intended reuse of the property. 
However, it should be noted that this issue could affect open as well as closing bases. In September 2003, DOD required the military components to assess the extent of perchlorate occurrence at active and closed installations and at its formerly used defense sites. In addition, DOD invested $27 million to conduct research on the potential health effects, environmental impacts, and treatment processes for perchlorate. In a report directed by Congress, DOD was required to identify the sources of perchlorate on BRAC properties and describe its plans to clean up perchlorate contamination on these sites. DOD officials stated that they assessed 14 sites, which did not include any BRAC property already transferred or deeded to other entities. The department issued its assessment in July 2004, stating that it had adopted a perchlorate sampling policy that includes untransferred BRAC properties and that it will commit to integrating perchlorate remediation into its cleanup program once a regulatory standard is established. Most communities have recovered or are recovering from the impact of base closures, although results have been more mixed recently, reflecting some negative impact from the national economic downturn of recent years. DOD data indicate that the percentage of DOD civilian jobs lost at the bases that have been replaced through reuse has increased since our 2002 report. Moreover, recent economic data show that affected BRAC communities are faring well when compared with national economic indicators. Although the average unemployment rate increased for most of the 62 BRAC communities we reviewed in 2002, nearly 70 percent had unemployment rates lower than the national average. In addition, 48 percent of communities had annual real per capita income growth rates above the U.S. average, as compared with the 53 percent stated in our last report. The growth rate declined for 74 percent of all BRAC communities as compared with our 2002 report. As we have reported in the past, the recovery process has not necessarily been easy, with the strength of the national, regional, and local economies having a significant bearing on the recovery of any particular community facing a BRAC closure. The redevelopment of base property is widely viewed as an important component of economic recovery for BRAC-affected communities. While not the only determinant of economic recovery for surrounding communities, it can, nevertheless, be an important catalyst for recovery efforts. The closure or realignment of military bases creates job losses at these facilities, but subsequent redevelopment of the former bases' property provides opportunities for creating new jobs. As DOD last reported, as of October 31, 2003, almost 72 percent (92,921) of the 129,649 DOD civilian jobs lost on military bases as a result of realignments or closures in the prior BRAC rounds had been replaced at these locations. This is 10 percentage points higher than the 62 percent (79,740) we reported in 2002, and over time the number of jobs created will likely increase as additional redevelopment occurs. See appendix II for a detailed listing of jobs lost and created at major BRAC locations during the prior four rounds. Unemployment rates in BRAC-affected communities continue to compare favorably with the national average.
Since 1997 (after completion of the implementation periods for the first two rounds in 1988 and 1991) and through the implementation periods of the last two rounds (1993 and 1995), about 70 percent of the 62 BRAC-affected communities have consistently been at or below the national unemployment rate (see fig. 8). According to our analysis of unemployment rates for the 7-month period ending July 31, 2004, most of the 62 BRAC-affected communities compared favorably with the national average, a finding consistent with the results we reported in 2002. During this period, 43 of the 62 communities (69 percent) affected by base closures had unemployment rates at or below the average 7-month national rate of 5.8 percent. This is one fewer community than in our 2002 report, when 44 communities (71 percent) had average unemployment rates lower than the (then) average 9-month national rate of 4.6 percent. Of the BRAC communities with higher-than-average unemployment rates for calendar year 2004 through July 2004, four had double-digit rates: Merced County, California (Castle Air Force Base), 15.8 percent; Mississippi County, Arkansas (Eaker Air Force Base), 13.0 percent; Salinas, California (Fort Ord Army Base), 11.1 percent; and Iosco County, Michigan (Wurtsmith Air Force Base), 10.2 percent. Salinas, California, is the one addition to the three communities that we also cited in our 2002 report as having double-digit unemployment rates. Appendix III provides additional detail on the average unemployment rates for the 62 communities. Annual real per capita income growth rates for BRAC-affected communities exhibit mixed results. The latest available data (1999-2001 time frame) show that 30 (48 percent) of the 62 communities we studied had an estimated average real per capita income growth rate that was above the national average of 2.2 percent. This is a decline from our 2002 report, in which 33 communities (53 percent) matched or exceeded the national rate of 3.03 percent during the 1996-1999 time frame. Additionally, our current analysis shows that of the 32 communities below the national average, 6 communities (10 percent) had average annual per capita income growth rates that were close to the national average (defined as within 10 percent), while the remaining 26 communities (42 percent) fell further below the national average growth rate. Forty-six (74 percent) of the 62 communities had lower per capita income growth rates than when we last reported on them in 2002. Three communities—Merced, California (Castle Air Force Base); Austin-San Marcos, Texas (Bergstrom Air Force Base); and Carroll County, Illinois (Savanna Army Depot)—had negative growth rates. By comparison, our 2002 report showed that no communities experienced a negative growth rate. Appendix IV provides additional detail on the average annual real per capita income growth rates for the 62 communities. As DOD prepares to undertake another round of base realignments and closures in 2005, we note that the department has made progress in completing postrealignment and closure actions from the prior four rounds since our last update in 2002. Seventy-two percent of former base property has been transferred, and about 90 percent is in reuse if leased property is considered. And, as reported in the past, environmental cleanup requirements present the primary challenge to transferring the remaining property.
Although we are making no recommendations in this report, we believe that our April 2002 report recommendation underscoring the need for a DOD-wide systematic approach for the periodic updating of savings estimates, along with an oversight mechanism to ensure these updates are accomplished for the 2005 BRAC round, remains valid. More specifically, we recommended that the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the Under Secretary of Defense (Comptroller and Chief Financial Officer), develop (1) a DOD-wide systematic approach for the periodic updating of initial closure savings estimates and (2) an oversight mechanism to ensure that the military services and components update such estimates in accordance with the prescribed approach. While DOD has stated its intent to do so, it has not acted on this recommendation. The Deputy Under Secretary of Defense (Installations and Environment) provided technical comments on a draft of this report that were incorporated as appropriate. DOD concurred with the need to improve the department's procedures for accounting for savings from the 2005 BRAC round, as we had previously recommended in our April 2002 report. DOD's comments are included in this report as appendix V. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, Army, Navy, and Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-8412, or my Assistant Director, James Reifsnyder, at (202) 512-4166, if you or your staff have any questions concerning this report. Key reports related to base closure implementation issues are listed in appendix VI. Staff acknowledgements are provided in appendix VII. To assess the reliability of data received from the Department of Defense (DOD), Department of Commerce, Department of Labor, and other federal agencies and used in this report, we reviewed available Inspector General and internal audit reports, internal reviews and studies, and contractor and consultant studies related to these databases. We also reviewed available reports of congressional hearings or copies of congressional testimony related to the data, as well as summaries of ongoing or planned audits, reviews, and studies of the systems or the data, and we requested documentation on quality practices inherent in the data systems, such as edit checks, data entry verification, and exception reports. Finally, we interviewed department and agency officials knowledgeable about their information systems to assess the reliability of those systems and the data they provide. Based on these steps and the steps discussed in the following paragraphs, we determined the data to be sufficiently reliable for the purposes of this report. To determine DOD's progress in transferring unneeded base property to other users, we reviewed base realignment and closure (BRAC) property disposition plans and actual property transfers as of September 30, 2004, and compared them with similar data presented in our April 2002 report. We discussed property transfer reporting systems with each service to validate the reliability of the data reported to DOD.
We also categorized the property disposition data by the various transfer methods used (e.g., economic development conveyances) to gain a sense of the predominant method. With regard to the untransferred acreage, we determined the primary impediments to property transfers by examining data for those former bases where unneeded BRAC property had not yet been transferred as of September 30, 2004. We also collected data and obtained the military services' views on the use of the so-called early transfer authority, under which property can be transferred under certain conditions before an environmental cleanup remedy is in place. Furthermore, we collected and analyzed data on the use of no-cost economic development conveyances to transfer property and stimulate its reuse. Finally, because leasing is often used as an interim measure to make property available to users while awaiting property transfer, we collected and analyzed data related to leased property. To determine the magnitude of the net savings from the four prior BRAC rounds, we reviewed DOD's annual BRAC budget submissions and interviewed BRAC and financial officials from the services and the Office of the Secretary of Defense. To ascertain the extent to which cost and savings estimates have changed over time, we compared the data contained in DOD's fiscal year 2005 BRAC budget submission and related documentation with similar data in DOD's fiscal year 2002 submission, which was the latest budget documentation available when we produced our last update report in April 2002. Through this comparison, we identified where major changes had occurred in the various cost and savings categories within the BRAC account and interviewed DOD officials regarding the rationale for the changes. To gain a sense of the accuracy of the cost and savings estimates, we relied primarily on our prior BRAC reports and reviewed reports issued by the Congressional Budget Office, DOD, the DOD Inspector General, and service audit agencies. We also reviewed the annual military service budget submissions for fiscal years 2002 through 2005 to determine how frequently changes were made to the cost and savings estimates. In assessing the completeness of the cost and savings data, we reviewed the component elements considered by DOD in formulating overall BRAC cost and savings estimates. Because DOD did not include in its estimates federal expenditures to provide economic assistance for communities and individuals affected by BRAC, we collected these data from the Department of Labor, the Federal Aviation Administration, the Department of Commerce (Economic Development Administration), and DOD's Office of Economic Adjustment. Also, we reviewed the cost estimates for environmental cleanup activities beyond fiscal year 2003 because they had the effect of reducing the expected annual recurring savings for the four rounds. To assess the economic recovery of communities affected by the BRAC process, we examined the same communities that we analyzed in our April 2002 report, those where more than 300 civilian jobs on military bases were eliminated during the prior rounds. We used unemployment and real per capita income growth rates as measures to analyze changes in the economic condition of communities over time and in relation to national averages.
We used unemployment and real per capita income as key performance indicators because (1) DOD used these measures in its community economic impact analysis during the BRAC location selection process and (2) economists commonly use these measures in assessing the economic health of an area over time. While our assessment provides an overall picture of how these communities compare with the national averages, it does not necessarily isolate the condition, or the changes in that condition, that may be attributed to a specific BRAC action. We performed our review from November 2003 through October 2004 in accordance with generally accepted government auditing standards. The closure or realignment of military bases creates job losses at these facilities, but subsequent redevelopment of the former bases’ property provides opportunities for creating new jobs. The data presented in table 3 include civilian jobs lost and created at major base realignments and closures during the prior four BRAC rounds, as of October 31, 2003. The data do not include the job losses that may have occurred elsewhere in a community, nor do they capture jobs created from other economic activity in the area. As figure 9 shows, 18 (75 percent) of the 24 BRAC-affected localities situated west of the Mississippi River had unemployment rates equal to or less than the U.S. average rate of 5.8 percent during January through July 2004. The other 6 locations had unemployment rates greater than the U.S. rate. As figure 10 shows, 26 (66 percent) of the 38 BRAC-affected localities situated east of the Mississippi River had unemployment rates that were less than or equal to the U.S. rate of 5.8 percent during January through July 2004. The other 12 locations had unemployment rates that were greater than the U.S. rate. As figure 11 shows, 11 (46 percent) of the 24 BRAC-affected localities situated west of the Mississippi River had average annual real per capita income growth rates that were greater than the U.S. average growth rate of 2.2 percent during 1999 through 2001. The other 13 locations had rates that were below the U.S. average rate, of which 2 locations experienced a negative growth rate. As figure 12 shows, 19 (50 percent) of the 38 BRAC-affected localities situated east of the Mississippi River had average annual real per capita income growth rates that were greater than the U.S. average growth rate during 1999-2001. The other 19 locations had rates that were below the U.S. average rate, of which 1 had a negative growth rate. Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004. Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004. Military Base Closures: Better Planning Needed for Future Reserve Enclaves. GAO-03-723. Washington, D.C.: June 27, 2003. Military Base Closures: Progress in Completing Actions from Prior Realignments and Closures. GAO-02-433. Washington, D.C.: April 5, 2002. Military Base Closures: DOD’s Updated Net Savings Estimate Remains Substantial. GAO-01-971. Washington, D.C.: July 31, 2001. Military Bases: Status of Prior Base Realignment and Closure Rounds. GAO/NSIAD-99-36. Washington, D.C.: December 11, 1998. Military Bases: Review of DOD’s 1998 Report on Base Realignment and Closure. GAO/NSIAD-99-17. Washington, D.C.: November 13, 1998. Military Bases: Lessons Learned from Prior Base Closure Rounds. 
GAO/NSIAD-97-151. Washington, D.C.: July 25, 1997. Military Bases: Closure and Realignments Savings Are Significant, but Not Easily Quantified. GAO/NSIAD-96-67. Washington, D.C.: April 8, 1996. Military Bases: Analysis of DOD's 1995 Process and Recommendations for Closure and Realignment. GAO/NSIAD-95-133. Washington, D.C.: April 14, 1995. Military Bases: Analysis of DOD's Recommendations and Selection Process for Closures and Realignments. GAO/NSIAD-93-173. Washington, D.C.: April 15, 1993. Military Bases: Observations on the Analyses Supporting Proposed Closures and Realignments. GAO/NSIAD-91-224. Washington, D.C.: May 15, 1991. Military Bases: An Analysis of the Commission's Realignment and Closure Recommendations. GAO/NSIAD-90-42. Washington, D.C.: November 29, 1989. In addition to the individual named above, Nancy Benco, Paul Gvoth, Warren Lowman, Tom Mahalek, Dave Mayfield, Charles Perdue, Stephanie Stokes, and Dale Weinholt made key contributions to this report.
As the Department of Defense (DOD) prepares for the 2005 base realignment and closure (BRAC) round, questions continue to be raised about the transfer and environmental cleanup of unneeded property arising from the prior four BRAC rounds and their impact on cost and savings and on local economies. This report, which is being issued to the defense authorization committees that have oversight responsibility over defense infrastructure, describes DOD's progress in implementing prior BRAC postclosure actions. It addresses (1) the transfer of unneeded base property to other users, (2) the magnitude of the net savings accruing from the prior rounds, (3) estimated costs for environmental cleanup of BRAC property, and (4) the economic recovery of communities affected by base closures. As of September 30, 2004, DOD had transferred about 72 percent of 504,000 acres of unneeded BRAC property to other entities. This amount represents an increase over the 42 percent that GAO previously reported in April 2002 and is primarily attributable to two large property transfers. When leased acreage is added to the transferred property, the amount of unneeded BRAC property in reuse rises to 90 percent. Transfer of the remaining acreage has been delayed primarily because of environmental cleanup requirements. DOD data show that the department had generated an estimated $28.9 billion in net savings or cost avoidances from the prior BRAC rounds through fiscal year 2003 and expects to save about $7 billion each year thereafter. These savings reflect money that DOD would likely have spent to operate military bases had they remained open. Although the savings are substantial, GAO found that the estimates are imprecise because the military services have not updated them regularly despite GAO's prior reported concerns on this issue. This issue needs to be addressed in the 2005 round. Further, the estimates do not reflect all BRAC-related costs, such as $1.9 billion incurred by DOD and other federal agencies for redevelopment assistance. While estimated costs for environmental cleanup at BRAC sites remain within the range of prior estimates, these costs may increase if unknown or undetermined future cleanup liabilities, such as additional unexploded ordnance or other harmful contaminants, emerge. Through fiscal year 2003, DOD had spent about $8.3 billion on BRAC environmental cleanup. It expects to spend another $3.6 billion to complete the cleanup work. While most nearby communities have recovered or continue to recover from base closures, they, as well as other communities, have felt some impact from the recent economic downturn where the strength of the national, regional, or local economy can affect recovery efforts. Yet, key economic indicators--unemployment rates and average annual real per capita income growth rates--show that BRAC communities are generally faring well when compared with average U.S. rates. Of 62 communities that GAO studied, 69 percent had unemployment rates equal to or lower than the U.S. average and 48 percent had income growth rates higher than the national average.
The National Performance Review (NPR) was begun by the President in March 1993 and is a major management reform initiative by the administration under the direction of the Vice President. In September 1993, the Vice President published 384 NPR recommendations designed to make the government work better and cost less. We have commented on these recommendations and discussed their implementation in two previous reports. In an April 1993 letter, the Vice President asked the heads of federal departments and agencies “to pick a few places where we can immediately unshackle our workers so they can re-engineer their work processes to fully accomplish their missions—places where we can fully delegate authority and responsibility, replace regulations with incentives, and measure our success by customer satisfaction.” In response to the Vice President's request, dozens of federal agencies have established reinvention labs throughout the government. Although similar in some respects to pilot projects that have been used on numerous occasions in federal agencies to test new procedures, the reinvention lab concept originated at the Department of Defense (DOD) during the mid-1980s. DOD's model installation program was initiated by the then Deputy Assistant Secretary of Defense for Installations (DAS/DI). The program focused on reducing the amount of regulation governing administrative functions at certain military installations. Through this program, DOD identified hundreds of pages of regulations governing military installations that it believed did not make sense or wasted time and money. The DAS/DI waived as many DOD regulations as possible and allowed the base commanders to operate the installations in their own way. According to an NPR official, the program was enthusiastically supported by the installations, which began to improve not only administrative operations but also mission-related functions. The model installations program became so successful that DOD opened the program to all military installations in March 1986. In early 1993, the DAS/DI was appointed the Director of the overall NPR effort. According to an NPR official, the Director suggested to the Vice President that "reinvention labs" similar to the model installations be established within all federal agencies as part of the administration's governmentwide effort to improve government operations and save money. The NPR effort is headed by the Vice President, but the day-to-day operation of the effort is the responsibility of an NPR task force that comprises staff from various federal departments and agencies. The staff are assigned to the task force for a temporary period, usually 3 to 6 months. The total number of staff assigned to the task force has varied over time but has usually been between 40 and 60. About 10 of these staff have worked on the NPR task force since it was established in 1993, but even they technically remain employees of their home agencies. The NPR task force has attempted to advertise and promote the reinvention lab effort in a variety of ways. For example, the task force has sponsored or cosponsored several reinvention lab conferences (with another scheduled for March 25-27, 1996) and has periodically published information about the labs. It has also developed a lab database using information voluntarily submitted by the labs identifying their agencies, locations, contact persons, and other general information about the reinvention efforts.
However, consistent with its overall philosophy, the NPR task force has avoided control mechanisms and has consciously taken a "hands-off" approach to the development and oversight of the labs. NPR officials said it is up to each agency to decide whether it will have any labs and, if so, how they should be structured and operated. The NPR task force has not required agencies to notify it when labs are created or to report to NPR on their progress. In fact, the task force recommended that labs not be required to file progress reports with their agencies' management. Overall, agencies have been allowed to operate reinvention labs as they believe appropriate, without top-down control or interference from the task force. The task force views its role as encouraging federal agencies to establish reinvention labs and highlighting those labs that are "success stories" and that focus on customer service. The Office of Management and Budget (OMB) has played less of a role in the reinvention lab effort than the NPR task force. OMB has not been involved in the labs' designation or their oversight and does not collect or disseminate information about the labs. However, OMB officials said that OMB program examiners are generally aware of the existence of labs in the agencies for which the examiners have responsibility. OMB is responsible for providing management leadership across the executive branch and therefore can be important to the implementation of NPR management improvement ideas. In fact, OMB has already begun to play that role in some areas. For example, during the fiscal year 1996 budget cycle, OMB stressed agency downsizing plans and the use of performance information—key elements of the overall NPR effort—during its reviews of agencies' budget submissions. OMB itself was "reinvented" as part of the NPR effort when its budget analysis, management review, and policy development roles were integrated into a new structure designed to improve the decisionmaking process and the oversight of executive branch operations. After the Vice President's April 1993 letter, each federal agency was made responsible for designating organizational units, programs, or new or ongoing initiatives as reinvention labs. Although their comments in the intervening period provide some indication of what kinds of reinvention projects they envisioned, neither the Vice President nor the NPR task force has established specific criteria defining a lab. “We hope this process will involve not only the thousands of federal employees now at work on Reinvention Teams and in Reinvention Labs, but millions more who are not yet engaged. We hope it will transform the habits, culture, and performance of all federal organizations.” In October 1993, representatives from reinvention labs at a number of agencies attended a conference in Hunt Valley, MD, at which they discussed their ideas and experiences.
One of the key topics of discussion at the conference was, “What is a reinvention lab?” The conference proceedings stated that a lab “is a place that cuts through ’red tape,’ exceeds customer expectations, and unleashes innovations for improvement from its employees.” The proceedings listed five areas of consensus about the characteristics of a reinvention lab: (1) vision (continually improving value to customers); (2) leadership (unleashing the creativity and wisdom in everyone); (3) empowerment (providing employee teams with resources, mission, and accountability); (4) incentives (offering timely “carrots” for innovation and risk-taking); and (5) accountability (ensuring the customer is always right). The Vice President said that reinvention labs were doing the same things as the rest of the agencies, “only they’re doing them faster.” Several of the Vice President’s and NPR officials’ comments about the reinvention labs centered on the labs’ ability to avoid complying with regulations that could encumber their efforts. As noted previously, the Vice President told agencies in his April 1993 letter that regulations should be replaced with “incentives” in the labs. NPR officials also told the reinvention labs that they should be provided freedom from regulations. A number of the comments at the Hunt Valley conference focused on eliminating red tape and unnecessary regulations. Another recurring theme in the Vice President’s comments and NPR publications has been the need to communicate about lab results. At the Hunt Valley conference, the Vice President said that reinvention labs “will need to share what they learn and forge alliances for change.” A 1993 NPR report also voiced support for spreading reinvention ideas. Reinvention labs are but one of a number of efforts initiated in recent years by the administration or Congress to reform the operation of the federal government. Because these other reform efforts were being implemented at the same time that the reinvention labs were being initiated, they may have affected the labs’ development. For example, the Government Performance and Results Act (GPRA), enacted in August 1993, was designed to improve the effectiveness and efficiency of federal programs by establishing a system to set goals for program performance and to measure results. GPRA requires federal agencies to (1) establish 5-year strategic plans by September 30, 1997; (2) prepare annual plans setting performance goals beginning with fiscal year 1999; and (3) report annually on actual performance toward achieving those goals, beginning in March 2000. As a result of GPRA’s requirements, greater emphasis is to be placed on the results or outcomes of federal programs. OMB is responsible for leading the GPRA implementation effort and has designated more than 70 programs and agencies as pilots. As noted previously, the reinvention lab effort was initiated in 1993 at about the same time that the original NPR recommendations were being developed. As part of that effort, the 1993 NPR report said that the civilian, nonpostal workforce could be reduced by 252,000 positions during a 5-year period. The report said these cuts would be made possible by changes in agencies’ work processes and would bring the federal workforce to its lowest level since the mid-1960s. In 1994, Congress enacted the Federal Workforce Restructuring Act, which mandated an even greater 5-year workforce reduction of 272,900. 
The September 1995 NPR status report estimated that more than 160,000 jobs had already been eliminated from the federal government. In December 1994, the administration launched a second phase of the NPR effort, referred to as NPR II. One aspect of NPR II was an agency-restructuring initiative in which the Vice President asked the heads of each agency to reexamine all of their agencies’ functions and determine what functions could be eliminated, privatized, devolved to state or local governments, or implemented in a different way. The agencies developed a total of 186 agency-restructuring recommendations, which were aggregated and published in the September 1995 NPR status report. For example, the Department of Housing and Urban Development (HUD) proposed consolidating 60 grant programs into 3, giving greater flexibility to governors and mayors.

There have also been several recent congressional proposals to reform the federal government. For example, in May 1995, the Senate Committee on Governmental Affairs held hearings on proposals for the elimination of the Departments of Commerce, Housing and Urban Development, Energy, and Education. In February 1995, the House Committee on Economic and Educational Opportunities proposed merging the Departments of Education and Labor and the Equal Employment Opportunity Commission into a single department. There has also been a proposal to combine elements of the Departments of Commerce and Energy with the Environmental Protection Agency and other independent agencies to create a Department of Science.

Although reinventing government and the NPR effort have been frequently discussed in the professional literature, relatively little has been written about reinvention labs. In the Brookings Institution’s Inside the Reinvention Machine: Appraising Governmental Reform, one author briefly mentioned several agencies’ labs and said they were but one component in the agencies’ reinvention efforts. She also said the labs frequently were “bottom-up” reform processes, sending a message to the staff that “we’re all in this together.” Another author in this volume said that the labs “represent exciting innovations in the federal government” and that they were generating “an impressive amount of fresh ideas and information about how government workers can do their jobs better.” However, he also noted that there had been no systematic survey of what the labs had accomplished. An article exclusively about reinvention labs described the lab effort as being a struggle between advocates for change and those individuals with power within the agencies. The author described labs at several agencies (e.g., the Departments of Agriculture and Education and the General Services Administration), noting that in some cases entire agencies had become labs (e.g., the Agency for International Development and the Federal Emergency Management Agency). Other articles have briefly discussed the activities of a few reinvention labs, but no research efforts have systematically collected information about all of the labs.

We initiated this review of the reinvention labs as part of our ongoing body of work examining NPR issues. The objectives of this review were to determine (1) the focus and developmental status of the labs, (2) the factors that hindered or assisted the development of the labs, (3) whether the labs were collecting performance data, and (4) whether the labs had achieved any results.
We addressed all of these objectives by conducting telephone and fax surveys of all of the reinvention labs. However, to design and conduct the surveys, we had to obtain preliminary information from the NPR task force, agencies, and some of the labs themselves. We obtained information from the NPR task force’s database about the labs’ locations, their developmental status, subject areas covered, and a contact person at each of the lab sites. As of February 1995, NPR’s database indicated that there were 172 labs. However, NPR’s database did not include some labs and double-counted others. After contacting officials responsible for the labs in each of the agencies that the task force reported had ongoing efforts, we concluded that 185 labs were active as of early 1995.

The NPR task force told us that the regional labs were further along in the implementation process than the labs in the Washington, D.C., area. Therefore, we conducted structured telephone interviews of the regional labs in the summer of 1994 to obtain information on their status, the type of procedure or process being reinvented, and any results the labs had produced. Using the information obtained from these contacts, we selected 12 labs to visit on the basis of two criteria: (1) labs that represented a variety of procedures or processes being reinvented (e.g., procurement, personnel, financial management, or general operations) and (2) labs that had generally progressed to at least the planning stage. We visited each of these 12 labs and obtained detailed information concerning each of our objectives. We developed case studies on each of the 12 labs and subsequently sent them to both the lab officials from whom we gathered the data and the agencies’ headquarters for their review and comment. Their comments were incorporated into the final version of the case studies. (For a list of these labs, see app. I. See apps. II through XIII for the full case studies.)

We then conducted two surveys of all 185 of the labs—first a telephone survey and then a fax survey—and received responses from 181 of the labs (98 percent). The telephone survey was primarily designed to obtain a general description and overview of the labs’ operations. We sent the second survey to the respondents by fax after the completion of the telephone survey. If a lab focused on more than one area for reinvention (i.e., the lab was engaged in multiple lines of effort), we asked the respondent to focus his or her answers to the fax survey on the lab’s primary line of effort. (See app. I for a list of the labs by agency and subject category.) The fax survey consisted primarily of structured multiple-choice items that focused on each of our objectives. (See app. XIV for copies of the telephone and fax surveys.) Questions focused on such issues as the lab’s developmental status and the nature and extent of performance data being collected. We also asked questions about a number of factors that could affect the labs’ development—e.g., waivers from certain regulations, communication with other labs and the NPR task force, and agency management support. On the basis of comments made by lab officials during our site visits, we selected these factors for specific follow-up in the survey phase of our work. They may not cover all possible factors affecting lab development. We did not independently verify the information we received from any of the information sources—the NPR task force, the site visits, the telephone survey, or the fax survey.
For example, if a survey respondent said that his or her lab had collected performance data or had communicated with other labs, we did not assess those data or check with the other labs. However, we did collect some relevant documents or data regarding these issues during our site visits to the 12 labs. We conducted our work between June 1994 and August 1995 in accordance with generally accepted government auditing standards. The telephone and fax surveys were administered between April and July 1995, so the survey data are as of those dates.

Although we attempted to survey all of the reinvention labs in the federal government, we cannot be sure that the 185 labs we contacted included all agencies’ labs. Others may have been active at the time of our survey, but we were not aware of them because there was no specific definition of a reinvention lab, because the NPR task force did not keep an accurate record of the number of operating labs, or because we were denied access to agency officials. In one instance, we were unable to verify the existence of a lab that appeared on NPR’s list as being at the Central Intelligence Agency (CIA) because a CIA official said that it was the agency’s standard policy to deny GAO access to CIA reinvention activities. Also, other labs may have been developed since the survey was conducted.

We submitted a draft of each case study to the relevant lab and agency headquarters officials for their review and have incorporated their comments into the final version of each appendix. On December 27, 1995, we submitted a draft of this report to the Vice President (as head of the NPR effort) and to the Director of OMB for their review and comment. Their comments are described at the end of chapter 5.

In the reinvention labs, agencies were supposed to experiment with new ways of doing business, and the NPR task force purposely gave agencies wide latitude in how the labs could be structured and what topics they could address. Agencies were also free to build on existing management reform efforts or to start their reinvention labs from scratch. Aside from the general parameters of customer service and employee empowerment, few restrictions were placed on the labs’ initiation or development.

Federal agencies responded to the Vice President’s call for the creation of reinvention labs in earnest. Labs were designated in dozens of agencies and in virtually every region of the country. Our survey indicated that the labs varied widely in terms of their origin, their stage of development at the time of the survey, the number of reinvention efforts addressed by each lab, and the subject areas covered by the labs. Also, although many of the labs shared a common customer service focus, they differed in how they defined their customers. Finally, the survey indicated that a number of the labs’ efforts actually began before the NPR effort was initiated.

As table 2.1 shows, the 185 reinvention labs that had been designated at the time of our survey were spread across 26 federal departments, agencies, and other federal entities. DOD had the most labs (54), followed by the Department of the Interior (DOI) (28). The number of labs in each agency was not always related to its size. Some large agencies had relatively few labs (e.g., the Department of Veterans Affairs), while some comparatively small agencies, such as the General Services Administration (GSA), had initiated a number of labs.
Some agencies that serve the public directly and that had been the subject of both the 1993 and 1995 NPR recommendations had not started any labs at the time of the survey (e.g., the Small Business Administration).

Figure 2.1 and table 2.2 show the number of reinvention labs at the time of our survey within each standard federal region. As the figure illustrates, labs had been established in virtually every federal region, but the mid-Atlantic region (region 3) had over two-thirds of the labs. Most of these labs were located in the Washington, D.C., area, but some affected operations in other areas. Relatively few labs were located in the northeast (regions 1 and 2) or the northwest (region 10). Some of the labs were operated in multiple locations within a single region. For example, one HUD lab effort had several sites that included HUD’s offices in Chicago, Milwaukee, and Cleveland. (See app. VIII for a discussion of this lab.) Other labs had multiple sites located in different standard federal regions. For example, GSA’s Federal Supply Service lab was headquartered in New York City (region 2), but some aspects of the lab were being implemented in Boston (region 1). (See app. VI for a discussion of this lab.)

We asked the survey respondents why their labs were initiated, allowing them to designate more than one closed-ended response category and/or add additional reasons. They indicated that the reinvention efforts were generally focused and uncoerced. As shown in figure 2.2, nearly two-thirds of the respondents said that they were trying to address a specific problem, and over half indicated that they volunteered to become a lab. Only 13 percent of the respondents reported that they were told to pursue their labs by agency officials. Forty percent said their labs were an outgrowth of quality improvement efforts in their agencies.

We also asked the respondents when their labs’ efforts actually began, regardless of when the labs were officially designated as labs. The lab start dates varied widely, ranging from as early as 1984 to as recently as March 1995—1 month before the start of our survey. About one-third of the respondents indicated that their labs’ efforts began before the announcement of the NPR effort in March 1993. The early beginning of so many lab efforts is not surprising given that 40 percent of the respondents said that their labs originated in their agencies’ quality improvement efforts—efforts that started in some federal agencies in the early 1990s. For example, lab officials at the sites we visited told us the following:

• GSA’s reinvention labs in two regional offices originated with the offices’ quality assurance programs that began in 1988 and 1989. (See app. VI and app. VII.)
• The Internal Revenue Service’s (IRS) reinvention lab in Helena, MT, began as a joint quality improvement process launched in 1988 by IRS and the National Treasury Employees Union. (See app. XI.)
• The United States Department of Agriculture’s (USDA) lab on baggage inspection operations in Miami started in 1989 as an effort to improve productivity as staff resources declined and the workload increased. (See app. II.)
• DOI’s efforts to improve information dissemination at the U.S. Geological Survey began in 1986 when it attempted to establish a more efficient and responsive order entry, inventory control, and distribution system. (See app. X.)
Officials from 14 of the labs we surveyed said that they sought lab designations for existing management improvement efforts because the officials thought such designations would give them more latitude to make changes and provide greater visibility for their efforts. For example, one of the survey respondents said that reinvention lab designation provided the lab team with the momentum needed to overcome common barriers to change. During one of the site visits, an official from HUD’s lab on reinventing the field operations of the Office of Public and Indian Housing said that before its lab designation “we could not get in the door at headquarters.” However, he said that after the lab’s designation “the waters parted” and that headquarters officials became interested in the new oversight approach. (See app. VIII for a discussion of this lab.) Other respondents said that being designated as a reinvention lab provided the mechanism by which they could seek waivers from cumbersome rules and regulations that had been an impediment to previous management reform efforts.

The 1993 NPR report called for a new customer service contract with the American people—a new guarantee of effective, efficient, and responsive government. The report also stated that federal agencies were to provide customer service equal to the best in business. In his April 1993 letter calling for the creation of reinvention labs, the Vice President said the labs were to measure their success by customer satisfaction. Consistent with this goal, 99 percent of our survey respondents said that customer service improvement was a primary goal of their labs to at least “some extent”; 93 percent of the respondents said this was true to a “great” or “very great” extent. (See ch. 4 for information on the labs’ collection of performance data.)

The survey respondents frequently indicated that the changes that were occurring in their reinvention labs represented a substantially different mode of operation, not simply a minor change in procedures. Over 65 percent of the respondents said that their reinvention labs involved changing the way staff in their agencies did their work to a “great” or “very great” extent. Over 20 percent said that changes in work processes occurred to a “moderate” or “some” extent. Lab officials reported the following examples:

• The Defense Logistics Agency’s (DLA) lab on inventory management made significant changes in its work processes and staff roles. DLA officials said they shifted from acting as a wholesaler who buys, stores, and sells inventory to acting as a broker who obtains the most efficient and effective military support for its customers through any appropriate mechanism—including the use of private-sector vendors to store and distribute inventories. (See app. IV.)
• The U.S. Geological Survey’s information dissemination lab improved internal communications and job processes by combining the organizational unit that took map purchasing orders with the unit that filled the orders and by cross-training staff. (See app. X.)
• GSA’s mid-Atlantic regionwide lab improved customer service in the region’s Public Buildings Service office by shifting staff from working as teams of specialists responsible for moving projects through their segments of a work process to working as multidisciplinary teams made up of specialists responsible for processing one project. (See app. VII.)
About two-thirds of the respondents who said that their labs were involved in changing the way staff did their work indicated that the changes improved customer service to a “great” or “very great” extent. However, only 20 percent of the respondents indicated that these changes required substantial alterations in their agencies’ personnel systems.

The labs’ definitions of their customers varied. Given the opportunity to choose more than one response category, the respondents described their labs’ customers as the general public; their agencies’ constituencies; another government organization (e.g., federal, state, or local); and/or other offices within their own agencies. Almost two-thirds of the respondents said their labs’ customers were both internal and external to the government. For example, officials in HUD’s lab on reinventing the field operations of the Office of Public and Indian Housing said that their lab’s customers included the residents of the public housing units and the local governments’ public housing authorities who operated the housing units. (See app. VIII.) Overall, the two most frequently selected response categories for customers were “another government organization” and “other offices within the lab’s agency”; 18 percent of the respondents said that these were their labs’ only customers. For example, the Department of Commerce’s reinvention lab in Boulder, CO, defined its customers as the scientists and engineers working within the department’s scientific laboratories. (See app. III.)

We asked the survey respondents to characterize their labs’ stage of development in one of the following categories: (1) planning stage (no implementation begun), (2) implementation begun but not completed at the lab site, (3) implemented at the lab site only, (4) implemented at the lab site and planning or implementation begun at other sites, (5) implemented at the lab site and at other sites, or (6) other. As figure 2.3 shows, the respondents were about equally divided between those who said that their labs had been at least implemented at the lab site (responses 3 through 5, 49 percent) and those whose labs had not reached that stage of development (responses 1 and 2, 49 percent). The most common single response (35 percent) was “implementation begun but not completed.”

We also asked the respondents whether their labs were focused on a single effort or multiple lines of effort. Nearly two-thirds (63 percent) of the respondents said that their reinvention labs had only one line of effort. As figure 2.4 shows, DOD labs reported they were much more likely to have multiple lines of effort (58 percent) than were civilian labs (29 percent). A line of effort is not the same as a subject category. For example, a lab with only one line of effort can address a variety of subjects, including personnel management, procurement, information technology, and financial management. Nearly three-fourths of the survey respondents indicated that their labs were focused on more than one subject area. The most commonly cited subject area was operations (72 percent), followed by information technology (60 percent), personnel (45 percent), procurement (45 percent), and financial management (39 percent).
Examples of these subject areas include the following:

• In an operations lab, USDA officials examined ways to improve the operation of their airport baggage inspection program by permitting more self-direction by employees and allowing them to identify ways to improve procedures. (See app. II.)
• An information technology lab explored the use of electronic media, such as the Internet, E-mail servers, fax on demand, and the World Wide Web, to disseminate information on the latest medical research from sources around the world.
• A procurement lab established teams of customers, contractors, and contract administration officials to identify areas for process improvements. The lab was also trying to develop a “risk management” approach to contract administration in which the lab’s level of contractor oversight would be linked to an assessment of the contractor’s performance.

In addition to the traditional subject area categories previously mentioned, analysis of survey respondents’ comments in the survey and during our site visits indicated three crosscutting areas of interest: (1) marketing services and expertise; (2) using electronic commerce (EC) and electronic data interchange (EDI) to improve operations, such as procurement and benefit transfers; and (3) developing partnerships with other levels of government, the private sector, and customers. (See app. I for a complete list of these reinvention labs.)

The 1993 NPR report advocated creating competition between in-house agency support services and what it termed “support service enterprises”—federal agencies that offer their expertise to other agencies for a fee. Officials from 20 reinvention labs said that their labs were planning or implementing these kinds of reforms, using marketing techniques to expand their customer base. Examples of marketing services include the following:

• Two of the labs were department training centers that were attempting to become self-sufficient by charging fees for their services. In addition to marketing their training courses, officials from both centers said they were contracting with other agencies to provide consulting services.
• One respondent said that his lab was experimenting with franchising its contracting services to civilian agencies. Lab officials developed a standard rate to be charged for their services and had signed agreements with other agencies to provide those services.
• One respondent said that his lab had successfully marketed its organic waste disposal services to other federal, state, and local agencies. He also said that the lab generated additional income by recycling these wastes for resale as compost.

One DOD official said that existing statutes had prevented his lab from marketing its duplicating services to non-DOD agencies. He said Congress requires federal agencies to contract printing and duplicating to the private sector via the Government Printing Office (GPO), which applies a surcharge. However, he said that one of our recent reports noted that some of the agency’s in-house duplicating services were about 57 percent cheaper than GPO’s prices.

The 1993 NPR report recommended that federal agencies adopt EC and EDI techniques that the private sector had been using for some time because, NPR said, they can save money. Respondents for 38 labs said that their labs were in the process of implementing EC and EDI systems to enable them to easily transfer information on financial and procurement transactions and on client services and benefits.
For example, DLA officials said the agency was using EC and EDI to develop a paperless, automated system for critical documents in the contracting process, including delivery orders, requests for quotations, bid responses, and awards. They said that this system would ultimately provide a standard link among DLA, its customers, and suppliers in the private sector. (See app. IV.)

At the time of our survey, 54 labs reported attempting to develop partnerships with other levels of government, labor organizations, contractors, and/or their customers. Several of these partnership efforts focused solely on intra- or intergovernmental relations. For example, one official said his lab was working with other federal agencies and state and local government agencies to design an ecosystem management strategy. Another lab was focused on developing an automated prisoner processing system for use by five federal law enforcement entities. Officials from 16 other labs also said that their labs were developing partnerships with contractors, academia, or the private sector. For example, at the Department of Energy’s (DOE) Hanford reinvention lab, the department entered into an agreement allowing a private company to disassemble and use excess equipment, saving the government $2.6 million in disposal costs. In another lab, agency officials and contractors formed teams to rework contracting processes and shift oversight from an adversarial position to a team approach so that both the agency and its contractors could lower oversight costs. Nine respondents said that their labs were establishing partnerships with employee unions. For example, officials at the Commerce Department’s Boulder reinvention lab said that their efforts had built a strong union-management relationship by changing the rigid work environment so that skilled workers would be able to work together as teams and supervisors could perform more as coaches than as managers.

Reinvention labs were intended to be agents of change in the federal government. As such, they have faced many of the same challenges as other change agents—eliminating rules that stand in the way of progress, ensuring top management support, communicating with others attempting similar changes, and coping with cultural resistance. However, the reinvention labs also faced some particularly difficult challenges, such as attempting to initiate new ideas or new work processes while their organizations were shrinking and while other management reform efforts were being implemented.

We asked the survey respondents to provide information on a variety of factors that could have hindered or helped the development of the labs, and some of the results were contrary to our initial expectations. For example, many of the lab officials said they had not sought waivers from regulations, even in labs that were fully implemented at the lab site. Few reported substantial communication with other labs or with the NPR task force. However, over 80 percent enjoyed top management support. Analysis of the survey responses also indicated other factors that the respondents said affected the development of their labs.

One of the NPR effort’s recurring themes is that regulations and red tape stifle the creativity and ability of federal workers to solve problems and improve service to the public.
At the Hunt Valley reinvention lab conference in October 1993, NPR officials encouraged the labs to request waivers from requirements imposed on them “which are barriers to reinvention.” The Vice President said that he was looking to the reinvention labs to identify “barriers that stand in the way of getting the job done in the right way” and to “drive out rules and regulations that just don’t make sense anymore.” A September 1993 NPR report noted that carefully crafted waiver requests and prompt review of these requests can be “experiments for government’s reinvention.”

Regulations can come from a variety of sources. Some regulations are promulgated by central management agencies—e.g., OMB, GSA, or the Office of Personnel Management (OPM)—and apply to all or virtually all federal agencies. Other regulations are issued by line agencies and apply only to the issuing agency. In the reinvention lab effort, the entity that establishes a regulation is to receive and rule on any waiver requests.

Although they were encouraged to seek regulatory waivers, 60 percent of the survey respondents who answered the question said that their labs had not sought such waivers. Of these respondents, about half said that they had considered seeking a waiver but did not do so; the other half said they had not even considered seeking one. When asked why their labs did not seek waivers, the respondents most commonly indicated that waivers were not needed to accomplish their labs’ goals (54 percent) or that it was too early in the reinvention process to seek waivers (30 percent). (Respondents were allowed to select more than one response category to this question.)

The relationship between the labs’ stage of development and their propensity to seek waivers was supported by other data in the survey. As figure 3.1 shows, labs that were at least fully implemented at the lab site were almost twice as likely to have requested a waiver as labs that had not reached that stage of development. However, nearly half of the fully implemented labs had not sought any regulatory waivers at the time of the survey. Over two-thirds of the respondents for the fully implemented labs that had not sought a waiver said that a specific waiver was not needed to accomplish their labs’ goals, and they cited a variety of reasons. For example:

• In some labs, the agencies reported that constraints on pre-lab operations were nonregulatory and that removal of the constraints did not require a waiver. For example, officials from one reinvention lab planned to request a general waiver from using GSA’s supply schedule to enable the site’s supply room to seek the best value for each product it provides. According to an official, this request was dropped because lab officials discovered that procurement rules allowed agencies to ignore the supply schedule if a local source could provide the product at a lower price.
• In other labs, a blanket waiver of internal regulations, or a delegation of authority, provided by agency headquarters eliminated the need for individual waiver requests. In blanket waivers, agency headquarters typically granted labs the authority to make their own decisions on which agency-specific rules to eliminate without asking for prior permission. For example, GSA gave the Mid-Atlantic Regional Administrator a blanket waiver from nonstatutory internal rules and regulations that might hinder the development of the region’s lab. (See app. VII.)
• In another lab, officials told us that passage of the Federal Acquisition Streamlining Act removed the legislative barriers to the lab’s reform efforts. Therefore, lab officials said they did not need to go forward with their proposals to waive contracting rules and regulations.

The survey respondents indicated that their labs had requested nearly 1,000 waivers from regulatory requirements. Some respondents said their labs had requested only one waiver, but other labs reported requesting dozens of waivers. The respondents also indicated that their labs’ waiver requests involved regulations in a range of subject areas. One-third of all the waivers requested involved agency work process rules or regulations, with the remaining two-thirds about equally divided among personnel rules, procurement rules, and other rules. Examples of agency work process regulations include the following:

• Officials from GSA’s office products lab requested a waiver from an agency work process regulation requiring the use of a certain quality assurance technique so that they could replace it with another, reportedly better, technique. (See app. VI.)
• The reinvention teams at the U.S. Bureau of Mines’ reinvention lab proposed 21 changes to departmental procedures, such as altering the review process for computer equipment acquisition, removing restrictions on the use of local attorneys to process patent paperwork, and eliminating one level of supervision within the lab’s research center. (See app. IX.)
• Contracting officials from the Department of Veterans Affairs’ (VA) reinvention lab in Milwaukee requested nine waivers from both departmental regulations and the governmentwide Federal Acquisition Regulation (FAR). Eight of these waivers were pending at the time of our review, including an authorization to remove annual contracts from the current fiscal year cycle and to permit the lab to participate with private-sector purchasing groups in best value purchasing. (See app. XII.)

As shown in figure 3.2, over half of the waivers the labs sought were reported to be from agency-specific rules issued by the respondent’s own agency, and nearly one-third of the requested waivers were from governmentwide rules issued by central management agencies. The respondents said the remaining 16 percent of the waiver requests focused on rules from other sources (e.g., executive memorandum), or the respondents were unsure of the source of the regulation from which the waiver was requested.

The survey respondents frequently said that it was difficult to obtain waivers from both governmentwide and agency-specific regulations, but they indicated that waivers of governmentwide rules issued by central management agencies, such as GSA, OMB, or OPM, were the most difficult to obtain. More than three-fourths of the respondents who offered an opinion said it was difficult to obtain a waiver from governmentwide rules, with nearly twice as many choosing the “very difficult” response category as the “somewhat difficult” category. Only 7 percent of the respondents said it was “easy” to obtain waivers from governmentwide rules. In contrast, 50 percent of the respondents who sought a waiver from rules issued by their own agencies said such waivers were “difficult” to obtain.
Most of these respondents said obtaining agency-specific waivers was only “somewhat difficult,” and 31 percent of those who sought such waivers said it was “easy.” The difficulty survey respondents reported in obtaining waivers from governmentwide regulations was also reflected in waiver approval rates. As shown in figure 3.3, lab officials said that over 60 percent of their labs’ requests for waivers from agency-specific rules had been approved at the time of our survey, compared with only about 30 percent of the requests for waivers from governmentwide regulations.

Lab officials also reported other types of problems when they requested regulatory waivers. For example, officials from the Pittsburgh Research Center lab in the U.S. Bureau of Mines said the lab team spent a substantial amount of time concentrating on waiver requests that were beyond the scope anticipated by NPR officials. The lab team said they were not clearly warned by DOI management that “overturning statutes was off-limits” when requesting waivers. (See app. IX.) Officials from three different reinvention labs said that they found it difficult to use the waiver authority that their agencies’ headquarters had delegated to them. For example, officials from these labs said that they had to obtain approval from legal counsels to use that authority and that getting this approval proved to be just as time-consuming as it would have been to get a specific waiver from headquarters. Officials from the Commerce Department’s Boulder reinvention lab said that they tried to use their waiver authority to develop alternative procedures to abolish three staff positions. In keeping with one of the lab’s areas of emphasis, building management and labor partnerships, field managers worked with the local union president to develop an alternative procedure that was less disruptive than the traditional one. However, one lab official said that even though the lab had been given authority to deviate from procedures, headquarters officials required extensive documentation and heavily reviewed the proposal. The lab official said as many as 19 headquarters officials were involved in reviewing and approving every aspect of these procedural changes. (See app. III.)

Top management support is crucial to the successful management of changes within organizations, particularly changes of the magnitude envisioned by the Vice President. Top management can provide needed resources and remove barriers that may stand in the way of organizational changes. On the other hand, managers can also negatively affect changes by withholding needed resources and erecting barriers that effectively prevent changes from occurring.

Eighty-three percent of the survey respondents who expressed an opinion said top management in their agencies (i.e., the Office of the Secretary/Agency Head) was supportive of their reinvention labs, and 77 percent said that upper-level career managers were also supportive. In some cases, lab officials said that top management was the leading force behind the reinvention labs. For example, staff developing DOI’s U.S. Geological Survey lab said their lab proposal was approved by headquarters because of the active support of the department’s leadership. (See app. X.) DLA officials said that their top management pushed for a total overhaul of the agency before the start of the NPR effort and that the reinvention labs provided a vehicle for enhancing the visibility of these reforms. (See app. IV.)
An official from IRS’ reinvention lab said that IRS management expressed its support for that lab by approving a memorandum of understanding between the lab and its regional office. Included in the memorandum was a commitment from the regional commissioner to provide oversight and program support to the lab, to reduce the reporting requirements on front-line managers, and to offer assistance in implementing the reinvention ideas. (See app. XI.)

However, in a few cases, labs reported that they were adversely affected by a lack of top management support or attention. For example, one lab official said his lab initially had a high-level supporter in headquarters who could get waivers and delegations of decisionmaking authority approved. However, he said that when the lab lost this supporter, other headquarters officials began to actively resist the lab’s efforts, and some even engaged in what he termed “pay-back.” Another survey respondent said managers in his agency were inattentive to the agency’s lab. The respondent also reported that management was unconcerned about the lab’s progress; did not provide needed resources (e.g., relieving the reinvention team of their usual duties); and did not direct field offices to participate in the lab.

Survey respondents also related examples of resistance to their reinvention efforts from nonmanagerial staff in headquarters. One respondent said that the lab was set up in such a manner that staff members at headquarters, who he said were threatened by the lab’s goals, could obstruct its progress. Another respondent said that staff at her facility had been “frustrated with the NPR experience” and questioned the point of the labs. She said that the lab staff had submitted a proposal to their headquarters that would have allowed them to buy fuel oil from a local supplier at a cheaper price than from their in-house supplier. The headquarters staff sought feedback on the idea from their in-house supplier, who naturally objected to the proposal. On the basis of this response, the headquarters staff denied the request.

“We will transform the federal government only if our actions—and the Reinvention Teams and Labs now in place in every department—succeed in planting a seed. That seed will sprout only if we create a process of ongoing change that branches outward from the work we have already done.”

If the reinvention labs are to “plant seeds” for organizational change, communication of information about what they have tried and how it has worked is essential. Therefore, we asked lab officials about communication with other reinvention labs and with the NPR task force. The respondents who offered an opinion indicated that substantial communication among labs or between the labs and the NPR task force was relatively rare. Only 11 percent of the respondents said that their labs had communicated with other labs to a “great” or “very great” extent, and only 18 percent reported that level of communication between their labs and the NPR task force. Twenty-three percent of the respondents said they had communicated to a “moderate” extent with other labs and with the NPR task force; the stage of lab development had little effect on their responses. Officials in fully implemented labs were no more likely to have communicated with their colleagues in other labs or with NPR staff than officials in labs that had not gotten to that stage of development.
Nevertheless, over 70 percent of the respondents who said they had at least some communication with other labs said it was helpful to the development of their labs. About 68 percent of the respondents reporting that level of communication with NPR staff said it was helpful. For example, one respondent said that DOD held a reinvention lab conference in March 1995 that allowed the agency’s labs to share experiences and exchange ideas. According to lab officials from DOE’s Hanford site reinvention lab, NPR staff assisted them in seeking a waiver enabling DOE to privatize some laboratory services. (See app. V.)

There were clear differences in the responses in this area between DOD lab officials and respondents from the other labs. Whereas over two-thirds of the DOD respondents said that they had at least some communication with other labs, only half of the non-DOD labs indicated this level of lab-to-lab communication. Similarly, DOD lab officials were much more likely to report that this communication had aided in the development of their labs (83 percent) than respondents from other agencies (59 percent). Interestingly, DOD and non-DOD labs did not differ in the degree to which they communicated with the NPR task force (62 percent for both groups) or the extent to which they believed that the communication had assisted in their labs’ development (62 percent for DOD labs versus 60 percent for non-DOD labs).

As noted in chapter 1, many of the reinvention labs were initiated or were being implemented at a time when federal agencies were being reduced in size. The September 1995 NPR report estimated that at least 160,000 positions had been eliminated from the federal workforce since early 1993. Because they were operating in this environment, we asked the survey respondents whether agency downsizing had a positive, negative, or other effect on their reinvention labs. (The respondents were allowed to check multiple categories.)

About 44 percent of the respondents reported that downsizing had a positive effect on their labs, but about 53 percent reported that downsizing had a negative effect. The respondents mentioned such negative effects of downsizing as slower implementation of lab efforts; loss of corporate memory; and morale problems (e.g., fear, stress, and uncertainty) that resulted in less interest in and support of management reforms and less risk-taking. In addition, some respondents said that downsizing had jeopardized their labs’ ability to achieve desired outcomes and raised concerns that decreasing manpower, coupled with the same or increasing work requirements, would reduce the amount of time respondents had available to focus on lab activities.

The respondents who said downsizing had a positive effect on their labs commonly indicated that it was a catalyst for real change in their agencies. Several of the respondents noted that downsizing forced management and staff to rethink agency operations, support reforms, adopt NPR efforts and labs, and work more collaboratively. A few of these respondents also noted that downsizing led to greater innovation and creativity. Five other respondents said that their labs benefited from the downsizing of other agencies. For example, one lab reported that reductions in other agencies’ contract administration staff increased interest in the contract administration services that the lab was marketing. Thirty-three percent of the respondents reported both positive and negative effects from agency downsizing.
For example, one respondent said that although downsizing had forced staff to consider radical changes that would have otherwise been rejected, it had also reduced the amount of staff, time, and resources available for concentrating on making these improvements.

We also asked the survey respondents what effect, if any, the implementation of GPRA and the agency restructuring initiative in the second phase of the NPR effort (NPR II) had on their reinvention labs. Compared with their views on downsizing, the respondents were less clear about the effects of GPRA implementation and NPR II’s restructuring on their labs. They were more likely to say that they did not know the effects of GPRA or NPR II on their labs, perhaps because these reforms had not been fully implemented at the time of our survey.

However, the survey respondents were much more likely to indicate that GPRA had a positive effect on the development of their labs (33 percent) than a negative effect (6 percent). For example, they said that GPRA

• complemented and reinforced their labs’ ongoing reinvention efforts;
• promoted the development of performance measures and results-based management systems that were a part of their labs’ goals;
• forced their organizations to focus on performance and on redefining mission, corporate goals, and objectives;
• compelled management to think about how to integrate various pieces of management reform legislation, such as the Federal Managers’ Financial Integrity Act of 1982 and the Chief Financial Officers Act of 1990, with the reinvention labs; and
• provided a driving force for interest in, and design of, a new operations evaluation process for the lab.

At least one of the labs was also participating in a GPRA pilot program. As a pilot site, VA’s New York Regional Office’s claims processing lab developed a new system of measures, including one that VA officials said enabled teams to determine how productive they were by comparing the dollar value of the claims they processed to the relative salary of the team. (See app. XIII.) Officials from six labs said that developing performance measures and complying with GPRA requirements were integral parts of their reinvention efforts. Labs’ performance-based reform initiatives included (1) developing GPRA performance measures and defining a matrix program of performance-based management techniques, (2) building GPRA requirements into the lab’s strategic planning effort, and (3) integrating planning and performance measurement requirements into a standard agencywide system. However, two survey respondents said that the implementation of GPRA had little effect on their labs because they were already developing and using performance measures.

Less than 6 percent of the respondents said that GPRA had a negative effect on their reinvention labs. These respondents typically said that GPRA was perceived as “busy work” or as having increased the staff’s workload. In contrast to the respondents’ comments on GPRA, the proportion of positive and negative responses about NPR II restructuring was relatively close—31 and 24 percent, respectively. One respondent said that agency restructuring had resulted in greater cooperation between his lab and OPM on personnel issues. Another respondent said that restructuring provided the framework to take the lab initiative to the next level of improvement. Yet another respondent said that officials at his lab viewed NPR II restructuring as basically a budget exercise.
In their comments, the survey respondents also mentioned three other barriers to the development of their reinvention labs—lack of interagency coordination, existing legislation, and organizational culture. Several respondents provided examples of the difficulties they experienced in undertaking management reforms that crossed agency boundaries, even when those agencies were within the same department. Other respondents said that existing statutory requirements, which would require an act of Congress to change, had hindered their labs’ performance. Still other survey respondents said that implementation of the reforms in the lab required changing the organizational culture within their agencies—that is, the underlying assumptions, beliefs, values, practices, and expectations of employees and managers.

Many governmental functions are performed by more than one agency or level of government. In some cases, the federal government is addressing very broad issues, such as environmental degradation or the need for job training, that fall within the missions of several agencies. Therefore, similar programs have been established in different federal agencies. Other federal programs require the cooperation of state and local governments. Federal agencies also have similar administrative responsibilities (e.g., personnel, procurement, and contracting) that require the provision of resources in each agency to fulfill those functions. In all of these areas, opportunities exist for greater cooperation and sharing of resources. As noted in chapter 2, at the time of our survey, 54 labs were attempting to develop partnerships with other levels of government, labor organizations, contractors, and/or customers. Other labs were attempting to consolidate activities among different federal organizations.

The survey respondents provided several examples of the difficulties involved in enacting management reforms across agency boundaries. For example, one respondent said that statutes requiring the use of different contracting procedures in different agencies were a significant barrier to his lab’s goal of consolidating multiagency programs. The respondent said that one agency had to use competition when awarding contracts, while other agencies were required to set aside a percentage of contract awards for minority contractors. Officials at the Commerce Department’s Boulder reinvention lab said that they established a multiagency team to address the issue of funding for administrative services. However, they said the team was ultimately disbanded because it could not reach consensus on proposed funding alternatives. According to one lab official, the team lacked the authority needed to push a proposal forward. (See app. III.) Other difficulties that the lab officials described in such multiagency efforts included (1) nonparticipation in or withdrawal from the lab by some relevant agencies, (2) resistance from top management at one or more of the agencies, and (3) failure by some agencies to send staff to NPR-related training courses.

Some of the survey respondents said certain statutory requirements had a negative effect on their labs. For example, some respondents mentioned federal contracting laws as a constraint on reinvention labs. In one case, a lab official said it was difficult to determine the extent of the lab’s authority to reform contracting procedures because of the myriad of different contracting statutes.
Another respondent noted that the FAR was designed to prevent close relationships from developing between federal contracting units and contractors. The respondent said this FAR-required “arm’s length” relationship prevented sharing costs and resources with contractors and was not conducive to cost savings and cycle time reductions. Lab officials at VA’s Clement J. Zablocki Medical Center in Milwaukee provided an interesting example of how such constraints affected the lab’s performance. The officials said VA classifies eyeglasses as prosthetic devices, and statutorily based regulations state that prosthetics can be provided only to veterans with nonservice-related medical conditions who have been admitted to the hospital. Therefore, patients having outpatient cataract surgery must be admitted to the hospital for a 2-day stay in order to receive corrective eyeglasses. Medical center officials said this is an unnecessary and costly requirement, and they have sought a waiver from the regulation.

According to the President, one of the goals of the reinvention effort is changing the culture of the national bureaucracy “away from complacency and entitlement toward initiative and empowerment.” A 1993 NPR report stated that traditional cultural values in the federal government resist change, preserve mistrust and control, and operate within a rigid and hierarchical structure. The report also said that this segmented system creates artificial organizational boundaries that separate staff within and among agencies that work on related problems. Several lab officials indicated that this traditional culture had hindered the process of change in their organizations.

In an attempt to change their units’ culture, several organizations combined organizational restructuring with changes in individual performance measurement systems as a way to reinforce new employee behaviors. This type of organizational restructuring typically involved moving from hierarchical, specialized departments that were responsible for the performance of a single component of a work process (commonly known as stovepipes) to multidisciplinary work teams responsible for the performance of an entire process. To ensure that incentive systems were aligned with restructured operations, labs were evaluating the use of self-directed work teams by

• creating business contracts with built-in product delivery and customer satisfaction targets, with both the customer and the team evaluating the team’s overall performance and each member’s contribution;
• having the team leader conduct evaluations rather than the management of the functional units; and
• creating an award system that ties group awards to the team’s contribution to the achievement of the agency’s goals.

By creating work teams within their organizations, these labs have tried to address the Vice President’s goal to change the culture of the federal government.

The collection and analysis of performance data are key elements in changing the way the federal government operates, particularly when those changes are initiated as pilot projects. At the most basic level, performance data are needed to determine whether the changes being implemented are producing the expected results. If the data indicate that the changes are successful and merit wider implementation, performance data can be used to make a compelling argument for changing what may be long-standing policies and practices.
Because reinvention labs are intended to explore new ways of accomplishing agencies’ existing missions, often on a small scale before broader implementation begins, data about the labs’ performance can be crucial to the labs’ long-range success. Without such data, decisionmakers will not know whether the changes are an improvement over existing practices. Also, without performance data, lab officials will find it difficult to obtain support for full-scale implementation within their agency or for diffusion beyond their agency to other federal entities.

The survey respondents frequently said their labs were collecting various types of performance data. Those labs not collecting data were commonly described as not being sufficiently developed to do so. Where data were collected, the respondents indicated that the data showed the labs were improving productivity and customer service. However, the respondents also frequently said that their labs did not have pre-lab data against which post-lab data could be compared. Some respondents also indicated other problems with their labs’ data collection efforts.

As figure 4.1 shows, over two-thirds of the respondents said that their labs had collected or were collecting some type of performance data. Even those respondents who said data were not being collected generally recognized the importance of such data. Over 80 percent said their labs planned to gather such data in the future.

We asked the survey respondents who said their labs were collecting performance data to identify the kinds of data being collected from the following categories: (1) informal, ad hoc comments from staff or customers; (2) customer opinion survey data; (3) staff opinion survey data; (4) output data reflecting the unit’s level of activity or effort (e.g., the number of claims processed); (5) outcome data indicating the unit’s results, effects, or program impacts (e.g., changes in infant mortality rates); and/or (6) some other kind of data. (Survey respondents were allowed to identify more than one type of data for their labs.) The respondents most commonly said their labs were collecting data on the units’ outputs (77 percent) and/or were collecting informal comments from staff or customers (69 percent). Other frequent responses were customer opinion survey data (57 percent), outcome data (52 percent), and staff opinion survey data (40 percent). Many of the labs (88 percent) reported collecting more than one type of data.

Of those respondents who said their labs were not collecting performance data, over three-fourths said that it was too early in the reinvention process to do so. Analysis of the labs’ stage of development and whether they collected data supports the lab officials’ opinion that it was too early in the reinvention process to be collecting performance data. As shown in figure 4.2, nearly 90 percent of the labs that were at least fully implemented at the lab site said they had collected or were collecting performance data. In contrast, only about half of the labs in the planning or beginning implementation stages of development had collected or were collecting such data. A more detailed breakdown of the responses from fully implemented labs further demonstrates this relationship between stage of development and data collection. As figure 4.3 shows, although more than three-fourths of the labs implemented at only the lab site were collecting performance data, over 90 percent of the labs implemented at the lab site and beyond were collecting such data.
Therefore, the more developed the lab, the more likely that it would have collected performance data. Although most of the survey respondents indicated their labs were collecting performance data, 14 percent of the respondents who said their labs were not collecting such data said they did not do so because gathering performance data was not seen as essential to their labs’ efforts. For example, lab officials from GSA’s Mid-Atlantic Regional Office and the Commerce Department’s Boulder reinvention lab said that efforts to measure “obvious improvements” were unnecessary. One official from the Boulder lab said that data collection efforts should be concentrated on those changes in which outcomes are more dubious. Other officials from this lab said that they had planned to use the agency’s Inspector General to monitor the lab’s progress, but the Inspector General told them that many of the lab’s changes were based on common sense and, therefore, did not require measurement to prove their worthiness. (See app. III.) Another 12 percent of the respondents said that they had not collected performance data because they had experienced difficulty in identifying and/or developing appropriate performance measures. To be valuable, performance data must not only be collected but also be used by decisionmakers to assess the changes being made in agencies’ operations. However, not all of the data the labs collected appear to have been used. For example, officials from USDA’s lab reinventing the baggage inspection operations in Miami said that they had collected data that could have been used to judge the lab’s performance, but the data were never used by anyone in the agency or the lab for that purpose. (See app. II.) Eighty-two percent of the respondents who said their labs had collected or were collecting performance data said that the data had allowed them to reach conclusions regarding the performance of their labs. Of these respondents who offered an opinion, 98 percent reported improved customer service, nearly 92 percent noted improved productivity in their units, and 84 percent said their labs had improved staff morale. Examples of customer service improvements follow: • VA’s New York Regional Office claims processing lab said that the average amount of time veterans had to wait before being seen for an interview had been reduced from about 20 minutes before the lab to less than 3 minutes after the lab was established. Lab officials also said that VA employees had greater control and more authority and found their jobs much more satisfying. (See app. XIII.) • VA’s reinvention lab at the Zablocki Medical Center in Milwaukee said two surveys—one of physicians and the other of patients and their family members—indicated that customer satisfaction had improved as a result of the lab’s effort to coordinate veterans’ outpatient and inpatient care by teaming social workers with primary care physicians. (See app. XII.) • DOE’s reinvention lab at the Hanford site in Washington State said that the lab had reduced the safeguard and security budget by $29 million over a 4-year period by changing the installation’s security operations from a large paramilitary organization that supported a national defense mission to an industrial-style organization that supports environmental cleanup. (See app. V.) • HUD’s reinvention lab in Chicago, Milwaukee, and Cleveland said that by developing partnerships with public housing authorities the lab had improved the satisfaction of the public housing residents. 
Lab officials also said that an overall measure of the public housing authorities’ management performance in such areas as rent collected, condition of the housing units, and operating reserve had improved since the lab was initiated. (See app. VIII.) • DLA’s lab said the lab reduced the agency’s overall pharmaceutical inventories by $48.6 million and achieved similar inventory reductions and cost savings at DOD medical facilities. (See app. IV.) Respondents frequently said that performance data allowed them to conclude that their labs had improved units’ productivity, customer satisfaction, and staff morale. However, conclusively documenting these improvements may be very difficult. As figure 4.4 indicates, many of the respondents who said their labs were collecting performance data did not collect similar types of data before the start of the lab to serve as a baseline for documenting the labs’ effects. The most common forms of pre-lab performance data (baseline data) that respondents indicated existed concerned a unit’s outputs (53 percent of the respondents) and informal comments (57 percent). Labs reported that they were least likely to have such data on customer (24 percent) and staff (17 percent) opinions. At the time of our survey, 26 agencies and other federal entities had designated a total of 185 reinvention labs in various parts of the country. The survey respondents indicated that the labs generally were established to do what the Vice President suggested in his April 1993 letter to federal departments and agencies—improve customer service; address specific problems; and, ultimately, improve the operation of federal agencies. Because many of the labs had not been implemented at the time of our review, it is too early to tell whether they will accomplish these goals. Even for the labs that the respondents said had been fully implemented, it may take years before it can be determined whether the changes will have a long-lasting effect on federal agencies beyond the lab site. Also, because there is not a specific definition of a reinvention lab or guidance from either the NPR task force or OMB as to how labs should operate, few clear criteria exist against which to judge the labs’ performance. Nevertheless, some preliminary observations about the labs are possible based on comments the Vice President and others have made about the labs and the information developed during this review. For example, the Vice President said that the labs should ideally be initiated where the government serves the public in a highly visible way. Although virtually all of the survey respondents indicated that improving customer service was a primary goal of their labs, they did not always define their labs’ customers as the public. In fact, lab officials most commonly viewed their labs’ customers as other governmental organizations, and, for some of the labs, a government organization was their only customer. Although the linkage of these labs to the public may not have been as direct as the Vice President envisioned, the public or the agency’s constituency appeared to be at least indirectly served in virtually all of the labs. 
Although the survey respondents indicated that the labs’ changes represented a substantially different mode of operation, the scope of the reforms being developed in the labs was relatively narrow compared to the sweeping changes contemplated by GPRA, the NPR II agency-restructuring recommendations, and the congressional proposals to consolidate agencies’ functions or eliminate agencies entirely. However, the labs’ comparatively narrow scope is a natural consequence of the Vice President’s charge that they “reengineer work processes.” Agencies and employees were not asked to suggest macro-level changes, such as whether entire agencies or programs should be abolished or whether multiple agencies should be merged into a single structure. Ultimately, though, the diffusion and widespread adoption of the labs’ reengineering proposals could lead to the “fundamental culture change” that the Vice President envisioned in 1993. At the beginning of the lab effort, a number of observers indicated that a key factor in the success of the effort would be the labs’ ability to obtain waivers from federal regulations. Although the respondents said many labs sought and received regulatory waivers, a large number of the efforts were able to be implemented without such waivers. Some lab officials said they believed waivers would be needed, but they later discovered that they already had the authority needed to change their work processes. Although some impediments to the labs were clearly real, the experiences of those officials suggest that at least some barriers to organizational change may be more a function of perception than reality. Most of the survey respondents said they were collecting performance data to measure the effect of their labs’ reinvented work processes. However, some of the respondents’ comments raised questions about their commitment to measuring performance or the quality of the data being collected. Some lab officials said that either they or other agency officials did not believe that the collection of performance data was necessary or worthwhile. Other lab officials said that they had difficulty developing measures of performance or that data had been collected but had not been used by decisionmakers. One of the most common types of data reportedly being collected by the labs was informal comments from customers or staff—anecdotal data that are not measurable and, therefore, may not be convincing to skeptics of the reinvention process. Of particular concern to us are the labs that were reportedly collecting data about their reinvention efforts but had not collected similar types of data before the start of their labs. Without such pre-lab data, lab officials have no baseline for documenting a lab’s effects and therefore will find it difficult, if not impossible, to reach persuasive conclusions about the lab’s effects. The absence of both pre- and post-lab data will also make it difficult to support expanding a lab’s changes to the rest of its agency or to other organizations. Development of pre-lab performance measures is particularly important for the substantial number of labs reportedly still in the planning stage. Nevertheless, the reinvention lab effort has produced hundreds of ideas to reengineer work processes and improve agencies’ performance—ideas drawn from employees with hands-on experience in operating government programs. 
Many of the labs are addressing issues that are at the cutting edge of government management, such as how agencies can use technology to improve their operations; how they can be more self-sufficient in an era of tight budgetary resources; and how agencies can work in partnership with other agencies, other levels of government, or the private sector to solve problems. This progress notwithstanding, even more innovations are possible in these and other areas as agencies review and rethink their existing work processes. The labs we surveyed were at varying stages of development. About half had not been fully implemented at the lab sites and were still in the planning or developmental stages. The rest of the labs had been fully implemented at the lab sites, and some had proven that the innovations being tested can save money, improve service, and/or increase organizational productivity. However, relatively few of the labs’ proposals had been implemented beyond the original lab site. The types of assistance the labs need depend on their stage of development. Labs that are in the planning or developmental stages need the support, encouragement, and, at times, the protection that a “change agent” in a position of influence can provide. Governmentwide, the Vice President and the NPR task force have attempted to perform that role. There have also been change agents within particular agencies that have encouraged and supported the labs’ development. Labs that have been fully implemented, particularly those that have demonstrated ways to save money and/or improve federal operations, need a different type of assistance if the ideas they represent are to spread beyond the lab sites. Nonlab organizations both within the labs’ agencies and in other agencies need to become aware of the labs, recognize the applicability and value of the ideas the labs represent to their own organizations, and learn from the labs’ experiences. As the Vice President said, for the labs to achieve their full potential they “will need to share what they learn and forge alliances for change.” The real value of the labs will be realized only when the operational improvements they initiated, tested, and validated achieve wider adoption. Also, by learning from the labs’ experiences, other organizations can avoid the pitfalls that some of the labs experienced. Sharing this information will keep other organizations from having to “reinvent the wheel” as they reinvent their work processes. If the changes the labs represent end at the lab sites, a valuable resource will have been wasted. Therefore, communication about the labs is crucial to the long-term success of this part of the overall reinvention effort. However, the survey respondents indicated that relatively few labs have had substantial communication either with other labs or with the NPR task force. Also, although it has encouraged the labs’ development and made certain information available about them, the NPR task force has not actively solicited information from the labs, has encouraged agencies to focus on reinventing rather than reporting, and has not systematically contacted the labs to provide them with information or direction. As a result, the NPR task force was not able to provide us with an accurate listing of all of the labs. The task force’s “hands-off” approach to the reinvention lab effort was a conscious decision by NPR officials not to micromanage the labs and impose a top-down “command and control” structure. 
This approach, while appropriate to encourage and empower employees and agencies to find the solutions they believe most appropriate to reengineer their work processes, may not be the best strategy for moving the labs’ results beyond their experimental environments. Furthermore, there is no certainty that the NPR task force will still be in existence when some of the labs reach maturity. Therefore, we believe that some type of information “clearinghouse,” placed in a relatively stable environment, is needed to allow other organizations to become aware of the labs and to learn about the labs’ experiences. The clearinghouse could, among other things, provide information and guidance to labs on the development of appropriate performance measures, including baseline data against which the labs’ performance could be judged. A number of federal organizations could conceivably perform this clearinghouse role. For example, OMB’s responsibility for providing management leadership across the executive branch makes it a candidate to serve as the clearinghouse. Other possible candidates include OPM, GSA, the President’s Management Council, or an executive agency interested in tracking innovations. We recommend that the Director of OMB ensure that a clearinghouse of information about the labs be established. Working with the NPR task force, the Director should identify which agency or other federal entity can effectively serve as that clearinghouse. The clearinghouse should contain information that identifies the location of each lab, the issues being addressed, points of contact for further information about the lab, and any performance information demonstrating the lab’s results. We provided a draft of this report to the Vice President and the OMB Director for their review and comment. On January 17, 1996, we met with the Senior Policy Advisor to the Vice President for NPR issues and the Deputy Director of the NPR task force. On January 22, 1996, we met with OMB’s Deputy Director for Management. All of the officials indicated that the report was generally accurate, interesting, and helpful. The OMB and NPR Deputy Directors said the report was the most comprehensive analysis of the reinvention labs to date. Certain technical changes the officials suggested were incorporated into the report as appropriate. In the draft, we recommended that OMB serve as the clearinghouse for information about the labs. All of the officials expressed concerns about this recommendation. The Senior Policy Advisor and the NPR Deputy Director were somewhat concerned that the recommendation might be read as implying that OMB, rather than NPR, should have had responsibility for initiating and promoting reinvention labs. They pointed out that OMB’s historical role, its budget responsibilities, and its statutory management responsibilities compete with its role as a “change agent” fostering innovation. We explained that our recommendation was intended to emphasize OMB’s responsibility to facilitate the dissemination of work process innovations beyond the lab sites, not make them change agents responsible for initiating the labs. The Senior Policy Advisor and the Deputy Director agreed that this innovation dissemination function is important and agreed that OMB was one place where this responsibility could be placed. The OMB Deputy Director for Management suggested that the recommendation be changed to allow for options other than OMB itself as the clearinghouse. 
He said that although OMB has a leadership role to play in this regard, OMB may not be the best candidate to collect and provide information about the labs. Other possible candidates, he said, include the President’s Management Council, other central management agencies, and the Chief Financial Officers Council. We agreed to change the recommendation to state that the OMB Director should ensure that a clearinghouse is established and, working with the NPR task force, should identify the appropriate site for the clearinghouse.
GAO reviewed the National Performance Review's (NPR) initiative to establish reinvention labs in federal departments and agencies, focusing on: (1) the labs' developmental status; (2) factors that hindered or assisted their development; (3) whether the labs were collecting performance data; and (4) whether the labs have achieved any results. GAO found that: (1) more than 2 dozen federal agencies and other entities have developed a total of 185 reinvention labs; (2) the labs deal with a variety of issues, from personnel management to improving operations using technology; (3) almost all of the labs consider customer service as their primary goal, and consider other government organizations to be customers; (4) while labs considered management support to be important to lab development, the use of regulatory waivers and communication about the labs' progress were rarely needed or used; (5) other federal reform efforts, such as downsizing and the implementation of the Government Performance and Results Act, had both positive and negative effects on the labs' development; (6) labs experienced difficulties in sustaining efforts that crossed agency boundaries or challenged agencies' existing cultures; (7) over two-thirds of the labs had collected some type of performance data, ranging from information on unit outputs to informal comments from staff and customers, but some lab administrators refused to collect performance data because they believed it was unnecessary or not worthwhile; (8) the performance data are inconclusive, since there are no previous data for comparison and the nature of the data is subjective; (9) the labs have yielded results by improving customer service, increasing unit productivity and employee morale, and reducing costs at some federal sites; and (10) the value of the labs will be realized only when lab efforts proven to be effective spread beyond the lab sites.
In accordance with the Federal Records Act and NARA’s implementing regulations for records management and retention, MCC is responsible for managing the records that it generates. MCC has established policies and issued guidance intended to ensure that the records generated by the governments receiving compact and threshold assistance are properly identified and transferred to MCC for storage and management. The Federal Records Act, as amended, requires each federal agency to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency; and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. Such records must be managed and preserved in accordance with the act’s provisions. To ensure that they have appropriate systems for managing and preserving their records, the act requires agencies to develop records management programs. These programs are intended, among other things, to provide for accurate and complete documentation of the policies and transactions of each federal agency, to control the quality and quantity of records they produce, and to provide for judicious preservation and disposal of federal records. A records management program identifies records and sources of records and provides records management guidance, including agency-specific recordkeeping practices that establish what records need to be created to conduct agency business, among other things. Under the Federal Records Act, NARA has general responsibilities for oversight of agencies’ federal records management. These responsibilities include issuing guidance for records management; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; providing oversight of agencies’ records management programs; approving the disposition (destruction or preservation) of records; and providing storage facilities for agency records on a fee-for-service basis. NARA has issued regulations requiring that records be effectively managed throughout their life cycle, including records creation and receipt, maintenance and use, and disposition. One key records management process is scheduling, the means by which NARA and agencies identify federal records and determine timeframes for disposition. Creating records schedules involves identifying and inventorying records, appraising their value, determining whether they are temporary or permanent, and determining how long they should be kept before they are destroyed or turned over to NARA for archiving. Scheduling records requires agencies to invest time and resources to analyze the information that an agency receives, produces, and uses to fulfill its mission. Such an analysis allows an agency to set up processes and structures to associate records with schedules and other information to help it find and use records during their useful lives and dispose of those no longer needed. Scheduling involves broad categories of records rather than individual documents or file folders. Since 2009, NARA has required federal agencies to complete an annual self-assessment of their records management practices, to determine whether the agencies are compliant with statutory and regulatory records management requirements. 
The 2012 self-assessment survey called for agencies to evaluate themselves in four areas: (1) records management activities, (2) oversight and compliance, (3) records disposition, and (4) electronic records. NARA scores the self-assessments, and the accompanying agency documentation, and uses the scores to categorize each agency as low, moderate, or high risk in terms of compliance with federal regulations. Beginning in 2006, MCC established a records and information management program and subsequently established guidelines for handling compact management records and other compact-related information. However, MCC has not created a policy for—or conducted— periodic reviews of the extent to which it has received the compact management records that it requires MCAs to provide to MCC for storage. In 2012, MCC’s score on the NARA survey placed the agency in the moderate risk category—an average rating—for compliance with federal requirements. MCC established its records and information management program in 2006. The program’s stated objectives, according to the 2011 version of its Records and Information Management Policy, are to create, maintain, and preserve adequate and proper documentation of its policies, transactions, and decisions; ensure the security and integrity of MCC’s federal records, including the safeguarding of records against unauthorized access or disposition; and prevent the removal of records and control the removal of other materials from the agency. MCC’s Records and Information Management Policy defines “federal record” consistently with the Federal Records Act’s definition of “record.” According to the policy, MCC catalogues its federal records into four major series, based on the records’ functions. Administrative: Records commonly found at any federal agency, such as accounting and finance files, and budget, personnel, and procurement files. Governance: Records related to the Millennium Challenge Act of 2003, authorities, laws, and legislation, such as Board of Directors meeting minutes and resolutions, legal opinions, and ethics program records. Communications: Exchanges with external entities, such as MCC’s annual report, congressional notifications, press releases, and official speeches, among other things. Millennium Challenge Account Assistance: MCC mission development, implementation, oversight, results, and closeout information pertaining to threshold programs and compacts. This category also includes compact management records, which are generated at least in part by the MCAs. MCC provides guidance regarding the maintenance of compact management records and the retention and storage of compact-related information. In 2007, MCC issued Policy and Procedures for Compact- Related Federal Recordkeeping, which was updated in 2012. The policy outlines specific policies and procedures regarding compact management records—which MCC refers to as a subset of federal records—and other compact-related information. The policy also includes a list specifying the types of documents that MCC classifies as compact management records. The policy and procedures apply regardless of whether the records and other compact-related information are created by MCC staff, partner governments, MCA entities, contractors, or other parties. Maintenance of compact management records. 
According to MCC’s Policy and Procedures for Compact-Related Federal Recordkeeping, all information defined as a compact management record must be maintained at MCC headquarters during compact development and implementation and after compact closure. For example, under the policy, the following monitoring and evaluation documents are classified as compact management records, to be maintained at headquarters: indicator tracking tables, monitoring and evaluation plans and revisions, reviews and final impact evaluations, and data quality reviews. Retention and storage of compact-related information. MCC’s Policy and Procedures for Compact-Related Federal Recordkeeping further states that the partner governments must retain, for at least 5 years after compact closure, types of information that are not defined as records but are important to the implementation and closure of compacts. The policy specifies, as examples of such information, (1) documents to support audits by MCC’s Office of the Inspector General and GAO and (2) program evaluation documents to support ongoing analysis of MCC assistance. In another policy document, Program Closure Guidelines, MCC also requires MCAs to provide to MCC certain compact-related information for storage at MCC headquarters. For example, MCAs are required to provide the following information related to compact monitoring and evaluation: all MCC-funded survey data sets and supporting materials, such as questionnaires, enumerator field guides, data entry manuals, data dictionaries, and final reports; other analyses; evaluations; and data quality reviews and special studies that were funded through the compact’s monitoring and evaluation budget. Table 1 describes compact management records and other compact- related information. Type of information Compact management records Description A subset of the Millennium Challenge Account Assistance series of federal records that must be maintained by MCC headquarters during implementation and after compact closure. To be retained by partner governments for at least 5 years after compact closure. Documents that may be needed to support future audits or analysis of MCC assistance. To be provided by MCAs for storage by MCC headquarters. Documents or copies of documents that must be provided to MCC during implementation or at compact closure, such as survey data sets and supporting materials. MCC’s Policy and Procedures for Compact-Related Federal Recordkeeping assigns the responsibility for ensuring that the country’s compact management records are transmitted and received at MCC headquarters to the MCC Resident Country Director serving in each partner country. The policy states that MCC has the responsibility to ensure that MCAs are taking reasonable steps to meet records management requirements. The policy also states that MCC is responsible for ensuring that the partners understand (1) what is covered by both compact management records and other compact-related information and (2) that MCC or a U.S. government audit, legal, or oversight entity may need to have access to such information for at least 5 years after the compact end date. MCC policy does not call for, and MCC has not performed, reviews of the extent to which it has received the compact management records that it requires MCAs to provide to MCC for storage. 
According to Standards for Internal Control in the Federal Government, a federal agency should have in place control activities—that is, policies, procedures, techniques, and mechanisms, including reviews of performance—to help ensure that management’s directives are carried out. MCC’s Policy and Procedures for Compact-Related Federal Recordkeeping assigns to the MCC Resident Country Director in each partner country the responsibility for ensuring the transmittal and receipt of the country’s compact management records at MCC headquarters. However, MCC policy does not require periodic reviews of the records received from the MCAs to ensure that all required records have been transferred. In addition, MCC’s Records Management Officer stated that MCC has not reviewed the compact management records it has received, for the following reasons: (1) the first compacts ended only recently, and (2) the records management program has limited resources. However, of the 11 compacts that have closed or been terminated, 5 ended in 2011 and 2 ended in 2010. Without periodically reviewing the compact management records it receives from the MCAs, MCC cannot be sure that it is meeting the Federal Records Act’s requirement that it preserve all records documenting its functions, activities, decisions, and other important transactions. In 2012, MCC received a revised score of 77 out of 100 on NARA’s self- assessment survey, which placed MCC in the moderate risk category in terms of compliance with federal requirements (see table 2). A NARA official characterized this score as an “average rating” for federal agencies. In the previous 3 years, MCC received the following scores: 92 (2009), 83 (2010), and 76 (2011) (see app. II for more information.) MCC’s Records Management Officer also stated that meeting NARA requirements, especially for electronic records, is difficult for small federal agencies with limited resources and that MCC, in conjunction with other small agencies, has appealed to NARA for assistance. For the five closed compacts that we reviewed—MCC’s compacts with Armenia, Benin, El Salvador, Ghana, and Mali—the MCAs provided varying levels of detail about their plans for retaining compact-related information to address MCC requirements. In addition, the five partner governments showed varying capacity to provide the documents that we asked MCC to retrieve. For the five compacts that we reviewed, the MCAs provided varying levels of detail about their plans for retaining compact-related information. MCC’s Program Closure Guidelines instruct accountable entities to develop program closure plans describing their strategy for retaining and storing compact-related information. The guidelines state that each accountable entity should provide the following three items for MCC approval prior to the compact end date: a list of the types of documents the partner government will retain, a document retention schedule, and a brief description of the form and manner in which the documents will be stored. MCC’s compact closure guidelines do not provide a sample document retention schedule specifying standard types of compact-related information that most compacts would need to retain or provide. All five program closure plans that we reviewed contained some discussion of filing and storing documents, but each MCA addressed the guidelines’ three requirements differently. 
Such variation in approaches to scheduling and storing compact documentation will make it more difficult for MCC to verify that standard compact information is being retained in all partner countries after the compacts have closed. Types of documents to be retained. The program closure plans for the Armenia, Ghana, and Mali compacts specified that the respective partner governments would retain all compact-related information. The program closure plan for the Benin compact did not list the types of documents that the government would retain but stated that the MCA would provide further information in the document retention schedule. The program closure plan for the El Salvador compact stated that the government, through a contractor, would retain original files related to personnel, projects, procurement, finance, monitoring and evaluation, and studies. Document retention schedule. The MCAs’ document retention schedules also varied. Armenia and Benin did not provide document retention schedules. El Salvador provided a comprehensive listing, by category, of all documents to be retained; however, it provided this list for the purposes of our review in April 2013, after the compact end date. Ghana’s schedule, submitted after the compact end date but within the 3- month compact closure period, specified document types to be retained. Mali provided an undated printout of its electronic file system. According to MCC officials, the disparity among the MCAs’ document retention schedules stemmed from insufficiently specific guidance provided by MCC. Form and manner of document storage. The MCAs’ program closure plans specified varying forms and manners of storage for compact-related information after compact closure. According to the plans: Armenia will store the documents at three different government agencies: the state archives, the Ministry of Transport and Communications, and the Foreign Financing Projects Management Center (a foreign donor coordination unit); Benin will store the documents at its national archives; El Salvador has made a contractor responsible for the safekeeping of compact-related files and documents; Ghana’s MCA will continue as a foreign donor coordination unit after compact closure and will retain all MCC documents; and Mali will store compact-related information at the Office of the Secretary General of the President (the “Office of the Segal”), which was the office of the principal government representative under the compact. Our test of MCC’s ability to retrieve compact-related information from the five countries produced varying results that depended on the stability of the governments. Four of the governments provided all or most of the documents we requested. In contrast, Mali’s government, which is in transition, provided none of the requested documents. Figure 1 displays the test results. Owing to its policy of relying on partner governments to retain and store compact-related information, MCC lost access to this information for 2 of the 11 closed compacts when, because of political turmoil, it terminated its compacts with Mali and Madagascar. Mali’s government has been in transition since March 2012, when the administration at that time was overthrown. According to MCC, the transitional government in Mali will establish an office to handle post-compact issues but has not provided a point of contact. 
As a result, MCC officials reported that although they believe the information related to the Mali and Madagascar compacts exists and has been maintained in an organized fashion, they are currently unable to access the requested documents. MCC has previously noted that political turmoil in Madagascar, whose government was overthrown in 2009, impeded MCC’s ability to access documents. MCC officials stated that they considered the response rate to our test to be good, particularly since the people retrieving the documents were not necessarily the same people who created or stored them. Previously, MCC has stated that its ability to retrieve documents from partner countries is reliant on its ability to access key individuals. MCC officials further stated that the difference in document return rates among the four countries that provided all or most of the requested documents may have been due to our test methodology. According to these officials, some of the documents we requested were not “critical path” documents, and our descriptions of the documents may not have been specific enough to allow the partner countries to identify them. However, all of the documents we requested were used in audits by the USAID’s Office of the Inspector General. They thus serve as examples of the types of documents that might be needed to support future audits—one of the purposes for which MCC requires the partner governments to retain compact-related information for at least 5 years. Records and information management is important in all government agencies, in part because it helps ensure that the agencies remain transparent and accountable to the public and allows for congressional and executive branch oversight. MCC established a records management program that, according to NARA, is comparable to many others in the federal government. Yet, as an international aid agency providing bilateral assistance to partner governments, MCC’s situation regarding records and information management is atypical: Much of the information related to its core business is generated by the partner governments’ accountable entities, the MCAs. In accordance with NARA guidelines, MCC has established policies and guidelines stipulating that the MCAs must provide it with the compact management information it classifies as U.S. federal records. However, because its policies do not call for, and it does not conduct, systematic reviews of the records it receives, MCC cannot be sure that it is meeting the Federal Records Act’s requirement that it preserve all records documenting its functions, activities, decisions, and other important transactions. MCC also has established policies that require partner governments to retain other compact-related information for at least 5 years after the compact closes, to support audits and its own program evaluations. However, for the five closed compacts that we reviewed, the variations in the partner governments’ plans for retaining compact-related information could make it difficult for MCC to verify that the appropriate information is being retained. While MCC provides the partner governments a list specifying what types of documents it classifies as compact management records needed for storage at MCC headquarters, it does not provide such a list for other compact-related information expected to be retained in-country by the partner governments. 
A standardized schedule of compact-related information to be retained by each partner government would improve MCC’s ability to find and use this information and increase MCC’s efficiency in comparing similar information across compacts. Last, while four of the five partner governments were able to provide the information we requested in our test of MCC’s system, the inability of one country—Mali, whose government is in transition—to produce any of the requested documents calls into question MCC’s policy of relying on partner governments to retain and store most compact-related information. While the situation in Mali is unusual, the recent political turmoil in Madagascar, another former MCC partner, shows that such situations are not unique. Given that the countries that MCC targets for aid are, by definition, in transition, MCC could benefit from taking precautionary steps—such as weighing the costs and benefits of storing more compact-related information at MCC headquarters—to protect and ensure access to compact-related information. We recommend that MCC’s Chief Executive Officer take the following three actions to strengthen MCC’s records and information management program: 1. Develop a policy requiring—and conduct—periodic reviews of each set of compact management records that MCC receives from partner governments, to ensure that the records are complete. 2. Revise program closure guidelines to include a sample document retention schedule, specifying standard types of compact-related information that most compacts would need to retain. 3. Review MCC’s policy of delegating the storage of compact-related information to partner governments, weighing the costs and benefits of storing more of this information at MCC headquarters. In written comments about a draft of this report, MCC stated that it agrees with our recommendations and is taking steps to implement them. With respect to our first recommendation—to develop a policy requiring, and to conduct, periodic reviews of each set of compact management records received from partner governments—MCC stated that, although it has been conducting selected reviews of compliance with compact records management requirements, making the practice more systematic would be useful. To that end, its Department of Compact Operations will ensure that reviews of MCC and MCA compliance with compact management records polices are incorporated in both implementation and close-out procedures. With respect to our second recommendation—to revise program closure guidelines to include a sample document retention schedule— MCC stated that it will consider how best to structure a standardized list of core documents that also preserves a country’s flexibility to tailor its document retention schedule in light of local laws and the specific types of compact projects. With respect to our third recommendation—to weigh the costs and benefits of storing more compact-related information at MCC headquarters—MCC stated that it will review, and revise as necessary, its Policy and Procedures for Compact-Related Federal Record Keeping to ensure that it specifies all documents that should be defined as federal records. We have reprinted MCC’s comments in appendix III. We have also incorporated technical comments from MCC in our report where appropriate. NARA also provided technical comments on a draft of this report, which we have incorporated as appropriate. 
In addition, NARA stated that having reviewed our description of MCC’s classification of federal records and “non-records” as they pertain to the MCAs, it will contact MCC to ensure that proper classification is occurring. We are sending copies of this report to interested congressional committees and the Millennium Challenge Corporation. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix IV. Our objectives were to (1) examine the Millennium Challenge Corporation’s (MCC) records and information management program and practices and (2) assess partner-country governments’ implementation of MCC guidelines for retention and storage of compact-related information. To examine MCC’s records and information management program and practices, we reviewed MCC’s policies and guidelines regarding records and information management, focusing in particular on three documents: Records and Information Management Policy (June 2011), Policy and Procedures for Compact-Related Federal Recordkeeping (September 2012), and Program Closure Guidelines (May 2011). We also reviewed the results of MCC’s annual self-assessment surveys from 2009 through 2012, a tool that the National Archives and Records Administration (NARA) developed to assess agencies’ self-compliance with the Federal Records Act and other laws and regulations related to records management, and we reviewed relevant laws, regulations, and circulars produced by the Office of Management and Budget. In addition, we interviewed officials at MCC and NARA. To assess partner governments’ implementation of MCC’s guidelines for retention and storage of compact-related information, we selected five closed compacts—Armenia’s, Benin’s, El Salvador’s, Ghana’s, and Mali’s—to use as case studies. We chose these compacts because they closed after May 2011 and therefore were subject to MCC’s Program Closure Guidelines, which were finalized that month. We reviewed the documentation that the partner governments or their accountable entities (usually referred to as Millennium Challenge Accounts, or MCAs) had provided to MCC in response to those guidelines. Regarding MCC’s requirement that the partner governments make provisions for the form and manner of document storage, we reviewed the compacts’ program closure plans to ensure that provisions for document storage were included, but we did not verify that specific storage requirements—such as security and acclimatization—were met. We also conducted a test of MCC’s ability to retrieve compact-related information from partner governments after compact closure. For this test, we asked MCC to request that the partner governments for the case- study compacts provide copies of documents that the U.S. Agency for International Development’s (USAID) Office of the Inspector General (OIG) had collected during the course of performance and financial audits of the five countries. We selected the audits from a list that the OIG provided, and we drew from those audits a random sample of documents that we requested from the partner governments. We used OIG files because it has conducted audits in all 5 countries, whereas GAO has not. 
Selection of performance and financial audits. The OIG provided a list of 10 performance and 4 financial audits that it considered relevant to our case studies. We removed one performance audit from the list, because the OIG had conducted the audit prior to any MCC compact’s entry into force and the audit therefore would not yield valid documents. We then selected three performance audits and one financial audit to review for each of the case-study compacts (except Armenia’s, for which the OIG did not conduct a financial audit). Several of the performance audits on the OIG’s list covered more than one of the case-study compacts. Because Armenia and Benin’s compacts were both covered in two performance audits and Mali’s compacts was covered by three performance audits, we included all of these audits in our sample. Because Ghana’s and El Salvador’s compacts were each covered by more than three performance audits, we randomly selected among the relevant audits. Because the OIG conducted only one financial audit per compact (except Armenia’s), we selected all of the listed financial audits. See table 3 for a list of the audits that we selected from which we randomly drew supporting documents for our case studies. Random sample of audit documents. Each audit contained multiple files, from which we randomly drew a sample of 93 documents: 20 documents for Benin’s, Ghana’s, and Mali’s compacts; 18 documents for El Salvador’s compact; and 15 documents for Armenia’s compact. The number of documents that we sampled per compact varied for two reasons: (1) because the OIG has not conducted a financial audit for Armenia’s compact, we were unable to select any financial-audit- related documents for our sample, and (2) the performance audit files for El Salvador’s compact contained only 18 appropriate documents. Table 4 shows the number of documents we sampled per compact, by type of audit (financial or performance). Requests for sampled documents. We provided a list of the randomly sampled documents for each case-study compact to MCC. We identified each document using, as appropriate, its title, date, and other identifying information (e.g., contract number, payment order number, beneficiary name, letter recipient). For Benin’s, Ghana’s, El Salvador’s, and Mali’s compacts, we listed the document titles and other information in the document’s original language (English, French, or Spanish). For Armenia, we translated the title and other information into English when necessary. We asked MCC to share these lists with the five partner governments and to request that they send us copies of the documents, either electronic or paper, within 20 business days, in keeping with the Freedom of Information Act’s (FOIA) requirement. In response to an MCC comment that the documents we requested would not, as “non-records,” be subject to the FOIA requirement, we have reported the numbers of documents that the partner governments returned within 30 calendar days—the requirement stated in MCC’s Program Closure Guidelines. However, the numbers of documents returned within 20 business days were identical to the numbers returned within 30 calendar days. Verification of requested documents. To verify that the partner governments provided the documents we requested, we conducted two separate comparisons of the documents we received with corresponding electronic copies, which USAID’s OIG had allowed us to retain in our files. 
We conducted this performance audit from September 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 2009, the National Archives and Records Administration (NARA) has administered a survey to assess federal agencies’ compliance with federal records-keeping laws and regulations. The Millennium Challenge Corporation (MCC) has received the following scores: 92 (2009), 83 (2010), 76 (2011), and 77 (2012). See tables 5 through 8 below for more information. In addition to the contact named above, Emil Friberg, Jr. (Assistant Director) and Miriam Carroll Fenton made key contributions to this report. Additional technical assistance was provided by Reid Lowe, Christopher Mulkins, Justin Fisher, Martin de Alteriis, Nancy Hunn, Mark Bird, Etana Finkler, and Ernie Jackson.
MCC has approved 26 bilateral compact agreements, providing a total of about $9.3 billion to help eligible developing countries reduce poverty and stimulate economic growth. MCC is subject to the Federal Records Act, which requires that agencies preserve all records documenting its functions and other important transactions. GAO was asked to review MCC's management of records and information. This report (1) examines MCC's records and information management program and practices and (2) assesses partner governments' implementation of MCC's information retention guidelines. GAO analyzed MCC documents, interviewed MCC officials, and tested MCC's ability to retrieve compact-related information from five closed compacts. GAO selected these compacts because they closed after May 2011, when MCC's Program Closure Guidelines went into effect. In 2006, the Millennium Challenge Corporation (MCC) established a records and information management program to maintain and preserve its federal records. The program includes policies related to compact management records--a subset of MCC's federal records. These policies also address the handling of other compact-related information generated by MCC partner governments' accountable entities, which typically manage compact implementation until the 5-year compacts close. MCC's policies require that the entities transfer their compact management records to MCC for storage before compact closure. MCC also requires that partner governments retain compact-related information not classified as records, such as survey data and data quality reviews, for at least 5 years after their compacts close, to facilitate audits and analysis of MCC assistance. However, MCC does not require, and has not conducted, periodic reviews to determine whether it has received all compact-management records from the accountable entities consistent with federal internal control standards. As a result, MCC cannot be sure that it is meeting the federal requirement that it preserve all records documenting its functions, activities, and other transactions. In reviews of five closed compacts--Armenia's, Benin's, El Salvador's, Ghana's, and Mali's--GAO found variation in the accountable entities' implementation of MCC document retention requirements and the partner governments' ability to retrieve requested compact-related information after the compacts closed. As required by MCC's compact closure guidelines, all five program closure plans that we reviewed contained some discussion of retaining and storing documents, but each accountable entity addressed the guidelines' requirements differently. MCC's guidelines do not provide a list specifying standard types of compact-related information that most compacts should retain. Such variation in approaches to retaining and storing compact-related information will make it more difficult for MCC to verify that standard compact information is retained in all partner countries after the compacts close. In addition, in a test of MCC's ability to retrieve documents from the partner governments after compact closure, GAO found that four of the five governments provided all or most requested documents within 30 days, but Mali's, which is involved in political turmoil, provided no documents. Political turmoil in Madagascar, another compact-recipient country, has also impeded MCC's ability to obtain compact information that may be needed to conduct future audits, evaluate project impact, or inform future compact designs. 
To strengthen MCC's records and information management program, MCC's Chief Executive Officer should (1) develop a policy requiring--and conduct--periodic reviews of MCC's compact-management records to ensure they are complete, (2) revise guidelines to include a sample document retention schedule specifying standard types of compact-related information compacts should retain, and (3) review MCC's policy of delegating storage of most compact-related information to partner governments. MCC agreed with all three recommendations and stated that they have already taken steps to implement them.
In 2001, there were about 4.8 million gastroenterological procedures and about 306,000 urological procedures performed on Medicare beneficiaries nationwide that were conducted at least 90 percent of the time in health care facilities and less than 10 percent of the time in physicians’ offices. About 3.3 percent (or about 156,000) of these gastroenterological procedures and 3.8 percent (or about 12,000) of these urological procedures were conducted in physicians’ offices. About 35 percent of all office-based gastroenterological endoscopic procedures were conducted in the New York City metropolitan area. Medicare regulates ASCs and other health care facilities that conduct endoscopic procedures by requiring that they satisfy conditions related to safety, facility design, staff expertise, and other factors in order to treat Medicare beneficiaries. If an ASC is accredited by a national accrediting body or licensed by a state agency that provides reasonable assurances that the conditions are met, CMS may deem it to comply with most requirements. These conditions include, for example, the following: Compliance with state licensure requirements. An effective procedure for immediate transfer to hospitals of patients needing emergency medical care beyond the capabilities of the ASC. Safe performance of surgical procedures by qualified physicians granted clinical privileges by the ASC under Medicare-approved policies and procedures. Ongoing comprehensive self-assessment of the quality of care with active participation of the medical staff. Use of a safe and sanitary environment, properly constructed, equipped, and maintained to protect the health and safety of patients. Provision of adequate management and staffing of nursing services to ensure that nursing needs of all patients are met. Maintenance of complete, comprehensive, and accurate medical records to ensure adequate patient care. Safe and effective provision of drugs and biologicals under the direction of a responsible individual. According to the American College of Surgeons, nine states have guidelines or regulations pertaining to the safety of office-based surgical procedures (including endoscopy) that address issues of Medicare certification, state licensure, accreditation, and inspection of physicians’ offices: In California, state licensure, Medicare certification, or accreditation is required for all outpatient settings where anesthesia is used. In Connecticut, state regulations require any office or facility operated by a licensed health care practitioner or practitioner group to be accredited by a nationally recognized body if sedation or anesthesia is used. In Florida, the state is required to inspect a physician’s office where certain levels of surgery (including endoscopy) are performed, unless a nationally recognized accrediting agency or another accrediting organization approved by the Board of Medicine accredits the office. In Illinois, state regulations allow the delivery of anesthesia services by a certified registered nurse anesthetist in the office only if the physician has training and experience in these services. In Mississippi, physicians conducting office procedures must register with the state, maintain logs of surgical procedures conducted, follow federal standards for sterilization of surgical instruments, and report any surgical complications to a state board. In New Jersey, state regulations have been developed to establish training programs for physicians who utilize anesthesia in their office practices. 
In Rhode Island, state regulations require licensure for offices in which surgery, other than minor procedures, is performed. Accreditation by a nationally recognized agency or organization is also required. In South Carolina, guidelines address the safe delivery of anesthesia, the presence of emergency equipment, procedures to transfer emergency cases to hospitals, and physician training. In Texas, regulations govern physicians in outpatient settings providing general or regional anesthesia. In addition, organizations such as the American Society for Gastrointestinal Endoscopy and the Society of American Gastrointestinal Endoscopic Surgeons publish safety guidelines that are similar to the Medicare guidelines for ASCs. These guidelines are designed to ensure that endoscopies are conducted safely regardless of whether they are conducted in health care facilities or physicians’ offices. However, the Medicare program does not regulate physicians’ offices and does not make judgments about the safety of procedures conducted there. In 1992, the Health Care Financing Administration (HCFA) began the implementation of a resource-based physician fee schedule for the Medicare program. The physician fee schedule is applicable to procedures conducted in a variety of health care settings, including hospitals, ASCs, and physicians’ offices. Under this fee schedule, physician payments are based on the relative amounts of resources needed to provide procedures, regardless of the health care setting. The physician fee schedule includes three components. The physician work component (implemented in 1992) provides payment for the physician’s time, effort, skill, and judgment necessary to provide a service. The malpractice insurance component reimburses physicians for the expense of their professional liability insurance. The practice expense component compensates physicians for direct expenses, such as clinical staff salaries, medical supplies, and medical equipment, and indirect expenses, such as administrative staff salaries and other office expenses incurred in providing services. Unlike the other two components, physician practice expenses can differ depending on where the procedure is performed. In the office setting, the physician is responsible for providing the clinical staff, supplies, and equipment needed to perform a service. In the facility setting, such as a hospital or ASC, these are the responsibility of the facility. Medicare’s practice expense payments to physicians can differ depending upon the medical setting to reflect these differences. For medical facilities, practice expense payments to physicians are generally lower because Medicare pays for the nursing support, equipment, and supplies through a separate facility fee. However, when these procedures are performed in an office, Medicare pays physicians for these expenses in the practice expense portion of the physician fee schedule. The difference in practice expense payments for the same procedure in different settings is referred to as the site-of-service differential. In 1999, HCFA began a now-completed 3-year phase-in of the site-of-service payment differential as a part of the resource-based practice expense system. In previous work, we found that HCFA used acceptable methodology and relied on the best data available to develop the practice expense component of its Medicare payment system, from which this payment differential results.
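To make the mechanics concrete, the sketch below assembles a physician fee schedule payment from the three components described above. The dollar amounts, variable names, and function are ours and are hypothetical, chosen only to illustrate how the site-of-service differential arises; they are not actual Medicare payment rates.

```python
# Minimal sketch of how a Medicare physician fee schedule payment is
# assembled from the three components described above. All dollar
# amounts are hypothetical and chosen only for illustration; they are
# not actual Medicare payment rates.

def physician_payment(work, malpractice, practice_expense):
    """Total physician fee schedule payment for one procedure."""
    return work + malpractice + practice_expense

# Hypothetical component amounts for a single endoscopic procedure.
WORK = 150.00          # physician time, effort, skill, and judgment
MALPRACTICE = 10.00    # professional liability insurance expense

# Only the practice expense component varies by setting: in a facility,
# Medicare pays the hospital or ASC separately for staff, equipment,
# and supplies, so the physician's practice expense payment is lower.
PE_FACILITY = 60.00    # hypothetical facility-setting practice expense
PE_OFFICE = 275.00     # hypothetical office-setting practice expense

facility_total = physician_payment(WORK, MALPRACTICE, PE_FACILITY)
office_total = physician_payment(WORK, MALPRACTICE, PE_OFFICE)

# The site-of-service differential is the difference in the practice
# expense payments for the same procedure in the two settings.
differential = PE_OFFICE - PE_FACILITY

print(f"Physician payment, facility setting: ${facility_total:.2f}")
print(f"Physician payment, office setting:   ${office_total:.2f}")
print(f"Site-of-service differential:        ${differential:.2f}")
```

In the facility setting, Medicare would also pay the hospital or ASC a separate facility fee, which is not part of the physician fee schedule payment shown in the sketch.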
Medicare’s higher payment for office-based procedures reflects the higher expenses to the physicians of providing those procedures, but this payment may not cover all of their expenses. We found no evidence to suggest that the level of safety of gastroenterological or urological endoscopy conducted on Medicare beneficiaries differs by medical setting. In our search of the relevant scientific literature maintained by the National Library of Medicine and in discussions with Medicare carrier medical directors, physicians, and physician specialty societies, we found no evidence of a higher occurrence of medical complications from office-based gastroenterological and urological endoscopic procedures relative to other medical settings. Furthermore, according to a major trade association representing medical malpractice insurance companies, the pricing policies of insurance companies indicate that those companies do not believe that office-based endoscopy poses additional safety risks. Our search of relevant scientific literature maintained by the National Library of Medicine and discussions with physicians revealed little evidence of complications associated with office-based endoscopy for gastrointestinal and urological procedures. The scientific literature on the safety of office endoscopy is sparse; we were able to locate only one published study. This study of upper gastrointestinal procedures conducted in France showed very few complications over the course of nearly 18,000 endoscopic procedures. In this study, there was one death (the patient had previously diagnosed heart disease), one case of breathing difficulty (considered avoidable by the authors), and five other minor incidents. During the 10,000 exams performed over the last 12 years of this 17-year study, no clinically significant incidents occurred. We discussed the safety of office-based endoscopy with physicians, including representatives of three organizations critical of the CMS practice expense site-of-service differential policy. We also discussed in-office safety issues with four Medicare carrier medical directors, including those in New York, where there is a relatively high proportion of office procedures conducted. All of these officials, including the critics of the policy, emphasized that the procedures as currently conducted are safe and that complications are extremely rare. According to the Physician Insurers Association of America, a trade association that represents the malpractice insurance industry, office-based endoscopy is not riskier than endoscopy conducted in health care facilities. For example, two large New York malpractice insurance companies do not levy a surcharge on physicians who conduct office-based surgery, including the endoscopic procedures included in our study. One of these New York companies, which has the largest market share nationwide (and 57 percent of the malpractice insurance market in New York), does not consider office-based surgery an issue when setting rates for its clients. The other New York company requires physicians who conduct surgery in their offices to follow its company standards for equipment and safety backup procedures, and it reserves the right to conduct unannounced inspections of their offices. It does not, however, impose a surcharge on physicians for office-based procedures. It does require a surcharge for endoscopic procedures, but the amount does not differ by medical setting.
Although the site-of-service Medicare payment differential for the 12 common gastroenterological endoscopic procedures in our study has increased since the practice expense component of the resource-based fee schedule began to be implemented in 1999, the percentage of these procedures performed in the office has not increased. The average Medicare practice expense payments for the 12 gastroenterological endoscopic procedures are presented in figure 1. The figure shows that the payment differential has increased both because the average practice expense payments for procedures performed in health care facilities have decreased substantially (from $133 in 1998 to $59 in 2002) and because the payment for office-based procedures has nearly doubled (from $143 in 1998 to $277 in 2002). The payment differential for urological procedures has similarly increased, both because the average practice expense payments for such procedures performed in health care facilities have decreased by more than half (from $218 in 1998 to $83 in 2002) and because the average payments for office-based procedures have more than doubled (from $218 in 1998 to $448 in 2002). The nationwide percentage of common office-based gastroenterological and urological endoscopic procedures conducted on Medicare beneficiaries has not increased (see fig. 2). For example, the percentage of the gastroenterological procedures in our study conducted in the office nationwide declined from about 4.8 percent in 1996 to 3.9 percent in 1998, the last year of the old practice expense payment system, and to 3.3 percent in 2001 as the phase-in of the new practice expense system approached completion. Similarly, the percentage of the urological procedures in our study declined from about 5.7 percent in 1996 to 4.7 percent in 1998 and to 3.8 percent in 2001. From 1996 through 2001 in the New York City metropolitan area, where about 35 percent of the nationwide Medicare-covered office procedures were conducted, the proportion of office-based endoscopic procedures for gastroenterology has remained fairly constant at slightly less than 30 percent. During the same period, the proportion of office-based urological procedures in our study has declined from 11 percent to 8 percent. However, regardless of geographic area, these findings must be interpreted with caution. It is too early to determine the full effects of the new practice expense system’s payment differential, as it was not fully implemented until 2002. We were directed by BIPA to assess whether access to care for Medicare beneficiaries would be adversely affected if gastroenterological procedures conducted in physicians’ offices were no longer reimbursed by Medicare. If this occurred, patients in most of the nation would not likely experience access problems for the procedures in our study, given that relatively few procedures are performed in the office setting. However, some New York City metropolitan area Medicare patients might have initial difficulty obtaining care. In 2001, 28 percent, or about 54,000, of the gastroenterological procedures for Medicare patients in the New York City area were conducted in physicians’ offices, accounting for about 35 percent of these office procedures nationwide. According to CMS data, the New York City area has the largest proportion and total number of office-based gastroenterological procedures of any geographic area in the nation.
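As a quick check on the trend just described, the differential in each year is simply the average office practice expense payment minus the average facility payment. The short sketch below recomputes it from the average payments reported above; the dollar figures are taken from this report, and the code is only an illustrative calculation.

```python
# Recomputing the site-of-service differential (office minus facility)
# from the average practice expense payments reported above (dollars).
averages = {
    "gastroenterological": {"facility": {1998: 133, 2002: 59},
                            "office":   {1998: 143, 2002: 277}},
    "urological":          {"facility": {1998: 218, 2002: 83},
                            "office":   {1998: 218, 2002: 448}},
}

for specialty, payments in averages.items():
    for year in (1998, 2002):
        differential = payments["office"][year] - payments["facility"][year]
        print(f"{specialty}, {year}: ${payments['office'][year]} office "
              f"- ${payments['facility'][year]} facility = ${differential}")
```

The arithmetic shows the gastroenterological differential widening from about $10 in 1998 to about $218 in 2002, and the urological differential widening from $0 to about $365 over the same period.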
In our review of CMS data on the geographic dispersion of office procedures, we have been unable to locate other areas of the country that rely so heavily on the availability of office-based gastroenterological endoscopy. If Medicare coverage for the common endoscopic office procedures included in our study were withdrawn, medical facilities might not have the capacity to absorb the displaced patients in the short term, according to a New York State Department of Health official and Medicare carrier medical directors. However, in 1998, New York State eased requirements for approval of new ASCs, and, as a result, medical facility capacity has recently begun to increase in the state and in the New York City area. New York requires an approved certificate of need (CON) before a new ASC can be established. To obtain a CON, the need for the services of a proposed ASC must be demonstrated for specific geographic areas. According to a New York State Department of Health official, the rules for CON approval were relaxed significantly in March 1998, and nearly all applications are currently being approved. Since March 1998, there has been an increase of almost 200 percent in the number of ASCs in New York, including major increases in the New York City area. CON approvals can be obtained in the New York City area because most area hospitals are operating at capacity. In the future, if ASCs are equipped to offer the gastroenterological procedures included in our study, it is possible that they could accommodate displaced patients, if they are located in areas accessible to these patients. In contrast, only about 8 percent of the urological procedures in the New York City area were conducted in offices, so the elimination of Medicare reimbursement would likely have a minimal effect on the delivery of these procedures. Some critics of the Medicare site-of-service payment differential for endoscopic procedures have questioned the practice of conducting them as office procedures because of concerns about patient safety. They have suggested that the differential provides an incentive to the physician to provide endoscopic procedures in a setting—the physician’s office—that is less safe than another setting, such as a hospital or an ASC. But in our review of common gastroenterological and urological endoscopic procedures, we found no evidence that safety problems are greater for these procedures conducted in physicians’ offices. Furthermore, we found that the proportion of common office-based gastroenterological and urological endoscopic procedures included in our study has not increased as the site-of-service differential has been phased in. However, because the payment differential has been in effect only since 1999 and was not fully implemented until 2002, it is too early to tell whether it will affect the percentage of procedures conducted in the office in the future. If the common office-based endoscopic procedures included in our study were no longer reimbursed by Medicare, most areas of the country would not develop patient access problems. However, the initial effects in the New York City metropolitan area—where there is a predominance of office-based procedures—could be problematic, although the increase in ASCs in the New York City area could mitigate patient access problems in the future. CMS provided written comments on a draft of this report and concurred with the general findings of the study (see app. III). The agency provided technical comments, which we have addressed where appropriate.
We are sending this report to the CMS Administrator and interested congressional committees. We will also make copies available to other interested parties on request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7101. Major contributors to this report are listed in appendix IV. This appendix provides detailed information on the gastroenterological and urological procedures that we selected for our study. It also describes the methods that we used to address the study’s main objectives. We selected the 12 gastroenterological and 8 urological endoscopic procedures that are ordinarily performed in health care facilities and that we defined as being conducted at least 90 percent of the time in health care facilities and less than 10 percent of the time in offices. These gastroenterological and urological procedures are common types of endoscopy. These procedures have a practice expense site-of-service differential. The procedures included in our study accounted for about 30 percent of the total number of gastroenterological and urological endoscopic procedures conducted for Medicare beneficiaries in 2001; about 3.5 percent of the procedures in our study were conducted in offices. Many of these procedures require the use of sedation and entail some risks for patients. Our results are not generalizable to other endoscopic procedures. Tables 1 and 2 provide detailed information on the 20 procedures included in our study. To assess the safety of office-based endoscopy, we reviewed the scientific literature and interviewed physicians; four Medicare carrier medical directors in the New York City area, North Dakota, and Wyoming; a representative of the Physician Insurers Association of America, a trade association that represents the medical malpractice insurance industry; and representatives of two large New York malpractice insurance companies. We also interviewed interest group representatives, including members of the American College of Gastroenterology, American Society for Gastrointestinal Endoscopy, American College of Surgeons, American Gastroenterological Association, and American Urological Association. We also reviewed regulations and guidelines on physician office-based endoscopy in the nine states that have such regulations and guidelines. These states are California, Connecticut, Florida, Illinois, Mississippi, New Jersey, Rhode Island, South Carolina, and Texas. To assess whether the practice expense site-of-service payment differential acts as an incentive for physicians to conduct gastroenterological and urological endoscopic procedures in their offices, we analyzed data from the Centers for Medicare & Medicaid Services (CMS) using the Part B Extract and Summary System on the medical settings (office, inpatient hospital, outpatient hospital, and ambulatory surgical center) for relevant procedures for 1996 through 2001. For the gastroenterological and urological procedures in our analysis, we developed averages of practice expense reimbursements for health care facilities and offices for each year from 1998 through 2002.
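The settings analysis described above can be sketched roughly as follows. This is an illustration only: the record layout and the setting labels shown are assumptions made for the sketch, not the actual structure of the Part B Extract and Summary System files.

```python
# Rough sketch of tabulating the share of study procedures performed in
# each medical setting by year. The record layout below is a simplifying
# assumption for illustration, not the actual CMS file structure.
from collections import Counter, defaultdict

# Each record: (year, place of service) for one procedure in the study.
claims = [
    (1996, "office"), (1996, "ambulatory surgical center"),
    (1996, "outpatient hospital"), (2001, "office"),
    (2001, "ambulatory surgical center"), (2001, "inpatient hospital"),
    # ... one record per claim for the 20 procedures in the study ...
]

counts = defaultdict(Counter)
for year, setting in claims:
    counts[year][setting] += 1

for year in sorted(counts):
    total = sum(counts[year].values())
    for setting, n in sorted(counts[year].items()):
        print(f"{year} {setting}: {n / total:.1%} of procedures")
```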
To determine whether access to care by Medicare beneficiaries would be affected if endoscopic procedures in physicians’ offices were no longer reimbursed by Medicare, we analyzed CMS data (using the Part B Extract and Summary System) on office-based endoscopy for the nation as a whole and for the New York City area, which has the highest proportion of office-based procedures in the nation. We interviewed Medicare carrier medical directors in several locales with a range of population size and density, including the New York City area, North Dakota, and Wyoming. Tables 3 and 4 summarize the percentages of gastroenterological and urological endoscopic procedures in our sample performed in physicians’ offices, hospitals (both inpatient and outpatient), and ASCs for 1996 through 2001. In the data provided to us by CMS, there was another medical setting category (“other”) that captured a broad variety of medical settings, including nursing facilities, rural health clinics, and military treatment facilities. The proportion of procedures conducted in these settings was very low, about 1 percent or less. In 1999, some of the claims data were coded incorrectly, and the Health Care Financing Administration inaccurately assigned larger proportions to the “other” category (from 5 to 9 percent). Because of this confusion, we have eliminated the “other” category from the analysis for 1999 and the other years to ensure consistency in comparisons. Our reanalysis affects the results for 1999 because it is unclear where the claims categorized as “other” should have been categorized. However, because of the relatively few cases affected, we do not believe that this error affects our analyses or conclusions. Lawrence S. Solomon, Martin T. Gahart, Vanessa Taylor, Wayne Turowski, Roseanne Price, and Mike Thomas made major contributions to this report.
Every year millions of Americans covered by Medicare undergo endoscopic medical procedures in a variety of health care settings ranging from physicians' offices to hospitals. These invasive procedures call for the use of a lighted, flexible instrument and are used for screening and treating disease. Although some of these procedures can be performed while the patient is fully awake, most require some form of sedation and are usually provided in health care facilities such as hospitals or ambulatory surgical centers (ASCs). Some physician specialty societies have expressed concern that Medicare's reimbursement policies may offer a financial incentive to physicians to perform endoscopic procedures in their offices and that these procedures may be less safe because physicians' offices are less closely regulated and therefore there is less oversight of the quality of care. For the 20 procedures reviewed, there was no evidence to suggest that there is any difference in the level of safety of gastroenterological and urological endoscopic procedures performed on Medicare beneficiaries in either physicians' offices or health care facilities, such as hospitals and ASCs. There was also no evidence found to suggest that the resource-based site-of-service payment differential has caused physicians to conduct a greater proportion of gastroenterological or urological endoscopic procedures in their offices for Medicare beneficiaries. If Medicare coverage for the office procedures in the study were terminated, few access problems would occur in most of the country because physicians perform the vast majority of the procedures that were studied in health care facilities.
Our 15 cases show that individuals with histories of sexual misconduct were hired or retained by public and private schools as teachers, administrative and support staff, volunteers, and contractors. In at least 11 of these 15 cases, schools allowed offenders with histories of targeting children to obtain or continue employment. Even more disturbing, in at least 6 of the cases, offenders used their new positions as school employees or volunteers to abuse more children after they were hired. We identified the following factors contributing to these employment actions. Voluntary Separations and Positive Recommendations: In four of the cases we investigated, school officials allowed teachers who would have been subject to disciplinary action for sexual misconduct toward students to resign or otherwise separate from the school rather than face punishment. As a result, these teachers were able to truthfully inform prospective employers that they had never been fired from a teaching position and eventually were able to harm more children. In three of these four cases, school officials actually provided positive recommendations or reference letters for the teachers. We found that suspected abuse was not always reported to law enforcement or child protective services. Examples from our case studies include the following. An Ohio teacher was allowed to resign after a school investigation revealed he was having relationships with students that were “too much like boyfriend and girlfriend.” However, district officials felt that they still did not have enough evidence to fire the teacher. Subsequently, the school superintendent wrote him a letter of recommendation describing him as possessing “many qualities of an outstanding teacher,” which the offender used to apply to a second Ohio school district. The school did not provide us with any evidence that this suspected abuse was reported to law enforcement or child protective services. The teacher was later convicted for committing sexual battery on a sixth grade girl at the second Ohio school district. A Connecticut public school district compelled a teacher to resign after he accessed pornography on a school computer. Although the school district reported the abuse to child protective services, a district administrator told another Connecticut school seeking a reference that they would rehire the teacher “without reservation.” A second Connecticut school district also compelled him to resign, but his separation agreement specifically directed all inquiries from future employers to the superintendent, who agreed to provide a letter of recommendation. This school district also provided him with positive references. He was eventually hired by a third Connecticut school district, where he was convicted of sexually assaulting two students. A Louisiana private school district allowed a teacher’s contract to expire after his eighth grade students searched his name on the Internet and discovered he was a registered sex offender. The school did not pursue action or notify authorities, but did provide him with a letter of recommendation, which he used to apply to another Louisiana school that eventually hired him. There, he is alleged to have engaged in inappropriate conversations with a student using an instant messaging service. The school officials we interviewed cited a variety of reasons for allowing the resignations and providing the recommendations.
One administrator told us that it could cost up to $100,000 to fire a teacher, even with “a slam dunk case.” Other officials told us that, depending on the terms of a separation agreement, school administrators may not be able to provide anything less than a positive recommendation for an employee for fear of potential lawsuits. One expert we spoke with noted that it is often easier and faster for school administrators to remove a problem teacher informally in order to protect the children within their own district, especially when the administrator agrees to provide a positive recommendation to encourage a resignation. Nonexistent Preemployment Criminal History Checks: In 10 of our 15 cases, school officials did not perform preemployment criminal history checks on prospective employees, including teachers, administrative staff, maintenance workers, volunteers, and contractors. As a result, registered sex offenders were allowed to gain access to both public and private schools. In 7 of these 10 cases, the offenders had been convicted for offenses against children, and in at least 2 of the cases, they subsequently committed sexual crimes against children at the schools where they were working or volunteering. The documents we reviewed and the officials we spoke with indicated that the schools chose to forgo these checks for a variety of reasons, including that they felt that the process was too time-consuming and costly or that the positions in question would not require daily interaction with children. We found that although the cost of performing a criminal history check varies by state, generally a fingerprint-based national and state check ranges from $21 to $99, paid by either the applicant or the school, and takes as long as 6 weeks to complete. Some schools also told us that they do not perform criminal history checks for support staff, such as maintenance workers, until after they have reported to work. Examples from our case studies include the following. An Arizona public school hired a teacher who had been convicted in Florida for lewd and lascivious acts with a minor. The school chose not to conduct a criminal history check on the teacher because it was in a hurry to fill the position. Ultimately, the offender was arrested and convicted for sexually abusing a young female student at the school. A church-run private school in Ohio employed a maintenance worker who had been convicted in California for lewd and lascivious acts with two minors. The school told us it did not conduct a criminal history check because the maintenance worker was supposed to work primarily for the church that operated the school. However, officials told us that he had regularly worked at the school and frequently interacted with the children, going so far as to buy them meals. In New York, a public school employed a maintenance worker for 5 months until the results of a criminal history check conducted after he had already reported to work revealed that he had been convicted of raping a woman at knifepoint and was classified as a threat to public safety. A Florida public school allowed an individual who was convicted of having sex with an underage male to work as a volunteer coach without a criminal history check, even though school policy provided that volunteers would be subject to such checks. He was eventually arrested for having sexual contact with a student on one of the school’s sports teams.
As we previously noted, state laws with regard to employing sex offenders and conducting criminal history checks vary widely; see appendix I for an overview. Inadequate Criminal History Checks: Even if schools do perform criminal history checks on employees, they may not be adequate because they are not national, fingerprint-based, or recurring. For example: Schools in eight of our cases told us that they conducted state criminal history checks, which only reveal offenses committed by a prospective employee in the state where it is conducted. These schools were located in California, Ohio, New York, Michigan, and Louisiana. Although we did not identify any cases where conducting a state criminal history check resulted in hiring an employee who committed an offense in another state, such an outcome is highly likely. We identified one school in Michigan that used a name-based criminal history search to hire an administrative employee. This online search required officials to search for the precise name under which an individual’s criminal background is recorded. However, the officials used a common nickname instead of the applicant’s full name, so the search did not reveal his eight convictions, which included various sex offenses. A fingerprint criminal history check would likely have revealed these charges. None of the schools we spoke with indicated that they perform recurring criminal history checks. In fact, only a few states have laws requiring schools to conduct such recurring checks intended to identify individuals if they commit offenses while they are employed at schools. For example, we identified two cases where sex offenders were currently employed by California public schools, despite the fact that California has a “subsequent arrest notifications” process to track the criminal history of employees after they are hired. For example, one school never received a subsequent arrest notification when one of its maintenance workers was convicted of sexual battery in 1999. Since they conducted no recurring criminal history checks, school officials were unaware of the employee’s conviction until we notified them during the course of this investigation. In the other case, school officials received notice of an administrative employee’s 2000 arrest for the molestation of a minor, but did not terminate his employment because they believed they were not legally obligated to do so. These officials subsequently left the school district and did not notify current staff about the arrest. Current officials told us they did not have any reason to examine the offender’s employment file during their tenure. Consequently, these officials were not aware that they were employing a convicted sex offender until we notified them. A recurring background check would likely have alerted current staff to the offense. Red Flags on Employment Applications: Many of the schools we spoke with require job applicants to self-report basic information regarding their criminal background, but in three of our cases, schools failed to ask applicants about troubling responses. For example, an applicant for an Arizona teaching position answered yes when asked if he had been convicted of “a dangerous crime against children.” However, that school could provide no information to suggest that it followed up with the applicant or law enforcement about this admission before hiring the offender. The offender eventually was arrested for sexually abusing a young female student at the school. 
In the two remaining cases, applicants did not provide any response when asked about previous criminal history, and school officials could not provide evidence that they had inquired about the discrepancy or required the applicant to provide the information. For example, a Michigan public school hired an administrative employee who had multiple convictions for sexual offenses. On his application, the offender did not respond to a question about whether he had ever been convicted of a crime, though he answered every other question on the application. Similarly, a California charter school hired an administrative employee who failed to answer a question about previous felony convictions, even though he had been convicted of a felony sex offense against a minor. Table 1 provides a summary of the 15 cases we examined; a more detailed narrative on seven of the cases follows the table. Case 1: After being forced to resign from teaching at one Ohio public school system due to allegations of inappropriate relationships with female students, this offender received a letter of recommendation and was hired to teach at a second Ohio public school district, where he was later convicted of sexual battery against a student. In August 1993, the offender began working at the first Ohio public school district as a teacher and also coached several sports. During his fourth year of teaching, an investigation confirmed that the teacher was acting inappropriately toward multiple female students. According to the summary of this investigation, the superintendent found that multiple coworkers agreed that the teacher’s relationships with female students were “too much like boyfriend/girlfriend.” Coworkers also noted that the teacher was found in a room with the lights off, supposedly counseling a female student, on more than one occasion and that he would become overly infatuated with a single girl each year. Further interviews with students, parents, and the teacher himself corroborated these allegations. For example, parents of the female athletes he coached agreed there was generally too much touching of the players. One student noted that a number of girls dropped out of his class because of the way he behaved around female students. When confronted with allegations that some of his behavior was inappropriate, the teacher responded that “the girls loved what [he] was doing.” The school did not provide us with any evidence that this suspected abuse was reported to authorities. According to the current superintendent, district officials did not feel they had enough evidence to terminate the teacher and therefore gave him 1 year to find a new job. The teacher submitted his letter of resignation in April 1997, effective in July 1997. Despite having requested his resignation, the former district superintendent provided the teacher with a letter of recommendation, which noted that the teacher “exhibited many qualities of an outstanding teacher” and “has an outgoing personality which is an asset in this area of instruction.” In contrast, the former superintendent also sent a letter directly to the teacher saying that the teacher was at least guilty of “poor judgment” and “behavior unbecoming a professional educator.” Although we were unable to locate the former superintendent to ask why he wrote such conflicting letters, the current superintendent said he believed that the former superintendent feared that the teacher would file a lawsuit if he disclosed any incriminating information.
Two months after his resignation, the teacher used the letter of recommendation from the former superintendent to apply for a position as a teacher at a second Ohio public school district. The teacher worked at the second school for nearly a decade, until 2006, when he was indicted on two counts of sexual battery by the county prosecutor. This indictment alleged that, several years prior, he committed sexual battery on a sixth grade girl while in a position of authority and employed by a school. The detective who investigated the case said that the local police department found out about the sexual battery years after it occurred because the victim decided to come forward with the allegations. During the investigation, the police obtained undercover recordings where the teacher incriminated himself by describing sexual acts performed between him and the victim. According to the detective who investigated the sexual battery case, the second school district was never informed of any allegations of inappropriate conduct by the first school district. In May 2006, the teacher pled guilty to both counts of sexual battery and was sentenced to 2 years in state prison. Case 7: This administrative employee was convicted of misdemeanor sexual battery while employed at a California public school district. Even though the school district was notified of his arrest and conviction by police in 2000 and by GAO in July 2010, district officials decided to retain him as an employee. After we referred this case to the California Attorney General and the California Department of Education, the school district placed this individual on administrative leave. He has since resigned. In August 1998, this man was employed as an administrative employee in a California public school district. In February 2000, he molested a minor and the arresting officer charged the offender with a felony sex offense. In May 2000, a California court convicted him of misdemeanor sexual battery. The offender received a 120-day prison sentence and 3 years probation for the misdemeanor conviction and was required to register as a sex offender. Notes from the offender’s personnel file at the school district indicate that he may have served his prison time using personal leave, which was known to school officials. In March 2000, district officials were notified of the offender’s arrest by police through California’s subsequent arrest notification system, wherein the fingerprints a school employee submits during the hiring process are used to track any arrests occurring during his tenure as an employee. California law prohibits an individual convicted of an offense requiring registration as a sex offender from being hired or retained by a public school district. Once notified of the arrest, the offender’s lawyer, former district personnel officials, and a consulting lawyer for the district met to discuss whether the district could fire the offender. The district ultimately decided to retain him. According to the consulting lawyer for the district, the district believed that the offender’s continued employment was “within the letter and intent of California law.” In July 2010, we notified current district personnel officials that an administrative employee in their school system was in fact a registered sex offender. 
When we asked why he had been allowed to retain his position, a current district personnel official stated that no district officials were aware of his sex offender status, even though his employment file contained documentation on the arrest, charges, and conviction, as well as notes from the March 2000 discussion. The personnel officials explained to us that they did not have any reason to examine the offender’s employment file during their tenure. District officials stated that while all new applicants to the district are subject to a state criminal history check (including submission of fingerprints to the California Department of Justice), existing employees are not subject to recurring criminal history checks. Had a recurring criminal history check been performed, current personnel officials may have been made aware of the offender’s conviction. According to a current district personnel official, improved information sharing between former and current district personnel officials would have increased the likelihood of the school district taking appropriate action to safeguard students from the offender. In addition, while the offender had been registering his school employment with the local police in accordance with his sex offender registration requirements, police did not inform the school after the original subsequent arrest notification. After we referred this case to the California Attorney General and the California Department of Education, the school district placed this individual on administrative leave. He has since resigned. Case 8: This maintenance worker was convicted of misdemeanor sexual battery while employed by a California public school district. Since the district did not perform any recurring criminal history checks, district officials remained unaware of his conviction until we notified them. After this notification, district officials immediately confronted the offender, who resigned. In April 1985, this offender began employment in a California public school district as a maintenance worker. After he was hired, the offender groped a pregnant, blind woman and was subsequently convicted in California in 1999 for misdemeanor sexual battery. He received a 120-day prison sentence and 3 years probation, and was required to register as a sex offender. The offender later told school officials that he had served his prison sentence while on leave from the school district for a work-related injury. In 2009, the offender was promoted after over 2 decades of service in the same California public school district. On his promotion application, the offender falsely stated he was never convicted of a misdemeanor or felony. In July 2010, we notified school officials that this individual was currently employed in their district even though the California Education Code prohibits individuals convicted of sexual battery from retaining employment in California public schools. District officials then confronted the offender, who resigned immediately. Though the offender’s employment had continued for over a decade after his conviction, the officials told us that they were not aware of his status as a sex offender, despite California’s subsequent arrest notification process. The human resource official responsible for receiving subsequent arrest notifications confirmed that the offender had passed a fingerprint criminal history check when he was hired. 
Even though the offender’s fingerprints should have been on file, the district did not receive any notifications from California police about his conviction. In addition, district officials told us that school employees are not subject to recurring criminal history checks and confirmed that no documentation of the offender’s arrest or conviction existed in district records. District officials also told us that the offender had work-related injuries requiring absences from work. At the time of his resignation, the offender told school officials that one of those absences coincided with his prison term. We were unable to determine why the subsequent arrest notification process failed. However, a police officer involved with the maintenance worker stated that he had registered as a sex offender in accordance with annual requirements since his conviction. The officer, who just began working with sex offenders in 2010, noted that the offender correctly reported to law enforcement that he was currently employed by the California public school district. However, the officer stated that he did not question the offender further on his employment during their meetings even though California prohibits sex offenders from being employed at schools. The officer stated that he had no reason to believe the offender was inappropriately employed because the offender had been working in the California school district during each of the 12 years he had registered as a sex offender. Case 9: After being compelled to resign from teaching in two Connecticut school districts—due to accessing pornography on school computers at one district and for “performance reasons” at the other—this offender received positive recommendations from both districts and was hired to teach at another Connecticut school district, where he was convicted of sexual assault against two students. In early December 2003, a Connecticut public school district compelled a teacher to resign in the middle of his second year of teaching for accessing pornography on school computers. In mid-November, the school district had placed the teacher on paid administrative leave pending an investigation into allegations that his computer was used to access pornographic Web sites. According to one district official, the teacher claimed that he had allowed students to access his computer account, and that the students had accessed the pornographic Web sites. The school reported the potential child abuse to state authorities for investigation, but before taking further disciplinary action, the school district reached a separation agreement with the teacher. This agreement was signed by the school district, the teacher, and the local teachers’ union, and required the teacher to unconditionally resign. The agreement also required the teacher to waive all rights to file any claim against the school district related to his employment or separation from employment. The agreement did not contain a confidentiality or nondisclosure clause. The teacher submitted a letter of resignation stating that his separation was for “personal reasons,” effective December 2003. Beginning in January 2004, the teacher worked as a substitute teacher in a nearby school district, where he worked for the remainder of the school year, until obtaining a permanent position as a teacher in a third Connecticut school district in July. 
The application for teaching in this school district required the teacher to provide his employment background with employment dates, but did not ask for reasons for leaving any previous jobs. Although the school district did not require any references, the teacher submitted three letters of recommendation. One of those recommendations came from an administrator of the district which had forced the teacher’s resignation in December 2003 and was dated 1 week after the separation agreement was finalized. When we asked the district’s legal counsel why the administrator provided a positive recommendation, he told us that the administrator claimed that she was unaware of the reason for the teacher’s resignation and that she was only providing a positive recommendation regarding his classroom performance. In March 2007, the teacher again submitted a midyear resignation letter, although he taught through the end of that school year. According to one school district official involved in the process, the teacher’s resignation was requested for “performance reasons.” The school district and the teacher signed a confidential memorandum of understanding outlining the terms of the teacher’s resignation: the teacher would submit an irrevocable letter of resignation effective at the end of the school year stating “personal reasons.” The memorandum of understanding further stipulated that all requests for information regarding the teacher would be directed to the superintendent and that the superintendent alone would be allowed to provide references for the teacher. Despite the compelled departure from two school districts, in July 2007 the teacher received positive recommendations from both school districts when he applied for and obtained a similar teaching position at a high school in a fourth Connecticut school district. On the application that the teacher submitted for this job, when asked whether he had ever been fired by an employer or told he would be fired if he did not resign, the teacher responded “No.” As requested, the teacher submitted three references, all of which were from the most recent school district where he had worked. School officials told us that because the three submitted references only covered one of the two school districts listed as prior employers in the teacher’s application, they contacted the other district and spoke to an administrator to receive an additional reference. All four references—including the administrator from the district which forced the teacher’s resignation for accessing pornography—gave positive reviews of the teacher and stated that they would rehire him without reservation. According to one school official involved in the hiring process, the principal of the school from which the teacher was forced to resign for accessing pornography only stated that the teacher left his job because of “family issues and personal problems.” The same official told us that had the school known about the teacher’s forced resignations, it would have hired another candidate. In December 2008, during his second year at his new position in the fourth Connecticut school district, the teacher again resigned in the middle of the school year for “personal reasons,” this time when confronted by school administrators with allegations of having an inappropriate relationship with a 17-year-old student. At the time of his resignation, the teacher admitted to kissing the student.
According to the superintendent, the district intended to suspend the teacher but was preempted by the teacher’s immediate voluntary resignation. The superintendent did request that the state Board of Education revoke the teacher’s certification. A subsequent investigation conducted by the police and the Department of Child and Family Services revealed that the teacher had intimate relations with two students, including sexual intercourse in the school’s auditorium. In 2009, he pled guilty to two counts of second degree sexual assault, was sentenced to 7 years in prison and 20 years probation, and required to register as a sex offender. Case 11: Despite allegedly engaging in a pattern of repeated sexual abuse of underage male students, this offender taught at several schools in Maryland and Virginia before recently pleading guilty to sexually abusing an underage student at a Virginia public school at which he taught. He is currently under investigation by state and federal authorities for numerous offenses dating back to 1978 and was indicted by a grand jury on multiple federal child pornography charges. The offender’s pattern of abuse against students began in the early 1990s. At that time, he was teaching English to students in Japan. In 1994, the offender accompanied an underage Japanese student on a trip to the United States for several weeks. The offender allegedly provided the student with sufficient alcohol to cause unconsciousness and then sexually abused him, as evidenced by video recordings and photographs kept by the offender. In 1999, after returning to the United States, the offender hosted an underage Danish exchange student who, during his stay, found pictures in the offender’s possession which indicated that the offender had abused him. According to the student, after a confrontation, the offender apologized and burned the photos, but investigators recently found copies of the photos remaining in the offender’s possession. At the time, the offender confided in someone regarding this incident, who subsequently contacted police. At the offender’s urging, the exchange student told police that nothing improper had happened. Based on the student’s statements, police discontinued the investigation. In November 2000, a public school district in Maryland hired the offender as a teacher. In 2002, the parents of a district student contacted the offender directly to request that he stop calling their son because they felt the contacts were inappropriate. While the parents did not contact the school district directly, rumors about inappropriate relationships reached the school board and the alleged inappropriate contact was a discussion point as the district was deciding whether to keep the teacher or quietly allow his contract to expire. We do not know whether school officials contacted local law enforcement about their suspicions. In June 2003, the offender’s contract with the district was allowed to expire. The district also banned the offender from district property. In September 2003, the offender began hosting an underage German exchange student. The foreign exchange company received complaints of threatening behavior about the offender from the exchange student and removed the student from the offender’s home immediately, with the help of local police. In May 2004, the student sought a restraining order against the offender, but the judge stated that the harassment described was not grounds for a restraining order and denied the request. 
The offender is alleged to have sexually abused this exchange student, again evidenced by videos, photographs, and other mementos kept by the offender. In August 2007, the offender began teaching in a Virginia public school district, using multiple positive letters of recommendation as references. In September 2008, a concerned parent confronted the offender about inappropriate conversations with two underage boys (her son and a friend) through a networking Web site. The parent also provided copies of the inappropriate conversations to the school’s administration. The school’s principal spoke to the offender and told him not to have any contact with the two boys. The principal, in consultation with lawyers, a school human resource officer, and local police, determined that since no laws had been broken, the school had no grounds to dismiss the offender, despite the evidence provided by the parent. In February 2010, an underage student alleged that the offender provided him with alcohol and engaged in inappropriate sexual contact. The offender was arrested in February 2010 on felony charges for sex offenses involving a minor. He pled guilty to these charges and was sentenced to 1 year in prison in October 2010. During the investigation of this case, law enforcement officials discovered extensive evidence of sexual abuse of numerous unidentified underage males, including handwritten recollections, homemade videos, and photographs. He has been charged by federal authorities with numerous counts of possession of child pornography and transporting it across state lines. In addition, North Carolina police are currently investigating the alleged molestation of a 10-year-old disabled boy at the offender’s family home in 1978. Case 12: This offender was convicted of sexually assaulting a minor in Florida and subsequently worked as a teacher in a school in Arizona for 6 months without having his criminal history or educational qualifications verified by the school district. In March 1994, the offender was convicted in Florida of lewd and lascivious assault against a victim under the age of 16. The offender was given probation but was imprisoned for a violation from July 1996 to June 1999. Once released, he was required to register as a sex offender permanently. He moved to Arizona in 2001. In August 2001, an Arizona school district hired the offender as a teacher. On his application for the position, the offender was asked several questions regarding his criminal history, and he correctly responded that he had been convicted of “a dangerous crime against children,” but failed to provide the complete details of his conviction, as the application required. When we asked the school how it had responded to this disclosure, officials were unable to provide any information to suggest that they had independently verified any of the offender’s responses or requested the missing details of his conviction. In addition, his resume listed an employment history including positions as a rental car worker, a lifeguard, and an athletic trainer, but no history of classroom instruction. The teaching position he held also required a teaching certificate, but there is no documentation from the school to show that the offender received or submitted such a certificate. In addition to failing to verify his educational requirements, the school district neglected to conduct a criminal history check on the offender. Arizona requires criminal history checks for all public school employees.
To complete the check, the applicant must turn in his/her fingerprints to the Arizona Department of Public Safety (DPS), which performs a state and federal criminal history check. Once the Arizona DPS completes the criminal history check and verifies that the applicant is suitable for school work, a fingerprint clearance card is issued, which the applicant must then send to the Arizona Department of Education. According to school officials, this process can take up to 90 days. In this case, the school district circumvented this requirement because it was anxious to fill the position before school started. Instead of treating the offender as an employee applying for a teaching position, the school district treated him as though he were applying for a nonteaching position, such as a food service worker or a bus driver. The school district performed a verbal reference check, and allowed the offender to provide a fingerprint clearance card at a later date. The district’s verbal reference check involved contacting employment references, provided by the applicant, and asking questions such as, “Has this applicant ever sexually abused a minor?” In this case, the offender provided references who gave glowing recommendations. As requested by the school, the offender eventually sent fingerprints to the Arizona DPS, but the Arizona DPS sent back a letter several months later stating that the fingerprint criminal history check could not be completed because the submitted fingerprints were smudged. A message was placed in the offender’s personnel file noting the need for him to complete the fingerprint criminal history check, but there was no indication of any additional follow-up by school officials on the subject. In January 2002, the offender was arrested for sexually abusing a young female student between December 2001 and January 2002. The offender was alleged to have touched the girl at the school and to have sent the girl sexually explicit letters. Officers investigating the case found multiple letters between the offender and the girl containing mature sexual content, some in a gym bag the offender was carrying at the time of his arrest. Police also found a home video recording of girls changing into bathing suits and walking around naked in a restroom. The offender could be heard adjusting the camera and talking on this video, which the Arizona police suspected was shot at a pool where the offender had previously worked as a manager. The offender was found guilty of felony sexual abuse and luring a minor for sexual exploitation in 2002. He was sentenced to 4 years in prison, as well as 15 years probation. In 2010, he was convicted for failing to register as a sex offender as required. He was sentenced to 12 years in prison, and is currently incarcerated. Case 13: In June 1998, this man was convicted for the second time for misdemeanor indecent exposure and was required to register as a sex offender. He was a teacher in Texas at the time, and remained there until May 2001, when his teaching certification was permanently revoked for engaging in a pattern of sexually inappropriate behavior. At least two schools in Louisiana, one private and one public, subsequently hired him without conducting criminal history checks. He continued to teach at the public school until October 2007, when he voluntarily resigned after being accused of having inappropriate sexual conversations with students. 
After losing his Texas teaching license in 2001, the offender taught in Mexico temporarily and then moved to Louisiana. According to his resume, he worked at a series of Louisiana public and private schools from August 2002 until June 2006; we were unable to verify the circumstances leading to this employment. In June 2006, he was hired by a high school in a Louisiana private school district. The principal mistakenly assumed he had received a Louisiana criminal history check from a prior Louisiana school, and, desperate to hire teachers in the aftermath of Hurricane Katrina, allowed the offender to report to work without conducting a criminal history check. The principal did, however, contact a Louisiana private school that was listed as a previous employer for an oral reference, and the offender was highly recommended. He worked for 1 year on a year-to-year contract before eighth-grade students identified him as a sex offender after conducting an Internet search for photos of him for a school event. His contract was allowed to expire, but no disciplinary actions were taken against him, and we found no evidence that the school contacted law enforcement to report the offender's presence in the school. After the expiration of this contract, the principal contacted the private school that had provided a positive reference for the offender to determine why she had not been provided with information on the offender's past. The private school officials she spoke with stated that the specific individual who had provided the reference was a close friend of the offender, and that no one else at the private school would have provided a positive reference. The day before the beginning of the 2007-2008 school year, a principal from a Louisiana public high school hired the offender to begin teaching immediately, based on a resume appearing on an online job search Web site for prospective teachers. Because the hire occurred so close to the beginning of the school year, school officials told us they did not complete state criminal history and reference checks before the offender reported for duty. School officials told us that, at that time, completing the state fingerprint background check generally took between 3 and 6 months. In his application to work for the school, the offender falsely stated that he had not been convicted of a criminal offense and that he held or was eligible for a teaching certificate in Texas. The offender further indicated that he was in the process of applying for a Louisiana teaching certificate; however, the Louisiana teacher certification database holds no record of the offender. He also provided a letter of recommendation from the principal of the private school that had allowed his contract to expire in 2007. When we spoke to the principal regarding this recommendation, she told us that she had never personally provided a positive reference for the offender, but that a subordinate may have drafted the letter in her absence. A few months into the school year, a parent of one of the students provided the principal with copies of inappropriate sexual conversations between the offender and a student over an instant messaging service. The school district began investigating the allegations and became aware of the offender's criminal history. The superintendent of the district told us that he intended to take action against the teacher, but was preempted by the teacher's immediate voluntary resignation.
Police were notified of the allegations, which resulted in November 2007 charges of indecent behavior with a minor and failing to fulfill sex offender registration requirements. A warrant is currently out for his arrest on these charges.

We found no federal laws regulating the employment of sex offenders in public or private schools and widely divergent laws at the state level, especially with regard to requirements and methods for conducting criminal history checks on employees. For a summary of laws related to the hire and retention of sex offenders by schools in all 50 states and the District of Columbia, see appendix I. Federal law: The Adam Walsh Child Protection and Safety Act of 2006 requires the Department of Justice to conduct a criminal history check for employees who work around children at the request of a public or private school. This check allows for a fingerprint-based criminal history search of the Federal Bureau of Investigation's National Crime Information Center database. However, federal law does not require schools to use this service. In addition, we found no federal laws that restrict the employment of sex offenders in public or private schools or that mandate criminal history checks for employees at these schools. Prohibitions on working in or being present at schools: A majority of states have enacted laws to restrict sex offenders from having access to schools, but these laws may apply only to select types of schools (e.g., public schools) in certain situations. Eighteen states have broad restrictions prohibiting registered sex offenders from entering, or being a specified distance from, all schools. Seventeen states have some type of statute that specifically prohibits registered sex offenders from working or volunteering at or near schools. However, in some states, such prohibitions may apply only to individuals who have been convicted of a felony. Criminal history check requirements for public and private school employees: These requirements vary widely. For example, 2 states do not appear to have any laws requiring criminal history checks for either public or private school employees. Twenty-five states and the District of Columbia require criminal history checks for all public school employees. Six states require criminal history checks for all public school employees and conditional checks for private school employees, often tied to such things as accreditation or acceptance of state scholarship funds. Seven states require that both public and private school employees undergo criminal history checks. The remaining 10 states require checks only for select employees in certain situations. For example: Four states require criminal history checks for licensed teachers but make no reference to other types of employees. Four states require checks for employees only if they have unsupervised or direct contact with children. One state requires criminal history checks for certified teachers and administrators only if they have not been state residents for the previous 5 years. One state requires individual public school districts to have a policy that determines which employees are subject to criminal history checks. Criminal history check requirements for contractors and volunteers: Only five states require criminal history checks for all contractors at both public and private schools. Seven states require criminal history checks for all contractors at public schools only.
Other states require criminal history checks for contractors only under select circumstances, typically if they have direct access to children. Only eight states require criminal history checks for those volunteering with children. Method of conducting criminal history checks: As shown in appendix I, the vast majority of states specify that teacher and school employee criminal history checks are to be fingerprint-based and must check both national and state databases. However, not all states specifically require that criminal history checks be completed prior to an employee's start date. In addition, two states limit the check to state databases, while another state limits the check to state databases if the employee or applicant has been a state resident for the prior 2 years. In addition, some states specify that criminal history checks must be repeated at specified intervals, and some states rely on a system of subsequent conviction or arrest notifications; however, such systems often catch only subsequent convictions or arrests in the same state and may miss such events that occur in other states. Termination of employment, revocation of license, or refusal to hire: Some states prohibit public schools from employing an individual convicted of a violent or sexual felony, while others have a broader prohibition that applies to both public and private schools, as well as to contractors and employees. Other states apply such mandatory disqualification criteria only to holders of teaching licenses or certificates. Requirements to report suspected child abuse: All 50 states and the District of Columbia have statutes that mandate that teachers and other school officials report suspected child abuse, including sexual abuse, to law enforcement, child protection agencies, or both. Typically, these statutes require the teacher or official to have a reasonable suspicion that abuse occurred before making such a report. Although these statutes were developed with the goal of preventing abuse by parents or guardians, they also cover abuse by a teacher or school employee. Furthermore, several states have adopted additional statutory precautions to ensure that abuse allegations against school employees are not suppressed by school officials; however, at least half of the states do not have any such additional statutory precautions. These statutes vary widely across the states. For example, some require that superintendents report to the state education department or licensing board the resignation or dismissal of a licensed educator following reports of alleged abuse; that superintendents report to law enforcement crimes, including sexual abuse, committed on school property; or that prosecutors report to the state education department or licensing board felony convictions of licensed educators.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies of this report to relevant congressional committees, the Department of Education, and the Department of Justice. In addition, this report will be available at no charge on GAO's Web site at www.gao.gov. For further information about this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Public and private schools are required to conduct fingerprint- based FBI and state criminal history checks of employees with unsupervised access to children. Individuals convicted of crimes involving the physical or mental injury, sexual abuse or exploitation, or maltreatment of a child are deemed unsuitable for employment. All teachers and school officials must report known or suspected cases of child abuse or neglect to a duly constituted authority. If the report is received by the Department of Human Resources, it must report it to law enforcement. Teachers are required to undergo a fingerprint-based national criminal history check as part of the certification process. School bus drivers must undergo a fingerprint-based national criminal history check. Individuals with a sex offense conviction may not hold a teacher certificate or a school bus driver license. All public and private school teachers and employees are required to report to the Department of Health and Social Services when they suspect that a child has suffered abuse or neglect. Law enforcement who receive a report of abuse by a teacher are required to report the fact to the Professional Teaching Practices Commission. Fingerprint-based federal and state criminal history checks are required of all certified teachers, public school employees, public school volunteers with unsupervised access to children, and employees of public school contractors and vendors. Teachers convicted of sex offenses are subject to mandatory permanent revocation of their teaching certificate. Licensed educators and school boards must report all reasonable allegations of misconduct by a licensed educator involving minors to the AZ Department of Education. All school personnel who reasonably believe that a child has been the victim of abuse must report to law enforcement or child protective services. Teachers are required to undergo fingerprint-based national and state criminal history checks as part of the licensing and renewal processes. School districts must conduct a fingerprint-based national and state criminal history check of all nonlicensed employees. Individuals with a felony or sex offense conviction may not hold a teacher license or be employed by a public school. Public school superintendents must report to the Board of Education any employee who is convicted of a felony or certain misdemeanors or who is the subject of a substantiated report in the Child Maltreatment Central Registry. School teachers and officials must notify the Child Abuse Hotline if they have reasonable cause to suspect a child has been subject to abuse. Fingerprint-based national and state criminal history checks are required for all certified teachers, public and private school employees, and public and private school contract employees who may have contact with pupils. Public schools may not employ persons convicted of sex offenses or violent or serious felonies. Individuals with a sex offense or violent or serious felony conviction may not hold a teacher certificate. Private schools must notify all parents before hiring a convicted sex offender. If a licensed educator is dismissed, suspended, placed on administrative leave, or resigns as a result of or while an allegation of misconduct is pending, the school must report the allegation to the Committee on Credentials. All public and private school employees must notify law enforcement or the county welfare agency if they know or reasonably suspect a child has been the victim of abuse or neglect. 
Fingerprint-based national and state criminal history checks and previous employer checks are required of all public school teachers. Fingerprint-based national and state criminal history checks are required of all public school employees. Private schools are authorized to conduct fingerprint-based national and state criminal history checks of their employees. Public and charter schools may not employ anyone with a felony or sexual offense conviction. If a public school employee is dismissed or resigns as a result of an allegation of unlawful behavior involving a child, the school district must notify the CO Department of Education. Any public or private school employee who has reasonable cause to know or suspect that a child has been subjected to abuse or neglect must notify law enforcement or the county human services department. Fingerprint-based national and state criminal history checks are required of all public school employees. Private schools are authorized to conduct fingerprint- based national and state criminal history checks. A conviction for child abuse or neglect or other selected serious felonies is grounds for revocation of a teaching certificate. Prosecutors must notify the Commissioner of Education if a licensed educator is convicted of a felony. School teachers and officials who have reasonable cause to suspect a child has been abused or neglected must notify the Commissioner of Children and Families or law enforcement. Fingerprint-based FBI and state criminal history checks are required of all public school employees, school bus drivers, and public school student teachers. A felony or child-victim conviction disqualifies an applicant from public school employment or a school bus license. Public and charter schools must report to the DE Secretary of Education when a licensed educator is dismissed, resigns, or retires following allegations of misconduct. Any school employee who knows or in good faith suspects child abuse or neglect shall notify the Department of Services for Children, Youth and Their Families. Periodic fingerprint-based local and FBI criminal history checks are required of all employees and volunteers in city organizations that provide services to children. None located. School teachers and officials who know or have reasonable cause to suspect child abuse or neglect must notify law enforcement or the Child and Family Services Agency. Periodic (every 5 years) fingerprint-based FBI and state criminal history checks are required of all public school teachers and employees and contractual employees who have direct contact with students or are permitted access to school grounds when students are present. Private school owners or operators are required to undergo a fingerprint-based criminal history check and are authorized to conduct such a check for their employees; if the private school accepts state scholarships, such checks are mandatory. Individuals with a felony conviction or a misdemeanor conviction involving a child are prohibited from employment in a public school or a private school that accepts state scholarships if they will have contact with children. Owners or operators of private schools may not have been convicted of a felony involving moral turpitude. Public, charter schools, and private schools that accept state scholarships must notify the FL Department of Education after receipt of allegations of misconduct against a licensed educator. 
School personnel must notify the state hotline if they know or have reasonable cause to suspect child abuse. Fingerprint-based state and federal criminal history checks are required of all certified teachers and all public school employees. None located. Superintendents must report to the county board of education when an educator commits a sexual offense. School teachers and administrators with reasonable cause to believe that a child is a victim of abuse must notify a child welfare agency. Fingerprint-based FBI and state criminal history checks are required of all public and private school employees whose position places them in close proximity to children. Conviction of a sexual offense is grounds for permanent revocation of a teaching license. Employees or officers of any public or private school must notify law enforcement or the HI Department of Human Services if they have reason to believe that child abuse or neglect has occurred. Fingerprint-based FBI and state criminal background and sex offender registry checks are required of all certified teachers, and public school employees with unsupervised contact with children. Private schools are authorized to conduct such checks of their employees. Convicted felons may not receive a teaching certificate. School districts must notify the ID State Department of Education when an educator is dismissed or resigns for a reason that could constitute grounds for certificate revocation. Teachers with reason to believe that a child has been abused or neglected must notify law enforcement or the ID Department of Health & Welfare. Fingerprint-based FBI and state criminal background and sex offender registry checks are required of all public school employees and employees of contractors (including school bus operators). In order to obtain state recognition, a private school must conduct such checks on its employees. Felons convicted of sexual or physical abuse of a minor may not be employed by a public school. Felons are ineligible for a school bus license. Superintendents must notify the State Superintendent of Education when any licensed educator is dismissed or resigns as a result of child abuse or neglect. School administrators and employees must notify the Department of Children & Family Services if they have reasonable cause to believe a child is abused or neglected. Fingerprint-based FBI and state criminal background and sex offender registry checks are required of all public, charter, and accredited private school employees and contractor employees. Schools may not employ or contract with individuals convicted of violent or sexual felonies. Public and private school employees who have reason to believe that a child has been abused or neglected must report the incident to law enforcement or the Department of Child Services. Fingerprint-based FBI and state criminal background and sex offender registry checks are required of all public school teachers. Conviction of a crime related to the teaching profession is grounds for revocation of a teaching license. Conviction of sex with a minor disqualifies an individual from holding a school bus license. Public and private schools must report to the state education board if a licensed educator is terminated or resigns as a result of alleged or actual misconduct. Licensed educators must report child abuse to the IA Department of Human Services. None located. Conviction of a violent, sexual, or child-victim offense disqualifies a teacher from receiving or renewing a teaching certificate. 
School employees who have reason to suspect that a child has been harmed as a result of physical, mental, or emotional abuse or neglect or sexual abuse must notify the Department of Social & Rehabilitation Services. Fingerprint-based FBI and state criminal history checks are required of all public school teachers, student teachers, and employees. Public schools are authorized to conduct such a check of contractor employees, volunteers, and visitors. Private schools are authorized to conduct such checks of their employees. Public schools may not employ individuals convicted of a sex offense felony. Principals must report all sexual offenses that occur on school property to law enforcement. School personnel who have reasonable cause to believe that a child is neglected or abused must notify law enforcement. Fingerprint-based FBI and state criminal history checks are required of all public school employees and contractor employees. Public and private schools may not employ or contract with an entity that employs individuals convicted of a violent or sexual felony if such individuals will have contact with students. Public or private school personnel who have cause to believe that a child’s physical or mental health or welfare is endangered as a result of abuse or neglect must notify law enforcement or child protective services. Fingerprint-based national and state criminal history checks are required of all public school employees. None located. Teachers and school officials who know or have reasonable cause to suspect that a child has been abused or neglected must notify the district attorney. Fingerprint-based national and state criminal history checks are required of all public and private school employees. Public or private schools may not employ an individual convicted of a violent felony or of child sexual abuse. All educators who have reason to believe that a child has been subjected to abuse must notify law enforcement or the Department of Human Resources. Criminal history checks are required every 3 years of all public and accredited private school employees, volunteers, bus drivers, and contractor employees. None located. Public and private school teachers and administrators who have reasonable cause to believe that a child is suffering physical or emotional injury resulting from abuse must notify the Department of Social Services. Fingerprint-based FBI and state criminal background and previous employer checks are required of all public and private school employees, contractor employees, and bus drivers. Convicted sex offenders may not be employed in public or private schools. School teachers and administrators must report suspected child abuse or neglect to the Department of Human Services. State criminal history checks are required of all public employees and volunteers. Public schools are authorized to conduct such checks on independent contractors. School bus licenses require a criminal history check. Public schools may not employ an individual with a conviction for a violent or sexual felony. School boards must report to the state Board of Teaching when a teacher or administrator is dismissed or resigns as a result of commission of a felony or immoral conduct. Educational professionals who know or have reason to believe a child is being neglected or physically or sexually abused must notify law enforcement or the local welfare agency. Fingerprint-based FBI and state criminal history checks are required of all public school employees and substitute teachers. 
Public schools may not employ individuals convicted of a violent or sexual felony or of child abuse. Superintendents must notify law enforcement of all crimes that occur on school property. Public and private school employees who have reasonable cause to suspect a child is abused or neglected must notify the Department of Human Services. Fingerprint-based FBI and state criminal history checks are required of all certified teachers and public school employees and bus drivers. None located. Principals must report all sexual assaults to law enforcement. School teachers and officials who have reasonable cause to suspect that a child has been subjected to abuse or neglect must notify the Department of Social Services. None located. None located. School districts must report the dismissal or resignation of teachers and administrators resulting from a felony conviction or immoral conduct to the state Superintendent of Public Instruction. School employees who have reasonable cause to suspect child abuse or neglect must notify the Department of Public Health and Human Services. Fingerprint-based FBI and state criminal history checks are required of all certified teachers and administrators if they have not been state residents for the previous 5 years. None located. School employees who have reasonable cause to suspect a child has been subject to abuse or neglect must notify law enforcement or the NE Department of Health & Human Services. Fingerprint-based FBI and state criminal history checks are required of all licensed teachers and public and charter school employees. Convicted felons or offenders with convictions involving moral turpitude are ineligible for a teaching license. Teachers who know or have reasonable cause to believe that a child has been abused or neglected must notify law enforcement or a child welfare agency. Fingerprint-based FBI and state criminal history checks are required of all public and charter school employees, volunteers, and contractor employees. Private schools are authorized to conduct such checks of their employees. Criminal history checks are required of all applicants for a school bus license. Public schools may not employ felons convicted of murder, sexual assault, child pornography, or kidnapping. School teachers and officials who suspect child abuse or neglect must notify the NH Department of Health & Human Services. Fingerprint-based FBI and state criminal history checks are required of all public school employees and selected contractor employees (including school bus drivers). Public schools are also authorized to conduct such checks on volunteers with regular contact with children. Private schools are authorized to conduct such background checks of their employees and contractor employees. Public schools may not employ or use as contractor employees persons convicted of a felony in the first or second degree. Any person with reasonable cause to believe that a child has been subjected to child abuse must notify the Division of Youth and Family Services. Fingerprint-based FBI criminal history checks are required of all licensed teachers and public school employees and contractor employees. None located. Superintendents must notify the NM Department of Education when a licensed educator is dismissed or resigns resulting from allegations of misconduct. School teachers and officials who know or reasonably suspect child abuse or neglect must notify law enforcement or the Children, Youth & Families Department. 
Fingerprint-based FBI and state criminal history checks and background checks are required of all certified teachers and public school employees. Private schools are authorized to conduct such background checks of their employees and volunteers. Public schools may not employ registered sex offenders. Felons convicted of certain violent or sexual offenses may not hold a school bus driver license. School administrators or superintendents must notify law enforcement of allegations of child abuse in an educational setting. School personnel with reasonable cause to suspect child abuse must notify the Office of Children & Family Services. Public school districts are required to have a policy determining which employees and contract employees are subject to fingerprint-based FBI and state criminal history checks. None located. Principals must report any sexual offenses occurring on school property to law enforcement, and school boards must notify the parents of such victims. Anyone who has cause to suspect child abuse or neglect must notify the Department of Social Services. Fingerprint-based national and state criminal history checks are required of all licensed teachers, school counselors, and public and private school employees with unsupervised contact with children. None located. School teachers and administrators with reasonable cause to suspect that a child is abused or neglected must notify the Department of Human Services. Fingerprint-based FBI and state criminal history checks are required of all licensed educators, preschool employees, public school contractor employees with unsupervised access to children, and school bus license holders. Such checks are required of all public or charter school employees every 5 years; however, the FBI check is not required if the employee has been a resident of Ohio for the past 5 years. Public and charter schools, school bus operators, and preschools may not employ a person convicted of a violent or sexual offense. Conviction of a felony is grounds for revocation of an educator license. School teachers and employees who have reasonable cause to suspect child abuse or neglect must notify law enforcement or a public children services agency. Fingerprint-based national and state criminal history checks are required of all licensed teachers. Public school districts are required to implement a criminal history check policy for all employees. None located. Any person who has reason to believe a child is a victim of abuse or neglect must notify the state hotline. Fingerprint-based national and state criminal history checks are required of all licensed teachers. Public schools are required to conduct such checks of their employees and contractors. Private schools are authorized to conduct such checks. Public and private schools are authorized to conduct state criminal history checks on their volunteers with direct, unsupervised contact with children. Felons convicted of certain violent or sexual offenses may not hold a teaching license. School employees who have reasonable cause to suspect child abuse must notify law enforcement or the Department of Human Services. School boards must adopt policies requiring employees to report such abuse. State criminal history checks are required of all public and private school employees and contractor employees who have direct contact with children. If the individual has not been a state resident for at least 2 years prior, then a fingerprint-based FBI criminal history check is required.
Public schools are required to conduct an abuse registry check on all new employees. Public and private schools may not employ felons convicted of certain violent or sexual offenses within the past 5 years or persons listed as a perpetrator of child abuse. Superintendents must report information which constitutes reasonable cause to believe that a licensed educator has committed sexual abuse to the PA Department of Education. School administrators and teachers who have reasonable cause to suspect child abuse must notify the Department of Public Welfare. Fingerprint-based national and state criminal history checks are required of all public and private school employees. None located. Any person who has reasonable cause to suspect child abuse must notify the Department of Children, Youth & Families. State name-based criminal history checks and national sex offender registry checks are required of all public school employees, volunteers, and school bus drivers. Public schools may not hire anyone convicted of a violent crime. Any crimes committed in a school must be reported to law enforcement. Teachers and principals who have reason to believe a child has been abused or neglected must notify law enforcement. Fingerprint-based national and state criminal history checks are required of all public school employees. Public schools may not hire or contract with felons convicted of violent, drug, or sexual offenses. Public and private school teachers and officials who have reasonable cause to suspect child abuse or neglect must notify their principal or superintendent who must notify law enforcement, a state’s attorney, or the Department of Social Services. Fingerprint-based national and state criminal history checks are required of all public school teachers and employees or contractual employees in positions requiring proximity to children. Criminal history checks are required of all school bus license holders. Felons convicted of certain violent or sexual offenses may not hold a teaching license. Sexual offenders may not come into direct contact with children. School personnel who have reasonable cause to suspect child abuse must report it to a juvenile judge, law enforcement, or the Department of Children’s Services. If the abuse occurred on school grounds, then the parents of the victim must also be given notice. National criminal history checks are required of all certified educators, public school employees and contractor employees (with direct contact with students), student teachers, volunteers, substitute teachers, and bus drivers. Private schools are authorized to conduct such checks of their employees, volunteers, and contractor employees. Public schools may not hire persons or use contractor employees with felony or sex offender convictions. Bus driver operators may not employ individuals with felony or misdemeanor (involving moral turpitude) convictions. Superintendents must notify the State Board for Educator Certification if an educator is terminated for abusing a student. Principals must notify law enforcement when a felony is committed on school property. Teachers who have cause to believe that a child’s physical or mental health or welfare has been adversely affected by abuse or neglect must notify a state agency or law enforcement. Fingerprint-based criminal history checks are required of all public school employees and volunteers and employees and volunteers of private schools that accept state scholarships. 
Other private schools are authorized to conduct criminal history checks of their employees. None located. Any person who has reason to believe that a child has been subjected to abuse or neglect must notify law enforcement or the Division of Child and Family Services. Fingerprint-based FBI and state criminal background and abuse registry checks are required of all licensed educators and public and independent school employees and contractor employees. Sex offenders are ineligible for public or independent school employment. Any person who has reasonable cause to believe that a licensed educator has engaged in unprofessional conduct must notify the VT Department of Education. School district employees, teachers, or principals who have reasonable cause to believe that any child has been abused or neglected must notify the Department for Children & Families. Fingerprint-based national and state criminal history checks are required of all public school employees and accredited private school employees. Persons convicted of sexual molestation, sexual abuse, or rape of a child are ineligible for public school employment or employment with a contractor who provides services to public schools. Persons found to be a perpetrator of child abuse are ineligible for public school employment. School boards must notify the Board of Education when licensed educators are dismissed or resign as a result of a sexual offense. Public and private school employees who have reason to suspect child abuse or neglect must notify the local child-protective services unit or a state hotline. Fingerprint-based national and state criminal history checks are required of all public school employees, volunteers, and contractor employees. Conviction of a felony against a child is grounds for permanent revocation of a teaching certificate. Public school employees who have reasonable cause to believe that a student is the victim of sexual misconduct by a school employee must notify the school’s administrator, who must notify law enforcement. Professional school personnel who have reasonable cause to believe that a child has suffered abuse or neglect must notify law enforcement or the Department of Social & Health Services. Fingerprint-based FBI and state criminal history checks are required of all licensed teachers. School bus drivers are also subject to criminal history checks. None located. School teachers and personnel who have reasonable cause to suspect sexual abuse of a child must notify law enforcement and the Department of Health and Human Resources. State criminal history checks are required of all licensed teachers. Fingerprint-based FBI criminal history checks are required of all teacher license applicants who have not been state residents. A state criminal history check is required of all school bus license applicants, and a federal background check is required if the applicant had not resided in the state at any time during the preceding 2 years. Individuals convicted of violent or child-victim crimes during the past 6 years are ineligible for a teacher license. Individuals convicted of violent or sexual or child-victim crimes during the past 5 years are ineligible for a school bus license. School administrators must report to the State Superintendent if a licensed educator is charged with a sexual offense or is dismissed or resigns as a result of immoral conduct. 
School teachers and administrators who have reasonable cause to suspect that a child has been abused or neglected must notify law enforcement or the local child welfare agency. Fingerprint-based national and state criminal history checks are required of all certified teachers and public school employees with access to minors. None located. School boards must notify the state teaching board if a licensed educator is dismissed or resigns as a result of a felony conviction. Any person who has reasonable cause to believe or suspect that a child has been abused or neglected must notify the child protective agency or law enforcement.
Prior GAO testimonies have described cases of physical abuse of children at youth residential treatment programs and public and private schools. However, children are also vulnerable to sexual abuse. A 2004 Department of Education report estimated that millions of students are subjected to sexual misconduct by a school employee at some time between kindergarten and the twelfth grade (K-12). GAO was asked to (1) examine the circumstances surrounding cases where K-12 schools hired or retained individuals with histories of sexual misconduct and determine the factors contributing to such employment actions and (2) provide an overview of selected federal and state laws related to the employment of convicted sex offenders in K-12 schools. To identify case studies, GAO compared 2007 to 2009 data from employment databases in 19 states and the District of Columbia to data in the National Sex Offender Registry. GAO also searched public records from 2000 to 2010 to identify cases in which sexual misconduct by school employees ultimately resulted in a criminal conviction. GAO selected 15 cases from 11 states for further investigation. For each case, to the extent possible, GAO reviewed court documents and personnel files and also interviewed relevant school officials and law enforcement officials. GAO reviewed applicable federal and state laws related to the employment of sex offenders and requirements for conducting criminal history checks.

The 15 cases GAO examined show that individuals with histories of sexual misconduct were hired or retained by public and private schools as teachers, support staff, volunteers, and contractors. At least 11 of these 15 cases involve offenders who previously targeted children. Even more disturbing, in at least 6 cases, offenders used their new positions as school employees or volunteers to abuse more children. GAO found that the following factors contributed to hiring or retention: (1) school officials allowed teachers who had engaged in sexual misconduct toward students to resign rather than face disciplinary action, often providing subsequent employers with positive references; (2) schools did not perform preemployment criminal history checks; (3) even if schools did perform these checks, they may have been inadequate in that they were not national, fingerprint-based, or recurring; and (4) schools failed to inquire into troubling information regarding criminal histories on employment applications.
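The registry comparison described above is, at bottom, a record-linkage task: records from state employment databases are checked against entries in a sex offender registry. The sketch below is a minimal, hypothetical illustration of that kind of match, not GAO's actual methodology; the file names, column names, and exact-match criterion (normalized name plus date of birth) are all assumptions made for the example.

```python
import csv

def load_keys(path, name_field="name", dob_field="dob"):
    """Read a CSV extract and return a set of normalized (name, date-of-birth) keys."""
    keys = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            name = " ".join(row[name_field].lower().split())  # collapse case and spacing
            keys.add((name, row[dob_field].strip()))
    return keys

def find_matches(employee_path, registry_path):
    """Return employee rows whose (name, date of birth) also appear in the registry extract."""
    registry_keys = load_keys(registry_path)
    matches = []
    with open(employee_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (" ".join(row["name"].lower().split()), row["dob"].strip())
            if key in registry_keys:
                matches.append(row)
    return matches

if __name__ == "__main__":
    # Hypothetical file names; real extracts, fields, and matching rules would differ.
    for hit in find_matches("state_employees.csv", "sex_offender_registry.csv"):
        print(hit["name"], hit["dob"])
```

Exact matching on name and date of birth would both miss true matches (name variants, data-entry errors) and produce false positives, so any automated comparison of this kind would still require manual verification against court and registry records before a case is treated as a confirmed match, consistent with the case-by-case review described above.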
For this 2005 high-risk update, we determined that three high-risk areas warranted removal from the list because of progress made. They are the Department of Education’s (Education) Student Financial Aid Programs, Federal Aviation Administration (FAA) Financial Management, and the Department of Agriculture’s (USDA) Forest Service Financial Management. We will, however, continue to monitor these programs, as appropriate, to ensure that the improvements we have noted are sustained. In 1990, we designated student financial aid programs as high risk. Since then, in intervening high-risk updates, we reported various problems, including poor financial management and weak internal controls, fragmented and inefficient information systems, and inadequate attention to program integrity as evidenced by high default rates and the numbers of ineligible students participating in the programs. In 1998, the Congress established Education’s Office of Federal Student Aid (FSA) as the government’s first performance-based organization, thus giving it greater flexibility to better address long-standing management weaknesses within student aid programs. In 2001, Education created a team of senior managers dedicated to addressing key financial and management problems throughout the agency, and in 2002, the Secretary of Education made removal from GAO’s high-risk list a specific goal and listed it as a performance measure in Education’s strategic plan. We reported in 2003 that Education had made important progress, but that it was too early to determine whether improvements would be sustained and that additional steps needed to be taken in several areas. Since 2003, Education has sustained improvements in the financial management of student financial aid programs and taken additional steps to address our concerns about systems integration, reporting on defaulted loans, and human capital management. Furthermore, the agency has met many of our criteria for removing the high-risk designation. Education has demonstrated a strong commitment to addressing risks; developed and implemented corrective action plans; and, through its annual planning and reporting processes, monitored the effectiveness and sustainability of its corrective measures. Thus, while FSA needs to continue its progress and take additional steps to fully address some of our recommendations, we are removing the high-risk designation from student financial aid programs. FSA has sustained improvements to address its financial management and internal control weaknesses. FSA received an unqualified, or “clean,” opinion on its financial statements for fiscal years 2002, 2003, and 2004. In addition, the auditors indicated progress in addressing previously identified internal control weaknesses, with no material weaknesses reported in FSA’s fiscal year 2003 and 2004 audits. However, the auditors reported that FSA should continue to further strengthen these internal controls, which are related to the calculation and reporting of the loan liability activity and subsidy estimates, as well as its information systems controls. FSA has also established processes to address several previously reported internal control weaknesses that made FSA vulnerable to improper payments in its grant and loan programs. For example, FSA has taken steps to better ensure that grants are not awarded to ineligible students and has implemented a process to identify and investigate schools for possible fraudulent activities or eligibility-related violations. 
Further, FSA addressed concerns we raised about students who were underreporting family income by working with OMB and the Department of the Treasury to draft legislation that would permit use of tax information to verify income reported on student aid applications. FSA has taken further actions toward integrating its many disparate information systems. FSA has developed an integration strategy that focuses on achieving a seamless information exchange environment whereby users—students, educational institutions, and lenders—would benefit from simplified access to the agency's financial aid processes and more consistent and accurate data across its programs. FSA has also made progress toward establishing an enterprise architecture for guiding its systems integration efforts and has begun three efforts for reengineering its information-processing environment, which would consolidate and integrate most of its systems and move it closer to a seamless information exchange environment. FSA also included action steps for achieving student loan default management goals in its annual plan and has taken steps to help reduce the default rate. In 2003, FSA created a work group that identified over 60 default prevention and management initiatives and established a new organizational unit to focus on mitigating and reducing the risk of loss to the taxpayer from student obligations. FSA added information to its exit counseling guide to help increase borrowers' awareness of the benefits of repaying their loans through electronic debiting accounts and prepayment options. In 2003, FSA reported a cohort default rate of 5.4 percent for 2001, and defaulted loans as a percentage of total outstanding loans declined from 9.4 percent in 2001 to 7.6 percent in 2003. FSA is taking steps to address its human capital challenges. It developed a comprehensive human capital strategy that includes many of the practices of leading organizations and has addressed many of the issues we previously raised. For example, FSA identified challenges that it will likely face in coming years, such as retirements, and discussed recognized weaknesses, such as the need to develop the skills of staff and maintain the focus of the agency's leadership on human capital issues. FSA has also prepared a succession plan that addresses some of our concerns about the pending retirement of senior employees in key positions across the agency. Additionally, FSA has established several approaches to support staff development by revising its Skills Catalog, which should enable staff to independently plan their professional development; introducing online learning tools; offering a wide variety of internal courses; and providing funds for external courses.

FAA Financial Management

We first designated FAA financial management as high risk in 1999 because the agency lacked accountability for billions of dollars in assets and expenditures due to serious weaknesses in its financial reporting, property, and cost accounting systems. These problems continued through fiscal year 2001, when FAA's financial management system required 850 adjustments totaling $41 billion in order to prepare FAA's annual financial statements. In addition, at that time, FAA could not accurately and routinely account for property totaling a reported $11.7 billion and lacked the cost information necessary for decision making and for adequately accounting for its activities and major projects, such as the air traffic control modernization program.
Also, while FAA received an unqualified audit opinion on its fiscal year 2001 financial statements, the auditor's report cited a material internal control weakness related to FAA's lack of accountability for its property and several other internal control weaknesses related to financial management issues. At the time of our January 2003 high-risk report, FAA had made significant progress in addressing its financial management weaknesses, most importantly through ongoing efforts to develop a new financial management system called Delphi, including an integrated property accounting system, as well as initiatives to develop a new cost accounting system. However, these new systems were still under development and not yet operational. Therefore, it remained to be seen whether the new systems would resolve the long-standing financial management issues that had resulted in our designation of FAA financial management as high risk. As a result, we retained FAA financial management as a high-risk area, while noting that significant progress was being made. FAA management has continued to make progress since our January 2003 high-risk report. Subsequent auditors' reports on FAA's financial statements for fiscal years 2002 and 2003 were unqualified, but continued to cite internal control weaknesses, although less severe than in prior years, related to FAA's then-existing financial management systems. In fiscal year 2004, FAA implemented its new Delphi general ledger system, including an integrated property accounting system. FAA management was able to prepare financial statements for the fiscal year ended September 30, 2004, using these new systems, and FAA's auditors gave FAA an unqualified opinion on these financial statements. While the auditors reported several internal control weaknesses related to the implementation of the new financial management systems, none of these were considered to be material weaknesses, and FAA management, in responding to the auditor's report, indicated its full commitment to addressing these issues. While the cost accounting system is still under development, progress has been made. The cost accounting interface with Delphi was completed in fiscal year 2004, and the labor distribution interface is expected to be completed in fiscal year 2005. For the first time, some cost accounting data, while not available on a monthly basis, were available shortly after fiscal year-end for the 12 months ended September 30, 2004. FAA management has demonstrated its commitment to the full implementation of this system, devoting significant planning and resources to its completion and the monitoring of its implementation progress. While it is important that FAA management continue to place a high priority on the cost system and, more importantly, ultimately use cost information routinely in FAA decision making, FAA's progress in improving financial management overall since our January 2003 high-risk update has been sufficient for us to remove the high-risk designation for FAA financial management.

We first designated USDA's Forest Service financial management as high risk in 1999 because the agency lacked accountability over billions of dollars in its two major assets—fund balance with the Department of the Treasury (Treasury) and property, plant, and equipment. Since the Forest Service is a major component of USDA, the lack of accountability over these two major assets contributed to disclaimers of opinion on USDA's consolidated financial statements.
In addition, the Forest Service continued to have material weaknesses in its accounting and reporting of accounts receivable and accounts payable. This precluded the agency from knowing the costs it had incurred and the amounts it owed to others throughout the year. These problems were further exacerbated by the Forest Service's partial implementation of its new financial accounting system. This system was unable to produce certain critical budgetary and accounting reports that track obligations, assets, liabilities, revenues, and costs. Thus, these financial reporting weaknesses hampered management's ability to effectively manage operations, monitor revenue and spending levels, and make informed decisions about future funding needs. The Forest Service's long-standing financial management deficiencies were also evident in the repeated negative opinions on its financial statements, including adverse opinions in fiscal years 1991, 1992, and 1995. Due to the severity of its accounting and reporting deficiencies, the Forest Service did not prepare financial statements for fiscal year 1996, but chose instead to focus on trying to resolve these problems. However, the Forest Service's pervasive material internal control weaknesses continued to plague the agency. In our 2001 high-risk update, we reported that the USDA Office of Inspector General was unable to determine the accuracy of the Forest Service's reported $3.1 billion in net property, plant, and equipment, which represented 51 percent of the agency's assets. We also reported that the inspector general was unable to verify fund balances with Treasury totaling $2.6 billion because the reconciliation of agency records with Treasury records had not been completed. Because of the severity of these and other deficiencies, the inspector general issued disclaimers of opinion on the Forest Service's financial statements for fiscal years 1997 through 2001. In addition, we noted that the Forest Service's autonomous field structure hampered efforts to correct these accounting and financial reporting deficiencies. We also reported that the Forest Service had implemented its new accounting system agencywide. However, the system depended on and received data from feeder systems that were poorly documented, operationally complex, deficient in appropriate control processes, and costly to maintain. In our 2003 high-risk report, while we highlighted that the Forest Service continued to have long-standing material control weaknesses, including weaknesses in its fund balance with Treasury and in property, plant, and equipment, we reported that the Forest Service had made progress toward achieving accountability by receiving its first unqualified opinion on its fiscal year 2002 financial statements. Although the Forest Service had reached an important milestone, it had not yet proved it could sustain this outcome, and it had not reached the end goal of routinely producing timely, accurate, and useful financial information. As a result, we retained Forest Service financial management as a high-risk area. In the past 2 years, the Forest Service has made additional progress, especially with respect to addressing several long-standing material internal control deficiencies.
Based on our criteria for removing a high-risk designation, which include a demonstrated strong commitment, a corrective action plan, and progress in addressing deficiencies, we believe the Forest Service's overall improvement in financial management since our January 2003 high-risk update has been sufficient for us to remove Forest Service financial management from the high-risk list at this time. The Forest Service has resolved material deficiencies related to its fund balance with Treasury and its property, plant, and equipment, thus increasing accountability over its billions of dollars in assets, and USDA and the Forest Service received unqualified opinions on their fiscal year 2004 financial statements. This does not mean that the Forest Service has no remaining challenges. For example, while we recognized its clean opinion for fiscal year 2002 in our last update, subsequently, in fiscal year 2003, these financial statements had to be restated to correct material errors. The Forest Service also received a clean opinion for fiscal year 2003, but these financial statements had to be restated in fiscal year 2004 to again correct material misstatements. Frequent restatements to correct errors can undermine public trust and confidence in both the entity and all responsible parties. Further, the Forest Service continues to have material internal control weaknesses related to financial reporting and information technology security, and its financial management systems do not yet substantially comply with the Federal Financial Management Improvement Act of 1996. However, the Forest Service has demonstrated a strong commitment to efforts under way or planned that, if effectively implemented, should help to resolve many of its remaining financial management problems and move it toward sustainable financial management business processes. These efforts are designed to address internal control and noncompliance issues identified in audit reports, as well as organizational issues. For example, during fiscal year 2004, the Forest Service began reengineering and consolidating its finance, accounting, and budget processes. We believe these efforts, if implemented effectively, will provide stronger financial management, sustain positive audit results, and ensure compliance with federal financial reporting standards. Yet, it is important that USDA and Forest Service officials continue to place a high priority on addressing the Forest Service's remaining financial management problems, and we will continue to monitor its progress.

Our use of the high-risk designation to draw attention to the challenges associated with the economy, efficiency, and effectiveness of government programs and operations in need of broad-based transformation has led to important progress. We will also continue to identify high-risk areas based on the more traditional focus on fraud, waste, abuse, and mismanagement. Overall, our focus will continue to be on identifying the root causes behind vulnerabilities, as well as actions needed on the part of the agencies involved and, if appropriate, the Congress. For 2005, we have designated the following four new areas as high risk: Establishing Appropriate and Effective Information-Sharing Mechanisms to Improve Homeland Security, Department of Defense (DOD) Approach to Business Transformation, DOD Personnel Security Clearance Program, and Management of Interagency Contracting.
Information is a crucial tool in fighting terrorism, and the timely dissemination of that information to the appropriate government agency is absolutely critical to maintaining the security of our nation. The ability to share security-related information can unify the efforts of federal, state, and local government agencies, as well as the private sector as appropriate, in preventing or minimizing terrorist attacks. The 9/11 terrorist attacks heightened the need for comprehensive information sharing. Prior to that time, the overall management of information-sharing activities among government agencies and between the public and private sectors lacked priority, proper organization, coordination, and facilitation. As a result, the existing national mechanisms for collecting threat information, conducting risk analyses, and disseminating warnings were at an inadequate state of development for protecting the United States from coordinated terrorist attacks. Information sharing for securing the homeland is a governmentwide effort involving multiple federal agencies, including but not limited to the Office of Management and Budget (OMB); the Departments of Homeland Security (DHS), Justice, State, and Defense; and the Central Intelligence Agency. Over the past several years, GAO has identified potential information-sharing barriers, critical success factors, and other key management issues that should be considered, including the processes, procedures, and systems to facilitate information sharing among and between government entities and the private sector. Establishing an effective two-way exchange of information to detect, prevent, and mitigate potential terrorist attacks requires an extraordinary level of cooperation and perseverance among federal, state, and local governments and the private sector to establish timely, effective, and useful communications.

Since 1998, GAO has recommended the development of a comprehensive plan for information sharing to support critical infrastructure protection efforts. The key components of this recommendation can be applied to broader homeland security and intelligence-sharing efforts, including clearly delineating the roles and responsibilities of federal and nonfederal entities, defining interim objectives and milestones, setting time frames for achieving objectives, and establishing performance measures. We have made numerous recommendations related to information sharing, particularly as they relate to fulfilling federal critical infrastructure protection responsibilities. For example, we have reported on the practices of organizations that successfully share sensitive or time-critical information, including establishing trust relationships, developing information-sharing standards and protocols, establishing secure communications mechanisms, and disseminating sensitive information appropriately. Federal agencies have concurred with our recommendations that they develop appropriate strategies to address the many potential barriers to information sharing. However, many federal efforts remain in the planning or early implementation stages. In the absence of comprehensive information-sharing plans, many aspects of homeland security information sharing remain ineffective and fragmented. Accordingly, we are designating information sharing for homeland security as a governmentwide high-risk area because this area, while receiving increased attention, still faces significant challenges. 
Since 2002, legislation, various national strategies, and executive orders have specified actions to improve information sharing for homeland security. Earlier this month, DHS released an Interim National Infrastructure Protection Plan (NIPP), which addresses some of the key issues that GAO has previously identified. The DHS plan is intended to provide a consistent, unifying structure for integrating critical infrastructure protection (CIP) efforts into a national program. The interim NIPP identifies key stakeholders and participants in information sharing efforts related to public-private efforts to protect critical infrastructure. In addition, the plan recognizes that information sharing systems can be broadly defined as interactions of people, physical structures, information, and technologies that are designed to ensure that critical, high-quality, and productive knowledge is available to decision makers whenever and wherever it is needed. Further, the plan identifies key responsibilities for DHS, including the development, implementation, and expansion of information-sharing strategies to support infrastructure protection efforts. The interim plan released by DHS is an important step toward improving information sharing for infrastructure protection efforts; however, extraordinary challenges remain. As the 9/11 Commission recognized, information sharing must be “guided by a set of practical policy guidelines that simultaneously empower and constrain officials, telling them clearly what is and is not permitted.” While the wide range of executive and legislative branch actions is encouraging, significant challenges remain in developing the required detailed policies, procedures, and plans for sharing homeland security-related information. For example, the Homeland Security Information Sharing Act required procedures for facilitating homeland security information sharing and established authorities to share different types of information, such as grand jury information; electronic, wire, and oral interception information; and foreign intelligence information. In July 2003, the President assigned these functions to the Secretary of Homeland Security, but no deadline was established for developing information-sharing procedures. Without clear processes and procedures for rapidly sharing appropriate information, the ability of private sector entities to effectively design facility security systems and protocols can be impeded. In addition, the lack of sharing procedures can also limit the federal government’s accurate assessment of nonfederal facilities’ vulnerability to terrorist attacks. In December 2004, the Intelligence Reform and Terrorism Prevention Act of 2004 (P.L. 108-458) required the establishment of (1) an information-sharing environment (ISE) as a means of facilitating the exchange of terrorism information among appropriate federal, state, local, and tribal entities, and the private sector; and (2) an information-sharing council to support the President and the ISE program manager with advice on developing policies, procedures, guidelines, roles, and standards necessary to implement and maintain the ISE. It will be important to ensure that the DHS information-sharing systems are coordinated with those required under the intelligence reform legislation. Improving the standardization and consolidation of data can also promote better sharing. 
For example, in 2003 we found that goals, objectives, roles, responsibilities, and mechanisms for information sharing had not been consistently defined by the 9 federal agencies that maintain 12 key terrorist and criminal watch list systems. As a result, efforts to standardize and consolidate appropriate watch list data would be impeded by the existence of overlapping sets of data, inconsistent agency policies and procedures for the sharing of those data, and technical incompatibilities among the various watch list information systems. In addition, 2004 reports from the inspectors general at DHS and the Department of Justice highlight the challenges and slow pace of integrating and sharing information between fingerprint databases. A great deal of work remains to effectively implement the many actions called for to improve homeland security information sharing, including establishing clear goals, objectives, and expectations for the many participants in information-sharing efforts; and consolidating, standardizing, and enhancing federal structures, policies, and capabilities for the analysis and dissemination of information.

DOD spends billions of dollars each year to sustain key business operations that support our forces, including, for example, systems and processes related to human capital policies and practices, acquisition and contract management, financial management, supply chain management, business systems modernization, and support infrastructure management—all of which appear on GAO's high-risk list. Recent and ongoing military operations in Afghanistan and Iraq and new homeland defense missions have led to newer and higher demands on our forces in a time of growing fiscal challenges for our nation. In an effort to better manage DOD's resources, the Secretary of Defense has appropriately placed a high priority on transforming force capabilities and key business processes. For years, we have reported on inefficiencies and the lack of adequate transparency and appropriate accountability across DOD's major business areas, resulting in billions of dollars of wasted resources annually. Although the Secretary of Defense and senior leaders have shown commitment to business transformation, as evidenced by individual key initiatives related to acquisition reform, business modernization, and financial management, among others, little tangible evidence of actual improvement has been seen in DOD's business operations to date. Improvements have generally been limited to specific business process areas, such as DOD's purchase card program, and have resulted in the incorporation of many key elements of reform, such as increased management oversight and monitoring and results-oriented performance measures. However, DOD has not taken the steps it needs to take to achieve and sustain business reform on a broad, strategic, departmentwide, and integrated basis. Among other things, it has not established clear and specific management responsibility, accountability, and control over overall business transformation-related activities and applicable resources. In addition, DOD has not developed a clear strategic and integrated plan for business transformation with specific goals, measures, and accountability mechanisms to monitor progress, or a well-defined blueprint, commonly called an enterprise architecture, to guide and constrain implementation of such a plan. 
For these reasons, we, for the first time, are designating DOD's lack of an integrated strategic planning approach to business transformation as high risk. DOD's current and historical approach to business transformation has not proven effective in achieving meaningful and sustainable progress in a timely manner. As a result, change is necessary in order to expedite the effort and increase the likelihood of success. For DOD to successfully transform its business operations, it will need a comprehensive and integrated business transformation plan; people with needed skills, knowledge, experience, responsibility, and authority to implement the plan; an effective process and related tools; and results-oriented performance measures that link institutional, unit, and individual performance goals and expectations to promote accountability for results. Over the last 3 years, we have made several recommendations that, if implemented effectively, could help DOD move forward in establishing the means to successfully address the challenges it faces in transforming its business operations. For example, we believe that DOD needs a full-time chief management officer (CMO) position, created through legislation, with responsibility, authority, and accountability for DOD's overall business transformation efforts. This is a "good government" matter that should be addressed in a professional and nonpartisan manner. The CMO must be a person with significant authority and experience who would report directly to the Secretary of Defense. Given the nature and complexity of the overall business transformation effort, and the need for sustained attention over a significant period of time, this position should be a term appointment (e.g., 7 years), and the incumbent should be subject to a performance contract. DOD has agreed with many of our recommendations and launched efforts intended to implement many of them, but progress to date has been slow. In my view, it will take the sustained efforts of a CMO, as we have proposed, to make the needed progress in transforming DOD's business operations.

Delays in completing hundreds of thousands of background investigations and adjudications (a review of investigative information to determine eligibility for a security clearance) have led us to add the DOD personnel security clearance program to our 2005 high-risk list. Personnel security clearances allow individuals to gain access to classified information that, in some cases, could reasonably be expected to cause exceptionally grave damage to national defense or foreign relations through unauthorized disclosure. Worldwide deployments, contact with sensitive equipment, and other security requirements have resulted in DOD's having approximately 2 million active clearances. Problems with DOD's personnel security clearance process can have repercussions throughout the government because DOD conducts personnel security investigations and adjudications for industry personnel from 22 other federal agencies, in addition to performing such functions for its own service members, federal civilian employees, and industry personnel. While our work on the clearance process has focused on DOD, clearance delays in other federal agencies suggest that similar impediments and their effects may extend beyond DOD. Since at least the 1990s, we have documented problems with DOD's personnel security clearance process, particularly problems related to backlogs and the resulting delays in determining clearance eligibility. 
Since fiscal year 2000, DOD has declared its personnel security clearance investigations program to be a systemic weakness—a weakness that affects more than one DOD component and may jeopardize the department's operations—under the Federal Managers' Financial Integrity Act of 1982. An October 2002 House Committee on Government Reform report also recommended including DOD's adjudicative process as a material weakness. As of September 30, 2003 (the most recent data available), DOD could not estimate the full size of its backlog, but we identified over 350,000 cases exceeding established time frames for determining eligibility. The negative effects of delays in determining security clearance eligibility are serious and vary depending on whether the clearance is being renewed or granted to an individual for the first time. Delays in renewing previously issued clearances can lead to heightened risk of national security breaches because the longer individuals hold a clearance, the more likely they are to be working with critical information and systems. Delays in issuing initial clearances can result in millions of dollars of additional costs to the federal government, longer periods of time needed to complete national security-related contracts, lost-opportunity costs if prospective employees decide to work elsewhere rather than wait to get a clearance, and diminished quality of the work because industrial contractors may be performing government contracts with personnel who have the necessary security clearances but are not the most experienced and best-qualified personnel for the positions involved. DOD has taken steps—such as hiring more adjudicators and authorizing overtime for adjudicative staff—to address the backlog, but a significant shortage of trained federal and private-sector investigative personnel presents a major obstacle to timely completion of cases. Other impediments to eliminating the backlog include the absence of an integrated, comprehensive management plan for addressing a wide variety of problems identified by us and others. In addition to matching adjudicative staff to workloads and working with the Office of Personnel Management (OPM) to develop an overall management plan, DOD needs to develop and use new methods for forecasting clearance needs and monitoring backlogs, eliminate unnecessary limitations on reciprocity (the acceptance of a clearance and access granted by another department, agency, or military service), determine the feasibility of implementing initiatives that could decrease the backlog and delays, and provide better oversight for all aspects of its personnel security clearance process. The National Defense Authorization Act for Fiscal Year 2004 authorized the transfer of DOD's personnel security investigative function and over 1,800 investigative employees to OPM. The transfer is scheduled to take place this month. While the transfer would eliminate DOD's responsibility for conducting the investigations, it would not eliminate the shortage of trained investigative personnel needed to address the backlog. Although DOD would retain the responsibility for adjudicating clearances, OPM would be accountable for ensuring that investigations are completed in a timely manner.

In recent years, federal agencies have been making a major shift in the way they procure many goods and services. 
Rather than spending a great deal of time and resources contracting for goods and services themselves, they are making greater use of existing contracts already awarded by other agencies. These contracts are designed to leverage the government's aggregate buying power and provide a much-needed simplified method for procuring commonly used goods and services. Thus, their popularity is growing quickly. The General Services Administration (GSA) alone, for example, has seen a nearly tenfold increase in interagency contract sales since 1992, pushing the total sales mark up to $32 billion (see fig. 1). Other agencies, such as the Department of the Treasury and the National Institutes of Health, also sponsor interagency contracts. These contract vehicles offer the benefits of improved efficiency and timeliness; however, they need to be effectively managed. If they are not properly managed, a number of factors can make these interagency contract vehicles high risk in certain circumstances: (1) they are attracting rapid growth of taxpayer dollars; (2) they are being administered and used by some agencies that have limited expertise with this contracting method; and (3) they contribute to a much more complex environment in which accountability has not always been clearly established. Use of these contracts, therefore, demands a higher degree of business acumen and flexibility on the part of the federal acquisition workforce than in the past. This risk is widely recognized, and the Congress and executive branch agencies have taken several steps to address it. However, the challenges associated with these contracts, recent problems related to their management, and the need to ensure that the government effectively implements measures to bolster oversight and control so that it is well positioned to realize the value of these contracts warrant designation of interagency contracting as a new high-risk area.

Interagency contracts are awarded under various authorities and can take many forms. Typically, they are used to provide agencies with commonly used goods and services, such as office supplies or information technology services. Agencies that award and administer interagency contracts usually charge a fee to support their operations. These types of contracts have allowed customer agencies to meet the demands for goods and services at a time when they face growing workloads, declines in the acquisition workforce, and the need for new skill sets. Our work, together with that of some agency inspectors general, has revealed instances of improper use of interagency contracts. For example, we recently reviewed contracts and task orders awarded by DOD and found some task orders under the GSA schedules that did not satisfy legal requirements for competition because the work was not within the scope of the underlying contracts. Similarly, the inspector general for the Department of the Interior found that task orders for interrogators and other intelligence services in Iraq were improperly awarded under a GSA schedule contract for information technology services. More broadly, the GSA inspector general conducted a comprehensive review of the contracting activities of GSA's Federal Technology Service (FTS), an entity that provides contracting services for agencies across the government, and reported that millions of dollars in fiscal year 2003 awards did not comply with laws and regulations. Administration officials have acknowledged that the management of interagency contracting needs to be improved. 
Interagency contracting is being used more in conjunction with purchases of services, which have increased significantly over the past several years and now represent over half of federal contract spending. Agencies also are buying more sophisticated or complex services, particularly in the areas of information technology and professional and management support. In many cases, interagency contracts provide agencies with easy access to these services, but purchases of services require different approaches in describing requirements, obtaining competition, and overseeing contractor performance than purchases of goods. In this regard, we and others have reported on the failure to follow prescribed procedures designed to ensure fair prices when using schedule contracts to acquire services. At DOD, the largest customer for interagency contracts, we found that competition requirements were waived for a significant percentage of supply schedule orders we reviewed, frequently based on an expressed preference to retain the services of incumbent contractors. DOD concurred with our recommendations to develop guidance for the conditions under which waivers of competition may be used, require documentation to support waivers, and establish approval authority based on the value of the orders. There are several causes of the deficiencies we and others have found in the use of interagency contracts, including the increasing demands on the acquisition workforce, insufficient training, and in some cases inadequate guidance. Two additional factors are worth noting. First, the fee-for-service arrangement creates an incentive to increase sales volume in order to support other programs of the agency that awards and administers an interagency contract. This may lead to an inordinate focus on meeting customer demands at the expense of complying with required ordering procedures. Second, it is not always clear where the responsibility lies for such critical functions as describing requirements, negotiating terms, and conducting oversight. Several parties—the requiring agency, the ordering agency, and in some cases the contractor—are involved with these functions. But, as the number of parties grows, so too does the need to ensure accountability. The Congress and the administration have taken several steps to address the challenges of interagency contracting. In 2003, the Congress sought to improve contract oversight and execution by enacting the Services Acquisition Reform Act. The act created a new chief acquisition officer position in many agencies and enhanced workforce training and recruitment. More recently, the Congress responded to the misuse of interagency contracting by requiring more intensive oversight of purchases under these contracts. In July 2004, GSA launched “Get It Right,” an oversight and education program, to ensure that its largest customer, DOD, and other federal agencies properly use GSA’s interagency contracts and its acquisition assistance services. Through this effort, GSA seeks to demonstrate a strong commitment to customer agencies’ compliance with federal contracting regulations and, among other things, improve processes to ensure competition, integrity, and transparency. 
Additionally, to address workforce issues, OMB, GSA, and DOD officials have said they are developing new skills assessments, setting standards for the acquisition workforce, and coordinating training programs aimed at improving the capacity of the federal acquisition workforce to properly handle the growing and increasingly complex workload of service acquisitions. These recent actions are positive steps toward improving management of interagency contracting, but, as with other areas, some of these actions are in their early stages and others are still under development. In addition, it is too early to tell whether all of the corrective actions will be effectively implemented, although a recent limited review by the GSA inspector general found some improvement at FTS from enhanced management controls. Our work on major management challenges indicates that specific and targeted approaches are also needed to address interagency contracting risks across the government. Ensuring the proper use of interagency contracts must be viewed as a shared responsibility of all parties involved. But this requires that specific responsibilities be more clearly defined. In particular, to facilitate effective purchasing through interagency contracts, and to help ensure the best value of goods and services, agencies must clarify roles and responsibilities and adopt clear, consistent, and enforceable policies and processes that balance the need for customer service against the requirements of contract regulations. Internal controls and appropriate performance measures help ensure that policies and processes are implemented and have the desired outcomes. In addition, to be successful, efforts to improve the contracting function must be linked to agency strategic plans. As with other governmentwide high-risk areas, such as human capital and information security, effectively addressing interagency contract management challenges will require agency management to commit the necessary time, attention, and resources and will require the executive branch and the Congress to enhance their oversight. Making these investments has the potential to improve the government's ability to acquire high-quality goods and services in an efficient and effective manner, resulting in reduced costs, improved service delivery, and strengthened public trust.

In addition to specific areas that we have designated as high risk, there are other important broad-based challenges facing our government that are serious and merit continuing close attention. One area of increasing concern involves the need for the completion of comprehensive national threat and risk assessments in a variety of areas. For example, emerging requirements from the changing security environment, coupled with increasingly limited fiscal resources across the federal government, emphasize the need for agencies to adopt a sound approach to establishing realistic goals, evaluating and setting priorities, and making difficult resource decisions. We have advocated a comprehensive threat and/or risk management approach as a framework for decision making that fully links strategic goals to plans and budgets, assesses values and risks of various courses of action as a tool for setting priorities and allocating resources, and provides for the use of performance measures to assess outcomes. 
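To illustrate the concept, the sketch below shows a generic risk-scoring scheme of the kind such a framework might use. It is a hypothetical example, not the methodology of DHS, TSA, DOD, or any other agency discussed here; the assets, scores, and the multiplicative combination of threat, vulnerability, and criticality assessments are assumptions used only to show how assessments can feed priority setting and resource allocation.

    # Generic, illustrative risk-scoring sketch (not an actual agency model).
    # Each asset is scored on threat, vulnerability, and criticality (1 = low, 5 = high);
    # a combined score ranks assets so resources can go where assessed risk is greatest.
    from dataclasses import dataclass

    @dataclass
    class AssetAssessment:
        name: str           # hypothetical asset identifier
        threat: int         # assessed likelihood of attack or disruption (1-5)
        vulnerability: int  # assessed susceptibility to that threat (1-5)
        criticality: int    # assessed consequence of loss (1-5)

        def risk_score(self):
            # A multiplicative combination is one common convention;
            # real frameworks may weight or combine the factors differently.
            return self.threat * self.vulnerability * self.criticality

    def prioritize(assessments):
        # Order assets from highest to lowest assessed risk to inform resource allocation.
        return sorted(assessments, key=lambda a: a.risk_score(), reverse=True)

    sample = [
        AssetAssessment("chemical storage facility", threat=3, vulnerability=4, criticality=5),
        AssetAssessment("regional data center", threat=2, vulnerability=3, criticality=4),
        AssetAssessment("general aviation airfield", threat=2, vulnerability=4, criticality=2),
    ]
    for asset in prioritize(sample):
        print(asset.name, "- risk score:", asset.risk_score())

A complete framework would also document the criteria behind each score and tie the resulting rankings to specific resource decisions; as discussed below, it is that documented link that is often missing. 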
Most prominently, two federal agencies with significant national security responsibilities—DHS and DOD—are still in the beginning stages of adopting a risk-based strategic framework for making important resource decisions involving billions of dollars annually. This lack of a strategic framework for investment decisions is one of the reasons that implementing and transforming DHS, and DOD’s approach to business transformation, have been designated as high-risk areas. At the same time, this threat/risk assessment concept can be applied to a broad range of existing federal government programs, functions, and activities. The relatively new DHS, with an annual budget of over $40 billion, has not completed risk assessments mandated by the Homeland Security Act of 2002 to set priorities to help focus its resources where most needed. In performing its duties to protect the nation’s critical infrastructure, DHS has not made clear the link between risk assessment and resource allocation, for example, what criteria it initially used to select assets of national importance and the basic strategy it uses to determine which assets warrant additional protective measures, and by how much these measures could reduce the risk to the nation. We have reviewed the work of several of DHS’s component agencies that have taken some initial steps towards risk management, but much remains to be done. DHS’s Immigration and Customs Enforcement (ICE), as a first step toward developing budget requests and workforce plans for fiscal year 2007 and beyond, has had its Office of Investigations field offices conduct baseline threat assessments to help identify risks. However, performance measures to assess how well a particular threat has been addressed were not used for workforce planning in ICE’s fiscal year 2006 budget request. DHS’s Customs and Border Protection (CBP) has taken steps to address the terrorism risks posed by oceangoing cargo containers. However, CBP has not performed a comprehensive set of assessments vital for determining the level of risk for oceangoing cargo containers and the types of responses necessary to mitigate that risk. The need to use a risk management approach has been a recurring theme in our previous work in transportation security. We reported in 2003 that DHS’s Transportation Security Administration (TSA) planned to adopt a risk management approach. To date, including in our most recent work on general aviation security, we have found that TSA has not fully integrated this approach, which includes assessments of threat, vulnerability, and criticality, to help it prioritize its efforts. As a result, we have recommended that TSA continue its efforts to integrate a risk management approach into its processes. DOD, with an annual budget of over $400 billion, exclusive of supplemental funding, is in the process of transforming its force capabilities and business processes. We have reported on limitations in DOD’s strategic planning and budgeting, including the use of overly optimistic assumptions in estimating funding needs, often resulting in a mismatch between programs and budgets. In its strategic plan—the September 2001 Quadrennial Defense Review—DOD outlined a new risk management framework consisting of four dimensions of risk—force management, operational, future challenges, and institutional—to use in considering trade-offs among defense objectives and resource constraints. According to DOD, these risk areas are to form the basis for DOD's annual performance goals. 
They will be used to track performance results and will be linked to planning and resource decisions. As of December 2004, DOD was still in the process of implementing this approach departmentwide. It also remains unclear how DOD will use this approach to measure progress in achieving business and force transformation. We believe that instilling a disciplined approach to identifying and managing risk has broad applicability across a wide range of federal programs, operations, and functions throughout the federal government. This will be a continuing focus of our work in the future.

More generally, we will also continue to monitor other management challenges identified through our work, including those discussed in our January 2003 Performance and Accountability Series: Major Management Challenges and Program Risks (GAO-03-95 through GAO-03-118). While not high risk at this time, these challenges warrant continued attention. For example, at the U.S. Census Bureau, a number of operational and managerial challenges loom large as the agency approaches its biggest enumeration challenge yet, the 2010 Census. The Census Bureau will undertake an important census test and make critical 2010 Census operational and design decisions in the coming months—and we will continue to closely monitor these challenges to assist the Congress in its oversight and the Census Bureau in its decision making.

For other areas that remain on our 2005 high-risk list, there have been important but varying levels of progress, although not yet enough progress to remove these areas from the list. Top administration officials have expressed their commitment to maintaining momentum in seeing that high-risk areas receive adequate attention and oversight. Since our 2003 high-risk report, OMB has worked closely with a number of agencies that have high-risk issues, in many cases establishing action plans and milestones for agencies to complete needed actions to address areas that we have designated as high risk. Such a concerted effort by agencies and ongoing attention by OMB are critical; our experience over the past 15 years has shown that perseverance is required to fully resolve high-risk areas. The Congress, too, will continue to play an important role through its oversight and, where appropriate, through legislative action targeted at the problems and designed to address high-risk areas. Examples of areas where noticeable progress has been made include the following:

Strategic Human Capital Management. Recognizing that federal agencies must transform their organizations to meet the new challenges of the 21st century and that their most important asset in this transformation is their people, we first added human capital management as a governmentwide high-risk issue in January 2001 to help focus attention and resources on the need for fundamental human capital reform requiring both administrative and legislative action. Since then, the Congress and the agencies have made more progress in revising and redesigning human capital policies, processes, and systems than in the previous quarter century. The Congress has called on agencies to do a better and faster job of hiring the right people with the right skills to meet their critical missions, such as protecting the homeland, and gave the agencies new flexibilities to meet this challenge. 
The Congress has also granted agencies, such as DOD and DHS, unprecedented flexibility to redesign their human capital systems, including designing new classification and compensation systems, which could serve as models for governmentwide change. Therefore, effectively designing and implementing any resulting human capital systems will be of critical importance not just for these agencies, but for overall civil service reform. As part of the President's Management Agenda, the administration has also made strategic human capital management one of its top five priorities and established a system for holding agencies accountable for achieving this change. Some agencies have begun to assess their future workforce needs and implement available flexibilities to meet those needs. As a result of the ongoing significant changes in how the federal workforce is managed, there is general recognition of the need for a framework to guide human capital reform, one built on a set of fundamental principles and boundaries, with criteria and processes that establish checks and limitations as agencies seek and implement new human capital authorities.

Federal Real Property. Since January 2003, the administration has taken several key steps to address long-standing problems in managing federal real property. First, in an effort to provide a governmentwide focus on federal real property issues, the President added the Federal Asset Management Initiative to the President's Management Agenda and signed Executive Order 13327 in February 2004. Under the order, agencies are to designate a senior real property officer to, among other things, identify and categorize owned and leased real property managed by the agency and develop agency asset management plans. Agencies such as DOD and the Department of Veterans Affairs (VA) have taken other actions—DOD is preparing for a round of base realignments and closures in 2005, and in May 2004, VA announced a wide range of asset realignment decisions. These and other efforts are positive steps, but it is too early to judge whether the administration's focus on this area will have a lasting impact. The underlying conditions and related obstacles that led to our high-risk designation continue to exist. Remaining obstacles include competing stakeholder interests in real property decisions; various legal and budget-related disincentives to optimal, businesslike, real property decisions; and the need for better capital planning among agencies.

Other areas in which improvements have been shown include the Postal Service's transformation efforts and long-term outlook, modernizing federal disability programs, the Medicaid program, HUD's Single-Family Mortgage Insurance and Rental Housing Assistance programs, and the implementation and transformation of DHS.

We have combined our previous Collection of Unpaid Taxes and Earned Income Credit Noncompliance high-risk areas into an area titled Enforcement of Tax Laws. Collection of unpaid taxes was included in the first high-risk series report in 1990, with a focus on the backlog of uncollected debts owed by taxpayers. In 1995, we added Filing Fraud as a separate high-risk area, narrowing the focus of that high-risk area in 2001 to Earned Income Credit Noncompliance because of the particularly high incidence of fraud and other forms of noncompliance in that program. 
We expanded our concern about the Collection of Unpaid Taxes in our 2001 high-risk report to include not only unpaid taxes (including tax evasion and unintentional noncompliance) known to the Internal Revenue Service (IRS), but also the broader enforcement issue of unpaid taxes that IRS has not detected. We made this change because of declines in some key IRS collection actions as well as IRS’s lack of information about whether those declines had affected voluntary compliance. Although the Congress dedicated a specific appropriation for Earned Income Credit compliance initiatives (both to curb noncompliance and encourage participation) in fiscal years 1998 through 2003, with the 2004 budget the Congress returned to appropriating a single amount for IRS to allocate among its various tax law enforcement efforts. In recent years, the resources IRS has been able to dedicate to enforcing the tax laws have declined, while IRS’s enforcement workload—measured by the number of taxpayer returns filed—has continually increased. As a result, nearly every indicator of IRS’s coverage of its enforcement workload has declined in recent years. Although in some cases workload coverage has increased, overall IRS’s coverage of known workload is considerably lower than it was just a few years ago. Although many suspect that these trends have eroded taxpayers’ willingness to voluntarily comply—and survey evidence suggests this may be true—the cumulative effect of these trends is unknown because new research into the level of individual taxpayer compliance is only now being completed by IRS after a long hiatus. Based on this new research, in 2005, IRS intends to release a new estimate of noncompliance and begin to use this research to improve targeting of enforcement and other compliance resources. Further, IRS’s workload has grown ever more complex as the tax code has grown more complex. Complexity creates a fertile ground for those intentionally seeking to evade taxes and often trips others into inadvertent noncompliance. IRS is challenged to administer and explain each new provision, thus absorbing resources that otherwise might be used to enforce the tax laws. At the same time, other areas of particularly serious noncompliance have gained the attention of IRS and the Congress—such as abusive tax shelters and schemes employed by businesses and wealthy individuals that often involve complex transactions that may span national boundaries. Given the broad decline in IRS’s enforcement workforce, the resulting decreased ability to follow up on suspected noncompliance, the emergence of sophisticated evasion concerns, and the unknown effect of these trends on voluntary compliance, IRS is challenged on virtually all fronts in attempting to ensure that taxpayers fulfill their obligations. IRS’s success in overcoming these challenges becomes ever more important in light of the nation’s large and growing fiscal pressures. Accordingly, we believe the focus of concern on the enforcement of tax laws is not confined to any one segment of the taxpaying population or any single tax provision. Our designation of the enforcement of tax laws as a high-risk area embodies this broad concern. IRS has long relied on obsolete automated systems for key operational and financial management functions, and its attempts to modernize these aging computer systems span several decades. 
This long history of continuing delays and design difficulties and their significant impact on IRS's operations led us to designate IRS's systems modernization activities and its financial management as high-risk areas in 1995. Since that time, IRS has made progress in improving its financial management, such as enhancing controls over hard copy tax receipts and data and budgetary activity, and improving the accuracy of property records. Additionally, for the past 5 years, IRS has received clean audit opinions on its annual financial statements and, for the past 3 years, has been able to achieve these opinions within 45 days of the end of the fiscal year. However, IRS still needs to replace its outdated financial management systems as part of its business systems modernization program. Accordingly, since the resolution of IRS's remaining most serious and intractable financial management problems largely depends upon the success of IRS's business systems modernization efforts, and since we have continuing concerns related to this program, we are combining our two previous high-risk areas into one IRS Business Systems Modernization high-risk area.

We recently compiled lists of products issued since January 2003 related to the major management challenges identified in the 2003 Performance and Accountability Series. These lists, accompanied by narratives describing the related major management challenges, are available on our Web site at www.gao.gov/pas/2005. As always, GAO stands ready to assist the Congress as it develops its agenda and pursues these important high-risk issues.

Mr. Chairman, Senator Akaka, and Members of the Subcommittee, this concludes my testimony. I would be happy to answer any questions you may have.
GAO's audits and evaluations identify federal programs and operations that, in some cases, are high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement. Increasingly, GAO also is identifying high-risk areas to focus on the need for broad-based transformations to address major economy, efficiency, or effectiveness challenges. Since 1990, GAO has periodically reported on government operations that it has designated as high risk. In this 2005 update for the 109th Congress, GAO presents the status of high-risk areas identified in 2003 and new high-risk areas warranting attention by the Congress and the administration. Lasting solutions to high-risk problems offer the potential to save billions of dollars, dramatically improve service to the American public, strengthen public confidence and trust in the performance and accountability of the federal government, and ensure the ability of government to deliver on its promises.

In January 2003, GAO identified 25 high-risk areas; in July 2003, a 26th high-risk area was added to the list. Since then, progress has been made in all areas, although the nature and significance of progress varies by area. Federal departments and agencies, as well as the Congress, have shown a continuing commitment to addressing high-risk challenges and have taken various steps to help correct several of the problems' root causes. GAO has determined that sufficient progress has been made to remove the high-risk designation from three areas: student financial aid programs, FAA financial management, and Forest Service financial management. Also, four areas related to IRS have been consolidated into two areas.

This year, GAO is designating four new high-risk areas. The first new area is establishing appropriate and effective information-sharing mechanisms to improve homeland security. Federal policy creates specific requirements for information-sharing efforts, including the development of processes and procedures for collaboration between federal, state, and local governments and the private sector. This area has received increased attention, but the federal government still faces formidable challenges sharing information among stakeholders in an appropriate and timely manner to reduce risk. The second and third new areas are, respectively, DOD's approach to business transformation and its personnel security clearance program. GAO has reported on inefficiencies and inadequate transparency and accountability across DOD's major business areas, resulting in billions of dollars of wasted resources. Senior leaders have shown commitment to business transformation through individual initiatives in acquisition reform, business modernization, and financial management, among others, but little tangible evidence of actual improvement has been seen in DOD's business operations to date. DOD needs to take stronger steps to achieve and sustain business reform on a departmentwide basis. Further, delays by DOD in completing background investigations and adjudications can affect the entire government because DOD performs this function for hundreds of thousands of industry personnel from 22 federal agencies, as well as its own service members, federal civilian employees, and industry personnel. OPM is to assume DOD's personnel security investigative function, but this change alone will not reduce the shortages of investigative personnel. The fourth area is management of interagency contracting. 
Interagency contracts can leverage the government's buying power and provide a simplified and expedited method of procurement. But several factors can pose risks, including the rapid growth of the dollars involved, the limited expertise of some agencies in using these contracts, and recent problems related to their management. Various improvement efforts have been initiated to address this area, but improved policies and processes, and their effective implementation, are needed to ensure that interagency contracting achieves its full potential in the most effective and efficient manner.
DLA is a DOD Combat Support Agency under the supervision, direction, authority, and control of the Under Secretary of Defense for Acquisition, Technology, and Logistics. DLA's mission is to provide its customers—the military services and federal civilian agencies—with effective and efficient worldwide logistics support as required. DLA buys and manages a vast number and variety of items for its customers, including commodities such as energy, food, clothing, and medical supplies. DLA also buys and distributes hardware and electronics items used in the maintenance and repair of equipment and weapons systems. Customers determine their requirements for materiel and supplies and submit requisitions to any of four DLA supply centers. The centers then consolidate the requirements and procure the supplies for their customers. DLA provides its customers with requested supplies in two ways: some items are delivered directly from a commercial vendor while other items are stored and distributed through a complex of worldwide distribution depots that are owned and managed by both DLA and the military services. DLA refers to this ordering and delivery process as materiel management or supply-chain management. Figure 1 provides a snapshot of this process. Because DLA is the sole supplier for many critical items that can affect the readiness of the military services, the agency strives to provide its customers with the most efficient and effective logistics support. Thus, DLA has adopted a policy to provide customers with "the right item, at the right time, right place, and for the right price, every time." In an effort to institutionalize this customer support concept, DLA has adopted the Balanced Scorecard approach to measure the performance of its logistics operations. The scorecard, a best business practice used by many private and public organizations, is intended to measure DLA's performance by integrating financial measures with other key performance indicators around customers' perspectives; internal business processes; and organization growth, learning, and innovation.

Our work showed that customers at the eight locations we visited expressed both satisfaction and dissatisfaction with the services the agency provides. On the one hand, customers are generally satisfied with DLA's ability to quickly respond to and deliver requests for routine, high-demand, in-stock items; provide customers with an easy-to-use ordering system; and manage an efficient prime vendor program. On the other hand, customers at some locations were dissatisfied that, among other things, DLA is unable to obtain less frequently needed, but critical, items and parts or to provide accurate and timely delivery status information. Some customers did not express an opinion on the overall quality of customer service. One aspect of DLA customer support is to provide customers with supplies when they need them. Common supplies include vehicle parts such as pumps, hoses, filters, and tubing. Timeliness requirements, which sometimes call for deliveries to be made in a day or less, can vary by customer, depending on the particular item. However, customers at all locations we visited commented that they were generally satisfied with DLA's ability to provide most supply items in a time frame that meets their needs. Customers stated that the majority of the routine, frequently demanded supplies they order through DLA are delivered quickly—a view that is also supported by a February 2002 DLA performance review. 
The review concluded that the majority of requisitions (over 85 percent) were filled from existing inventories within DLA's inventory supply system. Similarly, a 2001 Joint Staff Combat Support Agency Review Team assessment of DLA's support to the unified commands indicated that overall, DLA received outstanding comments regarding its ability to provide its customers with timely supplies and services.

Customers were also satisfied with the ease of ordering supplies such as the pumps, hoses, and filters mentioned above. Customers stated that even though they conduct large amounts of business through DLA, they had few problems with the ordering process. According to some customers, this is because ordering is facilitated by on-line systems that work well and make information readily available.

Another method that DLA uses to ensure customer satisfaction is its prime vendor program, which DLA instituted to simplify the procurement and delivery of such items as subsistence and medical or pharmaceutical supplies that commonly have a short shelf life. The program enables customers to directly interact with vendors, thereby reducing the delivery time for these supplies. Two customers of these DLA-managed prime vendor programs told us the programs effectively reduced delivery time. For example, at one location, prime vendors reduced the delivery time of food items from 7 days—the time it took to deliver the items when purchased from DLA—to 2 days for items purchased directly from prime vendors. The customers we spoke with at a medical supply unit told us they were so pleased with the prime vendor's quick delivery time that they intend to obtain even more medical supplies from the prime vendor. They also told us that the prime vendor provides an additional service in the form of monthly visits to assess customer satisfaction with its services. The unit pointed out that DLA's customer support representatives are less likely to make such frequent visits.

Although customers seemed pleased with the way DLA handles routinely available items, some raised concerns over the agency's ability to provide critical items such as weapon system parts, timely and accurate information on the status of ordered items, and proactive management for high-priority requisitions. A Combat Support Agency Review Team assessment in 1998 also surfaced similar issues. Additionally, customers we talked to criticized how DLA manages customer-owned assets in DLA warehouses.

As previously noted, DLA strives to provide the timely delivery of all supplies and parts, including common consumable supply items like food; clothing and hardware; and critical parts for weapons systems such as tanks, helicopters, and missiles. Customers at four locations we visited told us that DLA was not able to deliver some critical items, such as weapons systems parts, in a timely manner, which significantly affected their equipment readiness. A number of customers told us that the items they have difficulty obtaining from DLA are those that are more costly or infrequently required. At two locations, customers used parts from existing equipment (known as "parts cannibalization") because they were unable to obtain the parts they needed. At two other locations, customers said they grounded aircraft and/or deployed units without sufficient supplies. Customers at one location experienced a delay of more than 6 months in obtaining helicopter parts. As a result, customers at this location told us that some of the unit's helicopters were unable to fly their missions. 
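Performance measures such as the supply availability (fill) rate cited above and logistics response time can be derived from requisition records. The sketch below is a simplified, hypothetical illustration of how such measures might be computed; the items, dates, and record fields are invented and do not reflect DLA's actual systems or data.

    # Hypothetical requisition records: which orders were filled from stock,
    # when they were ordered, and when they were received.
    from datetime import date

    requisitions = [
        {"item": "hydraulic pump", "filled_from_stock": True,
         "ordered": date(2002, 4, 1), "received": date(2002, 4, 6)},
        {"item": "rotor blade grip", "filled_from_stock": False,
         "ordered": date(2002, 1, 10), "received": date(2002, 4, 18)},
        {"item": "fuel filter", "filled_from_stock": True,
         "ordered": date(2002, 4, 2), "received": date(2002, 4, 4)},
    ]

    # Fill rate: share of requisitions satisfied from stock on hand.
    fill_rate = sum(r["filled_from_stock"] for r in requisitions) / len(requisitions)

    def average_response_days(records):
        # Logistics response time: elapsed days from order to receipt.
        return sum((r["received"] - r["ordered"]).days for r in records) / len(records)

    nonstocked = [r for r in requisitions if not r["filled_from_stock"]]
    print("Fill rate:", round(fill_rate * 100), "percent")
    print("Average response time, all requisitions:", round(average_response_days(requisitions)), "days")
    print("Average response time, nonstocked items:", round(average_response_days(nonstocked)), "days")

In practice, measures like these would be computed over large volumes of requisitions and broken out by item category, priority, and customer. 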
We reported in November 2001 that equipment cannibalizations adversely affect the military services, resulting in increased maintenance costs and lowered morale and retention rates because of the increased workload placed on mechanics. One customer also told us that DLA does not provide adequate information about items requiring long procurement lead times. The customer stated that having this information more readily available would aid customers in making decisions about the types and quantities of items they should retain to minimize the impacts of long DLA lead times. The 1998 Combat Support Agency Review Team's assessment conducted at military service field activities found that even though DLA met its overall supply availability goal of 85 percent, the remaining 15 percent of items that were not available "almost certainly includes a number of items that are critical to the operation of essential weapon systems." The assessment attributed this shortfall to flaws in DLA's requirements determination models, which are used to estimate customers' demands so that DLA can maintain sufficient inventory quantities. The study further stated that customers are not satisfied with the delivery time for items that are not in stock. In fact, in April 2002, the overall logistics response time was almost 100 days for nonstocked items—a problem that appears to have persisted for the last several years, in spite of efforts to reduce this time. Customers at four locations provided us with examples of back-ordered items having lead times in excess of 1 year, such as navigational instruments and airframe parts. When we discussed this issue further with DLA headquarters officials, they acknowledged that this is a problem and said they are working on a number of initiatives to address customers' concerns.

Customers need accurate and timely information on the status of their orders so they can plan equipment maintenance schedules to optimize the readiness of existing equipment. However, customers at six locations were frustrated in their attempts to obtain accurate and timely information from DLA item managers and the automated systems that are intended to provide status information on requisitions. Customers at three locations said that when they tried to directly contact item managers by telephone, the managers often could not be reached and voice-mail messages were seldom returned. Furthermore, military service customers told us that DLA's automated requisition systems often do not contain accurate status data. Of particular concern to customers are the expected shipping or delivery dates posted on the automated systems. These dates show when parts will be available and allow units to coordinate maintenance schedules. If the dates are incorrect, units cannot effectively plan to have equipment available to be repaired. We discussed this concern with DLA headquarters officials, who told us they are investigating the problem.

Another significant concern raised by customers at three locations was that DLA is not proactive in seeking alternate ways to obtain critical items that are not immediately available within DLA's supply system. DLA typically places such items on back order, which places a burden on customers to find their own means of obtaining the necessary items right away in order to meet mission needs. A number of customers at these three locations said they felt that DLA, in an effort to be more customer focused, should do more to seek out alternate sources of supply to alleviate these high-priority back orders. 
Some customers also remarked that the effort required to call vendors and solicit bids is a problem for their units because of limited staffing levels and a lack of contracting capabilities. In one instance, an aviation supply unit requisitioned a critical part from DLA that was needed to repair a helicopter unable to fly its mission. This requisition was placed on back order by DLA, and delivery was not expected to occur until 8 months later. Because of the critical nature of the needed part, the unit had to search for other means to obtain the part sooner. In fact, the unit directly contacted the same vendor that DLA was working with to fill the back orders and learned that the vendor had stock on hand and would be able to ship the item immediately. The unit subsequently purchased the part from that vendor instead of waiting for it to be available from DLA. In another instance, a DLA item manager informed an aircraft maintenance depot customer that $2 million worth of critical parts for a helicopter engine overhaul program would be placed on back order because the parts were not available from the DLA vendor. In researching listings for property to be disposed of, the customer found the required parts—still new and unopened in the manufacturers' containers—available for redistribution or sale within DLA's disposal system. As a result, the customer initiated a shipping request to procure the $2 million in helicopter parts for only the cost of shipping the items. DLA manages all warehousing functions at locations where a DLA distribution depot is collocated with a military activity. Management functions include, among other things, logging in and storing equipment. During our interviews, customers raised concerns over DLA's handling of these functions. At three of the sites we visited, the customers perceived that their assets were not being serviced and maintained as required. Their concerns centered on DLA's process for recording the ownership of equipment and the commingling of different customers' inventories. To assign asset ownership, DLA "codes" items in its automated inventory system. That is, DLA assigns unique codes to differentiate among Army, Navy, Marine Corps, Air Force, and DLA-owned assets. However, customers at three locations we visited stated that in numerous instances, DLA assigned inventory items to the wrong management account, thus creating the possibility that an item ordered and paid for by one unit or service could be issued to another. One location we visited had documented over $1 million worth of items coded into the wrong management account. Another location identified $621,000 worth of incorrectly coded items. Before the errors were corrected, neither activity could access the materials it needed. As a result, both locations invested unnecessary amounts of time and money in correcting DLA's errors. During our review, we brought this issue to the attention of DLA officials, who indicated that they would investigate the problem. Customers also expressed concerns about the commingling of service-owned assets with DLA-owned assets in DLA-managed warehouses. Like inaccurate coding, commingling creates a significant risk that items will be issued by the warehouse to someone other than the purchasing unit. As a result, the items would not be available to the true owner when needed.
Also, for equipment items that need periodic inspection and repair, there is a risk that the owner will expend resources to perform maintenance or repairs but not be able to retrieve the item because DLA mistakenly issued that item to a different requisitioning entity or military service. As a result, the "true owner" could have needlessly spent resources on items given to somebody else and also be left with items still needing repair. When we discussed this issue with DLA headquarters officials, they acknowledged the problem and told us that DLA is taking steps to address it with a National Inventory Management Strategy, which is part of DLA's goal to better manage its supply chain effectiveness. DLA's approach for obtaining customer service feedback has been of limited usefulness because the agency lacks a systematic, integrated means of obtaining adequate information on customer service problems. As a result, the agency does not have the information necessary to identify its customers' concerns, and more importantly, to initiate actions for improving customer service, thereby placing at risk DLA's ability to meet its overall goal of providing quality service to the war fighter. In particular, DLA has not (1) adequately identified all of its customers, (2) effectively solicited customer feedback, and (3) clearly identified those accountable for ensuring customer satisfaction. Obtaining good, meaningful feedback from customers means knowing who those customers are. DLA broadly defines a "customer" as someone who purchases items or directly causes products to be bought, but DLA has not identified who those individuals are from the multitude of organizations it deals with. DLA's current portfolio of customers is identified by approximately 49,000 address codes, known as DOD Activity Address Codes (DODAACs). The military services assign DODAACs to various organizations and activities for ordering supplies. However, these address codes, a legacy of a system built in the 1960s, contain little information about the customer's organization beyond a physical address. The codes are not associated with a meaningful customer contact point or, in many cases, with a specific organization that DLA can use as a basis for interacting with the customers using its services. As a result, DLA has no effective process to initiate and maintain contact with its customers for soliciting feedback. Without such a customer interface process, DLA has no routine means to understand customers' needs and to take appropriate corrective actions to address those needs. Our efforts to identify and interview DLA customers were hindered because a single DODAAC does not necessarily equate to a single customer. In many cases we found that one organization interacts with DLA using a number of DODAACs. For example, DLA's customer database shows over 580 DODAACs for Fort Bragg. However, according to DLA and Army officials, the number of Fort Bragg customer organizations interacting with DLA for these same DODAACs is smaller. This is partly because central order points at Fort Bragg are responsible for submitting and tracking orders for a number of smaller organizations, thereby covering multiple DODAACs. In addition, each of these organizations also uses multiple DODAACs to differentiate between various types of supply items, such as repair parts and construction materials. For example, one DODAAC is used for ordering numerous repair parts while another is used for ordering construction materials.
One of these customer organizations at Fort Bragg is the Division Support Command of the 82nd Airborne Division, which interacts with DLA for supplies ordered using 159 different DODAACs. Thus, many DODAACs could represent only one customer. Figure 2 illustrates the relationship between the DODAACs used by DLA to define customers and the Division Support Command. A principal aspect of DLA's strategic plan is for managers to focus on customers' needs and improve customer satisfaction by listening to customers about the quality of service they receive—both good and bad—and making changes necessary to enhance that service. DLA uses customer surveys, customer support representatives, and focus groups to obtain feedback from its customers on their level of satisfaction with the services DLA provides. For example, DLA conducts quarterly mail-out surveys to measure overall customer satisfaction levels. It also places customer support representatives at selected customer organizations to assist customers in planning and implementing new supply initiatives and in solving problems. However, we noted several weaknesses in these methods. Specifically, (1) the satisfaction survey response rates are too low to provide meaningful statistical analyses of customer satisfaction, (2) the survey instrument does not provide a sufficient means to understand why customers may be less than satisfied, and (3) customer support representatives are more reactive than proactive in soliciting customer feedback. The quarterly mail-out surveys that DLA uses to measure customer satisfaction elicit a relatively low number of responses from DLA customers, significantly limiting their usefulness in soliciting customer feedback. The survey response rates were too low to provide meaningful statistical analyses of customer satisfaction. The response rate for the 33,000 surveys that DLA mailed out in fiscal year 2001 averaged around 23 percent, and was only about 20 percent for the August 2001 cycle (the latest cycle for which results were available). Thus, fewer than one-quarter of the surveyed customers are providing input on how they perceive DLA support and what problems they are experiencing that may need to be addressed. Large survey organizations like Gallup attempt to get response rates of between 60 and 70 percent for their mail surveys. Experts on customer satisfaction measurement have stated that although survey response rates are never 100 percent, an organization should strive to get its rate as close as possible to that number. They suggest that ideally, organizations can obtain response rates of over 70 percent. The experts also noted that organizations conducting surveys commonly make the mistake of assuming that if a final sample size is large, the response rate is unimportant. This leads organizations to accept response rates well under 25 percent. However, such low rates can lead to serious biases in the data. An inadequate understanding of who its customers are likely contributes to DLA's problem with low response rates. The surveys are mailed to addresses associated with the DODAACs, and each includes a message asking that the survey be provided to the person most familiar with requisitioning and ordering supplies. However, during the fiscal year 2001 survey period, over 2,200 of the 33,000 surveys mailed (about 7 percent) were returned to DLA as "undeliverable" or were delivered to people who were no longer customers.
Furthermore, another 128 respondents noted in their survey returns that they do not consider themselves to be customers. DLA officials stated that the undeliverable rate increases when many units move to other locations or when service officials do not update DODAACs for changed addresses. The quarterly mail-out survey asks customers to rate their overall satisfaction with DLA products and services, along with specific aspects of support, such as providing products in time to meet needs and effectively keeping customers informed. While these surveys provide general aggregate information on the levels of customer satisfaction, they do not provide the means to understand why customers may be less than satisfied. For example, a number of customers we interviewed voiced concern that status dates for back-ordered items were sometimes wrong or varied among different inventory systems. The survey might indicate only an overall low level of satisfaction in the area of keeping customers informed but would not provide a reason. Even if this problem were systemic throughout DLA, the survey results alone would provide little opportunity to take immediate corrective action. Most recently, in June 1999, DLA supplemented a quarterly survey with two focus groups targeted at soliciting specific customer feedback on DLA's communication efforts. While DLA determined the focus groups to be an excellent feedback mechanism, the sample size was too small for DLA to run a statistical analysis of the data obtained, and the topics for discussion were limited to customer communication. DLA officials stated that they use a number of methods to obtain customer feedback. These include analyses of survey results, focus groups, and structured interviews. However, they acknowledged that the usefulness of these methods is somewhat limited owing either to low response rates; limited discussion topics; small sample sizes; or, in the case of structured interviews, the fact that the most recent ones were conducted in 1997. DLA's own survey results also indicate the flaws with its survey techniques. For example, DLA's fiscal year 2000 survey results show that customers rated as "low satisfaction" their ability to reach the right DLA person to meet their needs. However, the survey noted that "due to its high importance to customers and the myriad of interpretations of 'less than satisfied' responses to this attribute, more information will need to be gathered" to determine what issues are preventing customers from reaching the right person. This indicates that DLA's survey was not adequate to get at the underlying causes of customer dissatisfaction. In fact, with respect to low satisfaction ratings, the survey reports for fiscal years 2000 and 2001 recommended that DLA conduct one-on-one interviews to identify why customers were not satisfied with DLA services. Another difficulty that DLA encounters in using mail-out satisfaction surveys to identify customer problems is that the surveys are designed to protect the confidentiality of the respondents, which limits DLA's ability to follow up with customers for adequate feedback. As a result, there is no means to follow up with customers expressing low satisfaction levels to identify specific problems or to determine what, if any, corrective actions are needed. During our meetings with DLA customers, we were able to identify specific problems only by engaging in a dialogue with them about their experiences.
In conducting these in-depth discussions on aspects of the supply process such as placing orders, obtaining the status of outstanding requisitions, receiving supply items, and obtaining customer service, we were able to ask follow-up questions to determine exactly what problems they were experiencing in some of these areas. Another method DLA uses to facilitate customer service is the placement of customer support representatives at key customer locations. The use of these on-site representatives has the potential to provide DLA with a good link to its customers. In fact, some customers at three locations we visited specifically noted their satisfaction with the assistance the representatives provided. However, according to DLA headquarters officials, customer support representatives have been more reactive in that they help customers resolve only specific problems or assist in implementing new initiatives as requested. DLA headquarters officials told us that the representatives neither proactively solicit feedback on a regular basis from the multitude of customers in their geographical area nor reach out to identify the types of problems customers are experiencing. Furthermore, not all representatives are in contact with all DLA customers at their assigned locations. For example, at one location we visited, the representative was working closely with a specific customer organization. According to officials at this location, the representative has been very helpful to them in resolving supply problems and implementing new initiatives. However, a number of other customers at this location said they do not use the customer support representative at all because they use other options, such as call centers. Some customers noted that they were not even aware that there was such a representative in the area. The Combat Support Agency Review Team’s assessment in 1998 also found that some customers were unaware that customer support representatives even existed. The study identified a need for DLA to improve its interaction with customers and suggested that DLA “get out more and visit the customers” to identify and correct problems. Headquarters officials told us they assign customer support representatives to DLA’s larger customers, which account for about 5 percent of the overall customer population and 80 percent of the agency’s business. Officials also stated they recognize that the customer support representative program is not as effective as it should be. As a result, the agency currently has initiatives under way to (1) provide more customer support representatives and training, (2) standardize the representatives’ roles, and (3) make the representatives more proactive in serving customers. An important part of providing effective customer service is simplifying customers’ access to the organization, such as through centralized contact points. In addition, best practices research emphasizes the need for a single, centralized management framework for receiving customer feedback so that all information about the customers can be linked together to facilitate a more complete knowledge of the customer. However, DLA does not provide a “single face” to its customers for addressing their issues. To obtain assistance, customers sometimes need to navigate through a number of different channels, none of which are interconnected. This process causes confusion with customers and fragmented accountability throughout DLA for customer satisfaction. 
When customers order multiple types of supply items, they must use many channels, depending on the type of item, to obtain assistance from DLA. However, as DLA has noted, there is no single DLA contact point responsible for resolving customers' problems for all the items they requisition. For example, the supply centers are responsible for managing specific weapons system parts or types of commodities. As such, problem resolution is performed through each supply center, depending on the type of item the customer is ordering. To obtain assistance with requisitions, customers must contact the appropriate supply center, generally through its customer "call center," which is an activity dedicated to providing customer assistance for the particular items. In addition, Emergency Supply Operation Centers are available at each supply center for high-priority items. Also, customers can contact individual item managers at the supply centers to resolve problems with their orders. At three locations, some customers told us they are sometimes confused about whom to call and reported difficulties with getting in touch with the right person to resolve their problems. Customers at four locations were also frustrated with the quality of assistance provided by DLA, noting that while some of the DLA representatives were helpful, others were not able to give them the assistance they needed. To illustrate further, one aviation supply unit we visited had high-priority, back-ordered requisitions from each of the three DLA supply centers in Richmond, Virginia; Columbus, Ohio; and Philadelphia, Pennsylvania. As a result of these back orders, some of the unit's aircraft were unable to operate because of maintenance needs. In order to get assistance with these requisitions, either to request help in expediting the order or to obtain better status information, unit supply personnel needed to contact the call centers or the Emergency Supply Operation Centers at each of the supply centers, depending on the item. If there were a single DLA point of contact, the unit could go to that contact for assistance with all the items on its list of priority requisitions. Another problem with DLA's having many separate lines of communication with its customers is that meaningful information about those customers is not collected centrally for analysis. For example, each of the supply centers accumulates vital information about customer satisfaction through its contacts with customers. For instance, customers describe specific problems they are having when they seek help through the call centers. They might also convey information on problems they are having to various supply center teams conducting on-site visits for purposes of training or other liaison activities. However, this information is neither shared between the supply centers nor provided to the DLA corporate level for a global review. As a result, this information cannot be analyzed to identify systemic problems, and no single point of accountability exists to ensure that a given customer's concerns are being addressed. While DLA has initiatives under way to improve its customer service, there are opportunities to enhance these initiatives to provide for an improved customer feedback program. DLA has recognized that it is not as customer focused as it should be and is developing a new strategy to improve its relationship with its customers.
This new strategy, referred to as the Customer Relationship Management initiative, lays out an improved approach to customer service that creates a single DLA face to customers and focuses on customer segments to develop a better understanding of the customer. However, DLA’s initiatives do not completely address the limitations we identified in its current approaches for obtaining customer service feedback, such as by improving the way that it solicits feedback from individual customers. Research on best practices for customer service shows that successful organizations utilize multiple approaches to listen to their customers. These approaches include transaction surveys, customer interviews, and complaint programs that provide qualitative and quantitative data. The research also points to a need for centrally integrating all customer feedback so that managers can achieve a better understanding of customers’ perceptions and needs. In February 2002, DLA’s Deputy Director stated that DLA “has been internally focused rather than customer focused” and that its culture has been to talk to customers only “when problems arose.” To address this problem, DLA has begun a multimillion-dollar initiative aimed at focusing its business operations to better deliver important customer outcomes and actively managing relationships with its customers. This effort, known as Customer Relationship Management, is being developed in conjunction with DLA’s broader strategic planning initiatives such as Business Systems Modernization and implementation of the Balanced Scorecard approach to performance measurement. To implement Customer Relationship Management, DLA expects to spend about $73 million during fiscal years 2002-2008. According to DLA officials, when this effort is complete, DLA expects its customer service program to be on the same level as those in place at leading organizations in the private sector. The concept of the Customer Relationship Management initiative is a step in the right direction toward significantly improving DLA’s relationship with its customers. For example, part of the management initiative is a plan to radically change the focus of its business practices and improve its interactions with customers. To do this, DLA is grouping customers by business segment, collaborating with these segments to achieve a better understanding of their needs, and tailoring logistics programs to the unique needs of the segments. Examples of business segments include deployable combat forces, industrial facilities, and training activities. Table 1 illustrates the proposed customer segments, which will include major military service commands. In an effort to streamline the numerous customer-reporting channels currently in place, DLA plans to establish a multilevel-focused account manager structure and increase accountability. DLA hopes that this effort will reduce the number of channels a customer must navigate to obtain assistance and focus accountability for customer satisfaction on account managers rather than on item managers. DLA plans to establish account managers at three levels: National Account Managers are to collaborate with military services at the departmental level, for demand planning and problem resolution. Customer Account Managers are to be the “single DLA face” to each customer segment. These managers are to collaborate with executives at the segment level to develop service-level agreements that outline customer segment needs and to resolve issues at the segment level. 
Customer Support Representatives are working-level DLA personnel who, on a day-to-day basis, work with specific customers within a segment, providing on-site assistance as appropriate. In addition, DLA plans to place its existing customer contact points, such as call centers and Emergency Supply Operation Centers, under the control of account managers instead of the supply centers. Although the Customer Relationship Management initiative is conceptually sound, the program’s implementation actions do not completely address the limitations we identified in its current practices. For example, the new strategy does not lay out milestones for implementing the program or specific improvements on how DLA solicits detailed feedback from its individual customers on their perceptions of service and the specific problems they are experiencing. The strategy also does not include a process for developing actions in response to issues that customers have identified and involving customers in that process. Furthermore, even though the plans include making account managers responsible for collecting customer feedback and exploring the idea of using Web-based tools to obtain customer feedback, they do not lay out specific tools or processes to accomplish this. To further illustrate, under the new Customer Relationship Management plan, an account manager would be created with responsibility for all customers within the U.S. Army Forces Command, which represents the Army’s deployable forces segment. (See table 1.) This manager would work with the Army’s customer representatives to identify customers’ needs at the Forces Command level and reach formal agreements on service. However, there is no revised set of tools in the plan for collecting detailed feedback on an ongoing basis from the individual customer organizations representing the more than 6,600 DODAACs (address codes that represent mailboxes, locations, or people) in the Forces Command. Furthermore, the improvement initiatives do not provide for actions to link military service customer DODAACs to specific accountable organizations. Under the Customer Relationship Management program, DLA has developed a customer profile database that links DODAACs to major military commands, such as the U.S. Army Forces Command. It also plans to link each DODAAC to a business segment through this database sometime in the future. However, as noted previously, the major command and business segment levels comprise numerous DODAACs. Interaction with customers to get detailed feedback on their level of satisfaction requires better identification of customer organizations beyond the data currently associated with a DODAAC. Studies examining best practices in the area of customer service have found that leading organizations use multiple approaches to listen to their customers’ concerns. In particular, a 2001 Mid-American Journal of Business study pointed out that best practice companies use multiple tools to gather these data rather than relying on a single method such as a customer survey, which might be too narrow in scope and limited in its application to fully capture customers’ concerns. The 2001 Mid-American Journal study and others concluded that the best approach for obtaining customer feedback is to use a broad measurement system with more than one listening tool to capture customers’ input from many different perspectives. Using different tools alone is not enough to effectively obtain customer feedback. Centrally linking the feedback obtained is also important. 
Best practices research shows that information obtained through various methods needs to be integrated in order to gain a more complete understanding of customers. Thus, by linking all the various feedback tools in a standard and consistent manner, the organization would have better diagnostic information to guide improvement efforts. Our discussions with private sector experts and our reviews of literature on customer service best practices show that leading organizations such as AT&T WorldNet Services, U.S. West, and Eastman Chemical combine quantitative and qualitative listening tools to obtain customer feedback and then centrally integrate the data in one location. Quantitative tools include such methods as customer satisfaction surveys and customer complaints, which can provide measurable data for use in performance scorecards. Qualitative tools include focus groups, personal interviews, and observation and are used by organizations to provide a more in-depth understanding of their customers. According to the research, not all tools are appropriate for all organizations, so careful selection is important. Examples of "listening" tools used by the best practices organizations we identified through our reviews of best practice studies follow: Customer satisfaction surveys. Research shows that most major organizations use listening tools such as relational and critical incident surveys to periodically capture customers' overall perceptions about their organization and to measure satisfaction with specific transactions soon after they occur. These surveys can be administered through the mail, such as with DLA's quarterly satisfaction survey; by telephone; in person; or electronically via the Internet. However, feedback from mail and electronic-based surveys can be more limited than that obtained through other methods because there is no opportunity to probe the respondent for better, more detailed information. AT&T WorldNet Services, U.S. West, Eastman Chemical, and Hewlett-Packard are among the leading organizations that are turning to critical incident surveys in conjunction with other tools to learn more about customers' perceptions. Critical incident surveys are becoming more popular in the private sector because they provide information related to specific processes, which can be used to make specific improvements. Customer complaints. Gathering complaint data is a standard practice for most companies. All aspects of the customer complaint process are measured and tracked through this mechanism. Information collected and analyzed from this approach includes the nature of the complaint, speed of resolution, and customer satisfaction with the resolution. Eastman Chemical, for example, uses customer complaint data in conjunction with a survey tool to obtain customer feedback. It organizes the complaint data along the same attributes as the survey data. Benchmark surveys. Benchmark surveys gather perceptions of performance from the entire market. These surveys usually capture customer perceptions of performance regarding top competitors in an industry. This allows the company to examine its customer-perceived strengths and weaknesses in the overall marketplace. Best practices companies, such as Sun Microsystems, use this information primarily in the strategic planning process to identify their competitive advantage in the marketplace and to identify opportunities and shortfalls in the industry.
While continuous improvement may be a result of this listening tool, the real value, according to the research in this area, comes from breakthrough thinking to gain a sustainable advantage. Won-lost and why surveys. "Lost" customers—those who no longer place orders with a company—can be an excellent source of valuable information. Some companies, such as Eastman Chemical, employ "won-lost and why" surveys to measure actual customer behavior and the rationale behind the behavior. This survey is administered on a current basis, soon after customers are "won" or "lost" (i.e., soon after they decide to use or to drop a company). If a customer is won or lost, the company then probes the customer as to why its business was won or lost. For companies with a large number of customers, this tool may be implemented in survey form. Focus groups. Organizations use focus groups to get better information from customers than survey results provide. In these groups, customers are probed about why they answered survey questions the way they did. DLA has used focus groups to get detailed feedback on a single topic, but as noted previously, the number of individuals making up the focus groups was too small to draw agency-wide conclusions. AT&T Universal Card Services (now part of Citigroup) conducts multiple focus groups per year to discuss a wide range of topics. In these forums, both satisfied and dissatisfied customers discuss the company's service, products, and processes. Customer interviews. Conducting interviews with customers can provide a way to get very detailed information about their specific needs and problems. Like focus groups, this tool is used by leading customer service organizations to probe survey respondents as to why they answered survey questions a certain way. U.S. West identifies dissatisfied customers from its surveys and follows up with them to determine what problems they are having and how they can be fixed. Customer observation. In performing observations, organizations send teams to visit customers and observe how those customers interact with the organization on a daily basis. This tool complements verbal information obtained through customer interviews and focus groups in that it confirms that information and provides a deeper understanding of it. Management listening. Using this tool, managers listen in on actual customer calls to the organization to learn first-hand about what customers are experiencing. In an example of this technique, one best practice company encourages all of its managers, including the chief executive officer, to listen to customer calls. Customer service representatives. Collecting information from those employees who are in continuous direct contact with customers provides valuable information to best practice organizations. Often, these representatives are among the first to recognize customer problems. As mentioned previously, DLA uses customer support representatives to obtain feedback. However, according to DLA officials, the agency does not currently have enough representatives assigned to its customers, and the representatives generally are not proactive in obtaining customer feedback. Furthermore, while DLA's representatives provide headquarters with monthly written reports on customer support, best practice organizations have taken this a step further by using electronic feedback mechanisms.
Research shows that best practice organizations have their customer service representatives gather ideas, perceptions, and opinions from customers and report them electronically through a corporate intranet system. These data are then coded and distributed throughout the organization, thereby centrally integrating the feedback information. Figure 3 shows an example of how multiple approaches can be linked, as illustrated by AT&T Universal Card Services' use of a "Customer Listening Post" team. While providing high-quality service to its customers is an overall goal, DLA lacks the information necessary to systematically assess the quality of the service it is providing. Indications are that customers, while satisfied in some areas, are dissatisfied in others. The failure to address areas of dissatisfaction means opportunities to improve supply readiness are being missed. DLA is in the process of developing a program to improve its customer service relationships, but it currently does not have in place an effective mechanism that systematically gathers and integrates information on customers' views of its service so that solutions can be identified to better meet customers' needs. The agency's current practices do not always surface these concerns or, more importantly, provide information on why they exist or how they can be corrected. To its credit, DLA is undertaking a number of initiatives to improve the effectiveness of its customer relationship improvement efforts. However, these initiatives do not completely address the limitations of its current approaches for obtaining customer feedback because DLA (1) has not yet fully determined who its customers are or how best to serve their needs; (2) has not established the means to determine the underlying causes for customer dissatisfaction in order to fully achieve its strategic goal of providing customers with the most efficient and effective worldwide logistics support; and (3) lacks a centralized, customer-driven, integrated framework in which to solicit feedback from its customers. Also, customer mail-out surveys are insufficient for identifying the causes of customer dissatisfaction. Finally, DLA is not yet making full use of best practice techniques, as discussed in this report, to identify and address customers' concerns. To improve DLA's ability to determine its customers' needs, identify solutions for better meeting those needs, improve the supply readiness of military units, and improve the efficiency and effectiveness of depot maintenance repair activities, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to require the Director of DLA, as part of the agency's customer relationship improvement efforts, to take the following actions: Develop a comprehensive plan for obtaining customer feedback that includes but is not limited to the following actions: Work with the military services to arrive at a mutually agreed determination of the military organizations that function as DLA "customers." In doing so, both DLA and the services should identify officials accountable for providing and receiving customer feedback. Develop a customer feedback program that uses a variety of approaches such as those depicted in the best practices research discussed in this report. In developing this program, pilot tests could be used to determine which approaches meet agency and customer needs.
Establish milestones for implementing the customer feedback program and for identifying the office accountable for its implementation. Integrate all customer feedback into an overall assessment to provide managers with a better understanding of customers’ perceptions and concerns. Establish a process for developing actions in response to issues that are identified from the customer feedback program and involve customers in that process. Establish processes for providing customers with information on actions that are being taken to address customer feedback issues. Improve the usefulness of its customer survey instruments by identifying ways to improve customer response rates, such as the use of effective follow-up procedures. Clarify guidance for customer support representatives to ensure that they are responsible for routinely contacting customers to obtain customer feedback. We also recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force to identify specific organizations that will be responsible for working with DLA in establishing a mutually agreed determination of those activities, organizations, and individuals that function as DLA “customers” and for working with DLA as it implements its customer feedback program. The Department of Defense provided written comments on a draft of this report, which are reprinted in their entirety in appendix II. DOD generally concurred with our recommendations and agreed that DLA needs to increase its focus on customer satisfaction. The department also noted that DLA is taking or is planning to take a number of actions to respond to our recommendations. For example, under DLA’s Customer Relationship Management program, DLA National Account Managers are to identify customer organizations in concert with their military service negotiating partners. In addition, DOD intends to use its Defense Logistics Executive Board as a forum to obtain input from each of the services on the specific organizations that will be responsible for working with DLA on customer feedback issues. Furthermore, DLA intends to better integrate customer feedback into an overall assessment and to improve its processes for providing customers with information on actions that are being taken to address customers’ issues. DOD did not agree with our recommended action that DLA develop a customer feedback program that uses a variety of approaches, such as those depicted in the best practices research discussed in this report. DOD stated that DLA’s use of feedback mechanisms should not be dictated by the best practices research we discussed. It further stated that DLA should continue to have the latitude to use its customer satisfaction measurement resources in the most efficient manner. Our discussion of best practice approaches was only intended to illustrate various techniques that some best practices organizations use to improve the ways they collect and analyze customer feedback. It was not our intent to prescribe specific approaches that DLA should use. Rather, we included examples of some of the approaches to best illustrate the concept of using multiple and integrated customer feedback approaches to better listen to customers’ opinions and concerns. 
We continue to believe that DLA's customer feedback program could benefit from studying best practice organizations, such as those discussed in this report as well as others, to identify additional feedback approaches that could be pilot-tested and implemented to help strengthen its current customer feedback efforts. We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; the Commandant of the Marine Corps; the Director, Defense Logistics Agency; the Director, Office of Management and Budget; and other interested congressional committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. To determine how customers perceived the quality of service they received, we examined customer satisfaction studies and surveys such as the Defense Logistics Agency's (DLA) fiscal year 2000 and fiscal year 2001 quarterly satisfaction surveys and the Joint Staff Combat Support Agency Review Team's 1998 and 2001 assessments. In addition, we performed a case study analysis using a judgmentally selected sample of DLA customers that included the use of structured interviews to identify customers' perceptions and levels of satisfaction with DLA service. The details of our customer selection process, interview techniques, and sampling methodology follow: We initially selected customers using DLA-provided databases of its "top" military customers, which DLA primarily based on sales volume. DLA identified customers by Department of Defense Activity Address Codes (DODAACs) or military installation. We compiled the DLA information into a single database that included over 800 customer records accounting for about $5.6 billion of DLA's total $7.8 billion nonfuel supply sales (about 72 percent) to the military services for fiscal year 1999, the most recent available data at the time of our review. We judgmentally selected customers from the database to maximize our coverage of the following significant variables: dollar sales, geographic location, DLA-defined customer type (i.e., deployed and deployable forces, industrial organizations, training activities, and the "other" segment), commodity type, and military service branch. We did not validate the accuracy of the DLA sales data, since the data's purpose was to provide us with general customer sales activity. Because the DLA-provided customer DODAAC and installation data did not provide us with sufficient information about specific customer organizations and related points of contact, we held discussions with DLA and military service officials to further define customers and subsequently visited those customer organizations and activities. We conducted over 50 structured interviews with customers at more than 20 selected activities. We designed the interview questions on the basis of aspects of DLA's supply process: submitting requisitions, following up on the status of open requisitions, contacting DLA for customer service, and receiving supplies. We also discussed other factors related to DLA support, such as the availability, price, and quality of DLA-provided supply items. Some customers did not express an opinion on the overall quality of customer service.
Our initial sample of DLA customers included customers from more than 20 locations throughout the continental United States and overseas, covering multiple customer types within each military service. However, because of the September 11, 2001, terrorist attacks on the World Trade Center in New York and the Pentagon in Washington, D.C., we did not complete our planned visits. As a result, we limited our visits to eight military service customer locations within the continental United States, as shown in figure 4. Our selection of customers included all four military services and each of the DLA customer types except for deployed forces. Because we did not draw a statistical sample and we limited our selection of customers, the results of our work cannot be projected to DLA as a whole. However, DLA surveys, Combat Support Agency Review Team assessments, and comments from DLA officials suggest that many of the issues we raise are systemic problems. To determine how useful the agency's approaches are for obtaining customer service feedback, we met with DLA headquarters officials to discuss current processes and planned initiatives for measuring customer service and obtaining feedback. We also discussed feedback mechanisms, such as the use of DLA customer support representatives and quarterly surveys, with DLA customers. We reviewed relevant reports, briefing documents, and other key information related to the agency's processes and mechanisms for soliciting customer feedback. Additionally, we examined the agency's customer feedback survey techniques and methods, such as the use of quarterly mail-out surveys and focus groups. Furthermore, we conducted an extensive literature search of best practice organizations to determine popular techniques for collecting customer feedback and their advantages and disadvantages. To determine whether there are opportunities to enhance DLA's initiatives to improve customer service, we performed a comparative analysis of DLA's current practices and planned initiatives against best practices that we identified through extensive literature searches. We reviewed related DLA planning documents and met with agency officials to discuss the agency's plans. Through our literature search, we identified relevant research performed in the area of best practices in customer satisfaction. We reviewed a number of pertinent studies and held discussions with customer satisfaction experts from industry and academia to identify methods and techniques used in leading organizations to obtain meaningful feedback from their customers. We performed our work from March 2001 to June 2002 in accordance with generally accepted government auditing standards. Elizabeth G. Mead, Cary B. Russell, David R. Warren, Jeffrey A. Kans, Jack Kriethe, David Schmitt, Patricia Albritton, Brian G. Hackett, Latrealle Lee, and Stanley J. Kostyla also made significant contributions to this report.
The Defense Logistics Agency (DLA) performs a critical role in supporting America's military forces worldwide by supplying every consumable item--from food to jet fuel--that the military services need to operate. Although customers at the eight locations GAO visited were satisfied with some aspects of routine service, such as delivery time for routine parts and certain contractor service arrangements, customers also raised a number of points of dissatisfaction, particularly with regard to the detrimental impact of DLA's service on their operations. The agency's approach for obtaining customer service feedback has been of limited usefulness because it lacks a systematic integrated approach for obtaining adequate information on customer service problems. Although DLA has initiatives under way to improve its customer service, there are opportunities to enhance these initiatives to provide for an improved customer feedback program.
VHA offers eligible veterans a standard medical benefits package, including primary care. To receive these health care benefits, veterans must first complete VA’s enrollment application—the 1010 EZ—and submit it online, in person, by mail, or by fax to a VA medical center or VA’s Health Eligibility Center. Health Eligibility Center officials query several VA and Department of Defense databases to verify veterans’ eligibility for benefits and share this information with the applicable medical centers. If the Health Eligibility Center cannot make a determination as to veterans’ eligibility, officials notify veterans’ local medical centers to take further action, such as requesting additional documentation of military service records. The Health Eligibility Center sends a letter to each veteran once it has made an eligibility determination with the decision and a description of benefits. Veterans requesting on their enrollment applications that VA contact them to schedule appointments, if eligible, are to be placed on VHA’s New Enrollee Appointment Request (NEAR) list. (See fig.1 for an illustration of how newly enrolling veterans request on their enrollment applications that VA contact them to schedule appointments.) The NEAR list is intended to help VA medical centers track newly enrolled veterans needing appointments. It includes information regarding the medical center at which the veteran wants to be seen, contact information for the veteran, and whether the veteran is waiting to be contacted to schedule an appointment. If a veteran submits an application in person, medical center staff may schedule an appointment for the veteran at that time. Once the appointment is scheduled, the request is considered “filled” and the veteran’s name is removed from the NEAR list. According to VHA policy, as outlined in its July 2014 interim scheduling guidance, VA medical center staff should contact newly enrolled veterans to schedule appointments within 7 days from the date they were placed on the NEAR list. When contacted by the medical center, which may be by phone or letter, each veteran is scheduled for a 60-minute appointment based on the veteran’s preferred date—the date the veteran wants to be seen. Schedulers negotiate appointment dates with veterans using the preferred date and appointment availability. In July 2015, VA’s Health Resource Center began implementing a new program called “Welcome to VA.” Under this program, Health Resource Center staff located at central call centers are responsible for contacting each newly enrolled veteran within 5 days of the veteran’s enrollment date. Call center staff are to contact each veteran who submits an enrollment application and is determined eligible for health care, regardless of whether the veteran requests to be contacted on the application, to determine whether the veteran wants to schedule an appointment. To make an appointment, Health Resource Center staff are to provide the veteran with the phone number for his or her preferred VA medical center and connect the veteran with a local scheduler. Health Resource Center officials explained that although this program was running concurrently with the NEAR list process at the time of our review, the program will eventually replace the NEAR list process. When fully implemented, which is expected in spring 2016 according to Health Resource Center officials, medical centers would use a list generated by the Health Resource Center to contact veterans who request appointments. 
If VA medical center schedulers attempt to schedule appointments for new patients, including newly enrolled veterans, and no appointments are available within 90 days from when veterans would like to be seen, VHA policy requires that veterans be added to the electronic wait list. As appointments become available, schedulers contact veterans on the electronic wait list to schedule their appointments, at which time they are removed from the wait list. According to VHA policy, providers should document clinically appropriate return-to-clinic dates in the veterans' medical records at the end of each appointment. Follow-up appointments requested by providers within 90 days of seeing a veteran should be scheduled before the veteran leaves the clinic. Follow-up appointments requested beyond 90 days are to be entered into the VA medical center's Recall Reminder System. The recall system automatically notifies veterans of the need to schedule a follow-up appointment. When a veteran receives an appointment reminder, he or she is asked to contact the clinic to make an appointment. Primary care appointments for established patients are generally scheduled for 30 minutes. Schedulers determine the date of each follow-up appointment based on the return-to-clinic date the provider documented in the veteran's medical record. VHA's July 2014 interim scheduling guidance established an appointment wait-time goal of 30 days for new patients based on the date each appointment was created (referred to as the create date) and 30 days for established patients based on each veteran's preferred date. In October 2014, in response to the Choice Act, VHA eliminated the wait-time measure based on create date. It instituted a new wait-time goal of providing appointments for new and established patients not more than 30 days from the date that an appointment is deemed clinically appropriate by a VA health care provider, or if no such determination has been made, the veteran's preferred date. VHA, VISNs, and VA medical centers each have responsibilities for developing scheduling and wait-time policies for primary care and for monitoring wait-time measures to ensure medical centers are providing timely access. The VHA Director of Access and Clinical Administration and VHA's Chief Business Office have responsibilities for oversight of medical centers' implementation of VHA's enrollment and scheduling policies, including measuring and monitoring ongoing performance. Each VISN is responsible for overseeing the facilities within its designated region, including the oversight of enrollment, scheduling, and wait lists for eligible veterans. Finally, medical center directors are responsible for ensuring local policies are in place for the timely enrollment of veterans and for the effective operation of their primary care clinics, including affiliated community-based outpatient clinics and ambulatory care centers. In addition, medical center directors are responsible for ensuring that any staff who have access to the appointment scheduling system have completed the required VHA scheduler training. Our review of medical records for a sample of veterans at six VA medical centers found several problems in medical centers' processing of veterans' requests that VA contact them to schedule appointments, and thus not all newly enrolled veterans were able to access primary care.
For the 60 veterans in our review who had requested care, but had not been seen by primary care providers, we found that 29 did not receive appointments due to the following problems in the appointment scheduling process: Veterans did not appear on NEAR list. We found that although 17 of the 60 veterans in our review requested that VA contact them to schedule appointments, medical center officials said that schedulers did not contact the veterans because they had not appeared on the NEAR list. Medical center officials were not aware that this problem was occurring, and could not definitively tell us why these veterans never appeared on the NEAR list. For 6 of these veterans, VA medical center officials told us that when they reviewed the medical records at our request, they found that these veterans’ requests were likely filled, in error, by a compensation and pension exam. In these cases, officials had no record that these veterans had appeared on the NEAR list that schedulers used to contact veterans. Officials at one medical center explained that they encourage providers to discuss how to make an appointment with veterans at the end of the compensation and pension exam. For the remaining 11 veterans, after reviewing their medical records, officials were unable to determine why the veterans never appeared on the NEAR list. VA medical center staff did not follow VHA scheduling policy. We found that VA medical centers did not follow VHA policies for contacting newly enrolled veterans for 12 of the 60 veterans in our review. VHA policy states that medical centers should document three attempts to contact each newly enrolled veteran by phone, and if unsuccessful, send the veteran a letter. However, for 5 of the 12 veterans, our review of their medical records revealed no attempts to contact them, and medical center officials could not tell us whether the veterans had been contacted to schedule appointments. Medical centers attempted to contact the other 7 veterans at least once, but did not follow the process to contact them as outlined in VHA policy. For 24 of the 60 veterans who did not have a primary care appointment, VA medical center officials stated that scheduling staff were either unable to contact them to schedule an appointment or upon contact, the veterans declined care. Officials stated that they were unable to contact 6 veterans either due to incorrect or incomplete contact information in veterans’ enrollment applications, or to veterans not responding to medical centers’ attempts to contact them. In addition, VA medical center officials stated that 18 veterans declined care when contacted by a scheduler. These officials said that in some cases veterans were seeking a VA identification card, for example, and did not want to be seen by a provider at the time. The remaining 7 of the 60 veterans had appointments scheduled but had not been seen by primary care providers at the time of our review. Four of those veterans had initial appointments they needed to reschedule, which had not yet been rescheduled at the time of our review. The remaining three veterans scheduled their appointments after VHA provided us with a list of veterans who had requested care. 
Based on our review of medical records for a sample of veterans across the six VA medical centers, we found that the average number of days between newly enrolled veterans’ initial requests that VA contact them to schedule appointments and the dates the veterans were seen by primary care providers at each medical center ranged from 22 days to 71 days. (See table 1.) Slightly more than half of the 120 veterans in our sample were able to see a provider in less than 30 days; however, veterans’ experiences varied widely, even within the same medical center, and 12 of the 120 veterans in our review waited more than 90 days to see a provider. We found that two factors generally impacted veterans’ experiences regarding the number of days it took to be seen by primary care providers. First, appointments were not always available when veterans wanted to be seen, which contributed to delays in receiving care. For example, one veteran was contacted within 7 days of being placed on the NEAR list, but no appointment was available until 73 days after the veteran’s preferred appointment date. This veteran was placed on the electronic wait list per VHA policy, and a total of 94 days elapsed before the veteran was seen by a provider. In another example, a veteran wanted to be seen as soon as possible, but no appointment was available for 63 days. Officials at each of the six medical centers in our review told us that they have difficulty keeping up with the demand for primary care appointments for new patients because of shortages in the number of providers or a lack of space due to rapid growth in the demand for these services. Officials at two of the medical centers told us that because of these capacity limitations, they were placing veterans who requested primary care services on an electronic wait list at the time of our review. Second, we found that weaknesses in VA medical center scheduling practices may have impacted the amount of time it took for veterans to see primary care providers and contributed to unnecessary delays. Staff at the medical centers in our review did not always contact veterans to schedule an appointment according to VHA policy, which states that attempts to contact newly enrolled veterans to schedule appointments must be made within 7 days of their being added to the NEAR list. Among the 120 veterans included in our review, 37 veterans (31 percent) were not contacted within 7 days to schedule an appointment, as VHA policy requires, and compliance varied across medical centers. (See table 2.) We found that some medical center processes for contacting newly enrolled veterans to schedule appointments were inconsistent with VHA policy and may have contributed to delays in scheduling newly enrolled veterans: VA officials at one medical center told us that they send letters to newly enrolled veterans who apply online, which inform the veterans that it is their responsibility to come into the medical center to complete enrollment and schedule appointments. According to VISN officials with oversight of this VA medical center, this practice is not consistent with VHA scheduling policies, and veterans should not be asked to come to medical centers to schedule their appointments. In one case, a veteran enrolled online and requested VA contact him to schedule an appointment, but according to medical center officials, the veteran was not called to schedule an appointment, although a letter was later sent.
As a result, officials said he did not receive an appointment until he contacted the medical center to again ask for one 47 days later. At another medical center, we found that the medical center’s process for contacting newly enrolled veterans involves initial calls to explain their VHA health care benefits. After the initial call, each veteran’s name is sent to a scheduler to contact the veteran to schedule an appointment. Although officials indicated that initial outreach to the veterans in our review often occurred within 7 days of their addition to the NEAR list, these veterans were not always contacted again to schedule appointments within 7 days, as required by VHA’s scheduling policy. Finally, officials at a third medical center told us they added every new enrollee to the electronic wait list even when there were appointments available within 90 days of the veteran’s request. The VA medical center then used the electronic wait list rather than the NEAR list to identify veterans who needed to be contacted to schedule an appointment. For example, a veteran requested VA contact him to schedule an appointment and was added to the electronic wait list. Rather than contacting the veteran within 7 days of his being added to the NEAR list, as VHA policy requires, officials contacted the veteran 19 days later to schedule an appointment. Officials told us that they changed their process during our review and are now using the NEAR list to identify newly enrolled veterans who need appointments. Our review found that of 60 veterans who received follow-up primary care, most received care within 30 days of the return-to-clinic date determined by each veteran’s provider, in accordance with VHA’s policy. Specifically, return-to-clinic dates were applicable and documented in the medical records of 51 veterans, and 38 of these veterans were seen by providers within 30 days of their return-to-clinic dates. However, the percentage of veterans seen within 30 days of their return-to-clinic dates varied across medical centers in our review. (See table 3.) We found several reasons why the 13 veterans (out of the 51 for whom return-to-clinic dates were applicable) were not seen for follow-up appointments within 30 days of their return-to-clinic dates: Improperly managed recall reminder process. For 6 of the 13 veterans, VA medical center staff did not properly manage their “recall reminder” process, which notifies veterans that they need to schedule a follow-up appointment, as outlined in VHA policy. Our review of the veterans’ medical records and discussions with medical center officials found that medical center staff did not place 5 veterans on the recall list to receive appointment scheduling reminders as outlined in VHA policy, and thus the veterans were not contacted to schedule their appointments in a timely manner. For the other veteran, one recall notice was sent, and schedulers did not attempt to make contact again, according to medical center officials. Lack of available appointments or veterans’ preference for later appointment dates. Four of the 13 veterans were seen more than 30 days beyond the return-to-clinic dates due to the lack of available appointments or based on their preferred dates. Cancellations and no-shows.
For the remaining 3 of the 13 veterans, medical records indicated that appointments were initially scheduled within 30 days of the return-to-clinic dates; however, 2 veterans did not show up for their appointments and the other veteran’s appointment was canceled by the primary care clinic. These veterans were ultimately seen beyond the 30-day time frame. A key component of VHA’s oversight of veterans’ access to primary care, particularly for newly enrolled veterans, is the monitoring of appointment wait times. However, VHA monitors only a portion of the overall time it takes newly enrolled veterans to access primary care. VHA officials said they regularly review data related to access, including data on wait times for primary care. VHA has developed reports to track these data for each VISN and VA medical center. VHA officials indicated that they look for trends in average wait times across medical centers, and also track the percentage of veterans seen within 30 days of their preferred dates or return-to-clinic dates. Officials from all six VISNs and medical centers in our review said they use these reports, and other locally developed reports, to monitor wait times for each of their sites of care to identify any trends. VISN and VA medical center officials said if they find wait times are increasing, they work to identify solutions, which the medical center is then tasked with implementing. For example, officials from two VISNs and medical centers told us that in response to increasing wait times for primary care, actions have been taken to improve patient access, including opening new sites of care and hiring additional providers. We found, however, that VHA monitors only a portion of the overall time it takes newly enrolled veterans to access primary care, which is inconsistent with federal internal control standards. According to the internal control standards for information and communications, information should be recorded and communicated to management and others within the entity who need it to carry out their responsibilities. However, VHA uses veterans’ preferred appointment dates—which are not determined until schedulers make contact with veterans—as the basis for measuring how long it takes veterans to be seen, rather than the dates newly enrolled veterans requested on their enrollment applications that VA contact them to schedule appointments. (See fig. 2.) Therefore, VHA does not account for the time it takes to process enrollment applications, or the time it takes VA medical centers to contact veterans to schedule their appointments. Consequently, data used for monitoring and oversight do not capture veterans’ overall experiences, including the time newly enrolled veterans wait prior to being contacted by a scheduler, which makes it difficult for officials to effectively identify and remedy scheduling problems that arise prior to making contact with veterans. Our review of medical records for 120 newly enrolled veterans found that, on average, the total amount of time it took to be seen by primary care providers was much longer when measured from the dates veterans initially requested VA contact them to schedule appointments than it was when using appointment wait times calculated using veterans’ preferred dates as the starting point. (See table 4.)
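The difference between what VHA monitors and what we measured can be illustrated with a simple date calculation. The following Python sketch is a hypothetical illustration, not VHA's scheduling software: it computes the wait VHA monitors (days from the preferred date to the appointment) alongside the overall wait we measured (days from the veteran's initial request that VA contact them, as recorded on the enrollment application). The function and argument names are assumptions for this example.

    from datetime import date

    def wait_times(request_date, preferred_date, seen_date):
        """Return (wait VHA monitors, overall wait we measured), both in days.

        request_date   -- date the veteran asked VA to contact them (enrollment application)
        preferred_date -- preferred date recorded once a scheduler made contact
        seen_date      -- date the veteran was seen by a primary care provider
        """
        monitored_wait = (seen_date - preferred_date).days  # basis of VHA's wait-time data
        overall_wait = (seen_date - request_date).days      # includes time before scheduler contact
        return monitored_wait, overall_wait

    # Example patterned on a case described below (the request date is approximated
    # from the report's figures): request in mid-December 2014, preferred date of
    # March 1, 2015, seen March 3, 2015.
    print(wait_times(date(2014, 12, 17), date(2015, 3, 1), date(2015, 3, 3)))  # (2, 76)

The 2-day figure is what appears in the medical center's data, while the 76-day figure reflects the veteran's overall experience; table 4 summarizes this difference across our sample.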
The amount of time elapsed between when veterans initially requested VA contact them to schedule appointments and when they are seen by providers may be due to veterans’ decisions such as not wanting to schedule appointments immediately, or cancelling and rescheduling initial appointments. However, we found the amount of time between initial requests and when they received care also varied due to factors unaffected by veterans’ decisions, including VA medical centers not contacting veterans in a timely manner, medical centers being unaware of veterans’ requests, and difficulties in processing veterans’ requests that they be contacted to schedule appointments. For example: One veteran applied for VHA health care benefits in December 2014, which included a request to be contacted for an initial appointment. The VA medical center contacted the veteran to schedule a primary care appointment 43 days later. When making the appointment, the medical center recorded the veteran’s preferred date as March 1, 2015, and the veteran saw a provider on March 3, 2015. Although the medical center’s data showed the veteran waited 2 days to see a provider, the total amount of time that elapsed from the veteran’s request until the veteran was seen was actually 76 days. For another veteran, the medical record indicated that a request to schedule an appointment was made in October 2014. According to VA medical center officials, the veteran had a compensation and pension exam, and as a result, this veteran was not on the list of those who needed to be contacted to schedule a primary care appointment. Officials told us that the veteran contacted the medical center in January 2015 to schedule an appointment, with a preferred date in January 2015. The veteran had his appointment in February 2015. While the medical center’s data show the veteran waited 13 days to be seen, the total amount of time that elapsed from the veteran’s initial request to schedule an appointment until the veteran was seen was 113 days. According to VHA officials responsible for monitoring wait times, there are no VHA policies requiring that they measure and monitor the total amount of time that newly enrolled veterans experience while waiting to be seen by a primary care provider. Instead, VHA’s policy is to use data that measure the timeliness of appointments based on veterans’ preferred dates. Although there is no policy requiring that they measure the total time veterans wait to be seen, officials from one VISN told us that they measure this period of time, as it may provide valuable insights into newly enrolled veterans’ experiences in trying to obtain care from VHA. During our discussions with these VISN officials, they expressed concern that monitoring veterans’ wait times using the preferred date is too limited, because it does not capture the full wait times veterans experience. Since February 2015, officials from this VISN have instructed each of the medical centers they oversee to audit a sample of 30 primary care, specialty care, and mental health appointments for new patients, including newly enrolled veterans, for a total of 90 appointments each month. As part of this audit, medical center officials record the dates veterans initially requested VA contact them to schedule appointments, the dates appointments were created, and the dates veterans were seen by providers. 
VISN officials use the information to prepare a monthly summary report which tracks a variety of information, including the percentage of appointments for which the veterans’ overall wait was more than 30 days. According to data from the October 2015 audit, 24 percent of veterans waited more than 30 days from their initial request until they were seen by a provider. Officials indicated that by analyzing trends on these and other data, they will be able to identify whether factors such as enrollment issues or problems contacting newly enrolled veterans are impacting overall wait times. Officials indicated that it is time-consuming to perform these audits, and it would be helpful if VHA had a centralized system which would enable them to electronically compile the data. During our review we also found that under the Health Resource Center’s Welcome to VA program, officials are developing a centralized electronic system to track various dates related to newly enrolled veterans, including the date each veteran applied for VHA health care. Once applications for benefits are approved, staff in the Health Resource Center call centers contact each newly enrolled veteran, and ask if that veteran wants to begin receiving health care at VHA. For veterans who indicated on their applications that they wanted to be contacted to schedule an appointment, their requests are confirmed through these calls, and the dates of the requests on the applications are recorded in the Health Resource Center system, as well as the dates the veterans were contacted. For veterans who did not indicate they wanted to be contacted on their applications, but tell Health Resource Center staff during the calls that they want care, the dates of contact are documented as their initial requests for care. Officials indicated that it is important to begin tracking from the onset of veterans’ requests, because that is when they told VA they needed care. Officials indicated that since July 2015, they have been piloting this Welcome to VA data collection and tracking effort with one VISN, and hope to expand this effort across the VHA system during 2016. They further indicated that they have been coordinating with the VHA office responsible for monitoring access, and hoped their data could be integrated into VHA’s routine monitoring of veterans’ wait times. Ongoing problems continue to affect the reliability of wait-time data, including for primary care, used by VHA, VISN, and VA medical center officials for monitoring and oversight. Our previous work in 2012, as well as that of VA and the VA OIG in 2014, has shown that VHA wait-time data are unreliable and prone to error and inconsistent interpretation. Among other things, we found in December 2012 that medical centers were not implementing VHA’s scheduling policies in a consistent manner, which led to unreliable wait-time data. Although VHA has taken steps since then to improve the reliability of its wait-time data, including ensuring that scheduling staff complete required training, we found that VHA schedulers were continuing to make errors in recording veterans’ preferred dates, and thus data reliability problems continue to hinder effective oversight. During our review of appointment scheduling for 120 newly enrolled veterans, we found that schedulers in three of the six VA medical centers included in our review had made errors in recording veterans’ preferred dates when making appointments. Specifically, we found 15 appointments for which schedulers had incorrectly revised the preferred dates.
In these cases, we recalculated the appointment wait time based on what should have been the correct preferred dates, according to VHA policy, and found that the wait-time data contained in the scheduling system were understated. (See table 5.) We found that schedulers incorrectly revised patients’ preferred dates to later dates, inconsistent with VHA policy, under two scheduling scenarios: 1. Medical center primary care clinics canceled appointments, and when those appointments were rescheduled, schedulers did not always maintain the original preferred dates in the system, but updated them to reflect new preferred dates recorded when the appointments were rescheduled. This is not consistent with VHA policy, which indicates that if a clinic cancels an appointment, the original preferred date should be maintained in the system. 2. Preferred dates initially recorded when placing veterans on the electronic wait list were incorrectly revised to later dates when appointments became available and were scheduled. This included revising preferred dates to the same dates as the scheduled appointments. This is also inconsistent with VHA policy, which indicates that the veterans’ preferred dates recorded at the time of entry on the electronic wait list should not be changed. We confirmed our understanding of this policy with officials from one of the VISNs, and discussed these cases with VA medical center officials, who indicated that they would need to provide additional training to schedulers to ensure compliance with VHA’s scheduling policies. We also found in our review of medical records that, of the 120 veterans who saw providers, 65 veterans (54 percent) had appointments with a zero-day wait time recorded in the scheduling system. VHA officials indicated that appointments with wait times of zero days are a potential indicator of scheduling errors. Based on our review of medical records for these veterans, 13 of the appointments with zero-day wait times were incorrect because schedulers had revised the preferred dates. In addition, officials from five of the six VA medical centers in our review told us they continue to find through their scheduling audits that schedulers are incorrectly recording preferred dates. Officials from each of the six medical centers explained that they periodically audit scheduled appointments to help ensure schedulers are complying with scheduling policies. Officials from these medical centers indicated that a key focus of the audits is to assess whether schedulers are correctly recording the preferred date when making appointments and whether wait times are being calculated correctly. For example, officials from one medical center said they audited nearly 1,200 appointments between January and June 2015, and identified 205 appointments for which schedulers incorrectly recorded the veteran’s preferred date. Officials indicated that based on these results, scheduling supervisors provided training to those schedulers who made the errors. Since July 2014, VHA has issued a revised interim scheduling directive and numerous individual memos to clarify and update the scheduling policy, but has not yet published a comprehensive policy that incorporates all of these changes. Officials from four of the six VISNs in our review indicated that the way VHA has communicated revised scheduling policies and updates to medical centers has been ineffective and may be contributing to continued scheduling errors.
They indicated that high turnover among schedulers and the lack of an updated standardized scheduling policy make it more difficult to train schedulers and to direct these staff to current policy, which increases the likelihood of errors. Federal internal control standards call for management to clearly document, through management directives or administrative policies, significant events or activities—which in this instance would include ensuring that scheduling policies are readily available and easily understood—and to use and communicate, both internally and externally, quality information to achieve its objectives. VHA officials acknowledged that they are aware of frustration among medical center staff, and that they have been working over the past 18 months to develop an updated and comprehensive scheduling policy. Officials indicated that their current target is to issue a revised policy sometime in 2016. To help VA medical centers and VISNs identify scheduling problems, in January 2015, VHA implemented its scheduling trigger tool, which is designed to provide medical center and VISN officials with an early warning that scheduling problems may be occurring. According to VHA officials, the tool uses statistical analysis software to review appointment data from all medical centers in order to detect potential erroneous scheduling practices, including those that deviate from VHA policies. For example, it assesses whether medical center schedulers are accurately documenting patients’ preferred dates and whether they are using the electronic wait list correctly for new patients. The tool assesses each medical center’s scheduling performance and automatically alerts medical center and VISN leadership if a medical center is performing in the bottom 20 percent. According to VHA officials, use of the tool has prompted many requests for assistance, and they have provided additional scheduler training. VHA has implemented two system-wide efforts designed to offer veterans more timely access to primary care: the Veterans Choice Program, created through the Choice Act; and an initiative to increase primary-care hours. In addition to the VHA-wide initiatives aimed at improving access, officials from the VA medical centers in our review also reported implementing several local efforts to improve veterans’ timely access to primary care appointments. (See table 6.) Specifically, officials from all six medical centers reported reconfiguring or expanding clinic space. For example, officials at two medical centers stated that they are reconfiguring their primary care clinic’s space to accommodate additional providers and other staff without having to lease additional space. Officials from another medical center told us they were expanding clinic space by opening several additional community-based outpatient clinics through emergency lease agreements, in addition to beginning construction of new clinic space. Further, officials from five medical centers in our review reported hiring additional providers or creating additional positions. For example, officials at one medical center stated that since 2013 they have hired 20 new full-time providers and 18.5 full-time equivalent nurses.
Additionally, they created a new position—a “gap” provider who is a doctor, nurse practitioner, or nurse—that allows flexibility to cover short-term leave, such as sick or annual leave, or longer-term absences, such as the gap between one provider leaving and a new provider coming on board. In practice, the medical center shifts gap providers from one location to another as needed, enabling the medical center to minimize backlogs that may arise due to staffing shortages and unanticipated provider absences. Currently, this medical center has seven gap providers in primary care. Similarly, two other medical centers reported using flexible providers who work across several clinic locations to improve access to primary care for veterans. Finally, officials from three of the medical centers included in our review reported developing technological solutions to improve access to timely primary care appointments. These solutions included increasing the use of telehealth and secure messaging to improve the convenience and availability of primary care appointments. For example, officials from one of the medical centers in our review said providers are using secure messaging to communicate with patients and reduce the need for in-person encounters, which they said helps free up appointments for other patients. Providing our nation’s veterans with timely access to primary care is a critical responsibility of VHA. As primary care services are often the entry point to the VA health care system for newly enrolled veterans, the ability to access primary care and establish a relationship with a VHA provider can be instrumental in the ongoing management of a veteran’s overall health care needs. Although VHA has processes for identifying those veterans who have requested VA contact them to schedule appointments, our review of a sample of newly enrolled veterans revealed that VA medical centers did not always provide that care until several months after veterans initially indicated interest in obtaining it, if at all. In several cases, newly enrolled veterans were never contacted to schedule appointments, due to medical center staff failing to comply with VHA policies for scheduling such appointments or medical center staff being unaware of veterans’ requests. In the absence of consistent adherence by medical center staff to VHA scheduling processes and policies, veterans may continue to experience delays in accessing care. To help oversee veterans’ access to primary care, officials at VHA’s central office, medical centers, and VISNs rely on measuring, monitoring, and evaluating the amount of time it takes veterans to be seen by a provider. The data currently being used to evaluate newly enrolled veterans’ access to primary care, however, are limited because they do not account for the entire amount of time between veterans’ initial requests to be contacted for appointments and being seen by primary care providers. This is because the method VHA uses to measure the appointment wait times for newly enrolled veterans does not begin at the point at which veterans initially request that VA contact them to schedule appointments when applying for VHA health care, but rather begins when VA medical center staff contact veterans and record the veterans’ preferred dates.
Consequently, data used for monitoring and oversight do not capture the time newly enrolled veterans wait prior to being contacted by a scheduler, making it difficult for officials to effectively identify and remedy scheduling problems that arise prior to making contact with veterans. Recognizing limitations in monitoring and oversight of access data based on veterans’ preferred dates, some system-wide and local efforts are being developed and implemented to broaden data collection and oversight of newly enrolled veterans’ access to primary care; such efforts could have applicability across the entire VHA system. Ongoing scheduling problems continue to affect the reliability of wait-time data, including for primary care. Our previous work has shown that VHA wait-time data are unreliable due, in part, to medical centers not implementing VHA’s scheduling policies consistently. VHA central office officials have responded to scheduling problems throughout the VHA system by issuing several individual memorandums to clarify scheduling policies. However, VHA’s piecemeal approach in implementing these policies may not be fully effective in providing schedulers with the comprehensive guidance they need to consistently adhere to scheduling policies or providing the reliable data officials need for monitoring access to primary care. Our review of medical records for a sample of veterans found that scheduling errors continue, diminishing the reliability of data officials use for monitoring the timeliness of appointments by understating the amount of time veterans actually wait to see providers. Officials at several of the VA medical centers also continue to uncover scheduling errors through audits, and VISN officials attribute the errors, in part, to the lack of an updated comprehensive scheduling policy. While VHA central office officials are working on finalizing an updated scheduling policy, they currently have no definitive issuance date. Until a comprehensive scheduling policy is finalized, disseminated, and consistently followed by schedulers, the likelihood for scheduling errors will persist. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following three actions: (1) Review VHA’s processes for identifying and documenting newly enrolled veterans requesting appointments, revise as appropriate to ensure that all veterans requesting appointments are contacted in a timely manner to schedule them, and institute an oversight mechanism to ensure VA medical centers are appropriately implementing the processes. (2) Monitor the full amount of time newly enrolled veterans wait to be seen by primary care providers, starting with the date veterans request they be contacted to schedule appointments. This could be accomplished, for example, by building on the data collection efforts currently being implemented under the “Welcome to VA” program. (3) Finalize and disseminate a comprehensive national scheduling directive, which consolidates memoranda and guidance disseminated since July 2014 on changes to scheduling processes and procedures, and provide VA medical center staff appropriate training and support to fully and correctly implement the directive. We provided VA with a draft of this report for its review and comment. VA provided written comments, which are reprinted in appendix II. In its written comments, VA concurred with all three of the report’s recommendations, and identified actions it is taking to implement them. 
As arranged with your office, unless you publicly disclose the contents earlier, we plan no further distribution of this report until 24 days after the date of this report. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our prior work has found weaknesses in the Department of Veterans Affairs’ (VA) Veterans Health Administration’s (VHA) ability to effectively oversee timely access to health care for veterans. Specifically, we found that VHA did not have adequate data and oversight mechanisms in place to ensure veterans receive timely primary and specialty care, including mental health care. Since 2012, we have issued several reports and made recommendations to help ensure VHA has effective policies and reliable data to carry out its oversight. See table 7 for our previous recommendations and the status of their implementation. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Emily Binek; David Lichtenfeld; Vikki L. Porter; Brienne Tierney; Ann Tynan; and Emily Wilson made key contributions to this report.
Primary care services are often the entry point for veterans needing care, and VHA has faced a growing demand for outpatient primary care services over the past decade. On average, 380,000 veterans were newly enrolled in VHA's health care system each year in the last decade. GAO was asked to examine VHA's efforts to provide timely access to primary care services. This report examines, among other things, (1) newly enrolled veterans' access to primary care and (2) VHA's related oversight. GAO interviewed officials from six VA medical centers selected to provide variation in factors such as geographic location, clinical services offered, and average primary care wait times; reviewed a randomly selected, non-generalizable sample of medical records for 180 newly enrolled veterans; and interviewed VHA and medical center officials on oversight of access to primary care. GAO evaluated VHA's oversight against relevant federal standards for internal control. GAO found that not all newly enrolled veterans were able to access primary care from the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA), and others experienced wide variation in the amount of time they waited for care. Sixty of the 180 newly enrolled veterans in GAO's review had not been seen by providers at the time of the review; nearly half were unable to access primary care because VA medical center staff did not schedule appointments for these veterans in accordance with VHA policy. The 120 newly enrolled veterans in GAO's review who were seen by providers waited, on average across the six medical centers, from 22 days to 71 days from their requests that VA contact them to schedule appointments to when they were seen, according to GAO's analysis. These time frames were impacted by limited appointment availability and weaknesses in medical center scheduling practices, which contributed to unnecessary delays. VHA's oversight of veterans' access to primary care is hindered, in part, by data weaknesses and the lack of a comprehensive scheduling policy. This is inconsistent with federal internal control standards, which call for agencies to have reliable data and effective policies to achieve their objectives. For newly enrolled veterans, VHA calculates primary care appointment wait times starting from the veterans' preferred dates (the dates veterans want to be seen), rather than the dates veterans initially requested VA contact them to schedule appointments. Therefore, these data do not capture the time these veterans wait prior to being contacted by schedulers, making it difficult for officials to identify and remedy scheduling problems that arise prior to making contact with veterans. Further, ongoing scheduling errors, such as incorrectly revising preferred dates when rescheduling appointments, understated the amount of time veterans waited to see providers. Officials attributed these errors to confusion by schedulers, resulting from the lack of an updated standardized scheduling policy. These errors continue to affect the reliability of wait-time data used for oversight, which makes it more difficult to effectively oversee newly enrolled veterans' access to primary care. GAO recommends that VHA (1) ensure veterans requesting appointments are contacted in a timely manner to schedule one; (2) monitor the full amount of time newly enrolled veterans wait to receive primary care; and (3) issue an updated scheduling policy. VA concurred with all of GAO's recommendations and identified actions it is taking to implement them.
VA provides care to veterans with mental health needs through its 150 VAMCs, which may include both specialty mental health care settings—including mental health clinics—and other settings that may provide mental health services but focus primarily on other types of care, such as primary care. VA has implemented a program to co-locate mental health care providers within primary care settings in an effort to promote effective treatment of common mental health conditions in the primary care environment while allowing mental health specialists to focus on veterans with more severe mental illnesses. According to VA, the prevalence of MDD in primary care settings among veterans being treated through VA is higher than that among the general population. MDD is characterized by the presence of depressed mood or loss of interest or pleasure along with other symptoms for a period of at least 2 weeks that represent a change in previous functioning. VA has policies and guidance in place related to treating veterans with MDD. For example, the Uniform Mental Health Services in VA Medical Centers and Clinics handbook (Handbook), which defines VA’s minimum clinical requirements for mental health services, requires that VA facilities provide evidence-based treatment through the administration of medication, when indicated, consistent with the MDD CPG. The CPG is guidance intended by VA to reduce current practice variation between clinicians and provide facilities with a structured framework to help improve patient outcomes. The MDD CPG provides evidence-based recommendations as guidance for clinicians who provide care for veterans with MDD, and it includes approximately 200 recommendations to provide information and assist in decision making for clinicians who provide care for adults with MDD. For example, the CPG recommends that standardized assessments of depressive symptoms, such as the nine-item Patient Health Questionnaire (PHQ-9), should be used at the initial assessment of MDD symptoms, to monitor treatment response at 4-6 weeks after initiation of treatment, after each change in treatment, and periodically thereafter until full remission is achieved. Assessment of treatment response is effective 4-6 weeks after initiation of treatment, making timely follow-up visits an important part of clinicians’ ability to assess whether the current treatment plan is effective or should be modified. According to the MDD CPG, veterans with MDD treated with antidepressants should be closely observed, particularly at the beginning of treatment and following dosage changes, to maximize veterans’ recovery and to mitigate any negative treatment effects, including worsening of depressive symptoms. The CPG should not take the place of the clinician’s clinical judgment. Beginning in June 2006, VA implemented several initiatives aimed at suicide prevention, including appointing a National Suicide Prevention Coordinator, developing data systems to increase understanding of suicide among veterans and inform VA suicide prevention programs, and instituting suicide prevention programs in VAMCs throughout the country. Additionally, VA Central Office established the Center of Excellence for Suicide Prevention and the Veterans Crisis Line in 2007. The Center of Excellence collects VA suicide prevention program data, which provides information on veteran suicide completions and suicide attempts for veterans receiving VA care, as well as those veterans not receiving VA care.
VA’s Veterans Crisis Line provides toll-free, confidential support 24 hours per day for veterans, their families, and their friends through phone, online chat, or text message. In fiscal year 2013, the Veterans Crisis Line fielded approximately 287,000 calls, 54,800 online chats, and 11,300 text messages. As part of VAMCs’ suicide prevention programs, the Handbook requires each VAMC to have a suicide prevention coordinator whose responsibilities include establishing and maintaining a list of veterans assessed to be at high risk for suicide; monitoring these high-risk veterans; responding to referrals from staff and the Veterans Crisis Line; collaborating with community organizations and partners; training staff members who have contact with veterans at the VAMC, community organizations, and partners; and collecting and reporting information on veterans who die by suicide and who attempt suicide. See appendix II for more information on VAMCs’ tracking of veterans at high risk for suicide. VA Central Office uses several mechanisms to collect data on veteran suicides to help improve its suicide prevention efforts. One such mechanism includes data submitted by suicide prevention coordinators at VAMCs on known veterans who die by suicide. In December 2012, VA Central Office began a national quality improvement initiative to collect demographic, clinical, and other related information on veteran suicides, with the aim of identifying information that VA Central Office can use to develop policy and procedures to help prevent future veteran deaths. This initiative, the Behavioral Health Autopsy Program (BHAP), replaced previous VA Central Office requirements to collect data on completed suicides. VA Central Office officials explained that they transitioned to the BHAP initiative to collect more systematic and comprehensive information about suicides, to incorporate interviews of family members of those who die by suicide, and to collect more contextual information. According to VA, the BHAP quality initiative has been adapted from a traditional psychological autopsy research framework that emphasizes the importance of information from outside sources as well as from those within the health care setting. The BHAP initiative is being implemented by VA in four phases: Phase 1—Standardized chart reviews: VAMCs’ suicide prevention coordinators are required to complete standardized chart reviews for all veterans’ suicides known to VAMC staff and reported on or after October 1, 2012. These reviews include specific information on a veteran’s utilization of VA health care services, as well as a veteran’s mental health diagnoses and risk factors for suicide. VA Central Office has instructed suicide prevention coordinators to use all available information, including VA medical records and information from a veteran’s family members, to complete the chart review. These reviews are submitted to VA Central Office through completion of a BHAP Post-Mortem Chart Analysis Template (BHAP template), and VA Central Office has provided suicide prevention coordinators with a BHAP Guide on how to complete the fields in the BHAP template. VA Central Office requires VAMCs to submit the BHAP template within 30 days of VAMC staff becoming aware of a veteran’s death by suicide.
Phase 2—Interviews with family members: In fiscal year 2013, VA Central Office began conducting interviews with family members of veterans who have died by suicide to obtain information on suicide risks, barriers to care, and suggestions for new programs to prevent suicide. Phase 3—Clinician questionnaire: This phase, which has not yet been implemented, will include an interview with the last provider that saw the veteran prior to his or her death. VA officials stated that there are no plans to begin this phase within calendar year 2014, and they have not established a future time table for implementing this phase. Phase 4—Public record review: This phase, which has also not been implemented, will be used to locate public records to identify stressors in the veteran’s life, such as a bankruptcy or divorce. Officials stated that there are no plans to begin this phase within calendar year 2014, and they have not established a future time table for implementing this phase. Since beginning the BHAP initiative, VA Central Office has internally issued two interim reports on data and trends from the submitted BHAP templates as part of Phase 1. The reports include information for veterans who died by suicide, both with and without a history of VA health care service utilization. Analyses of data on demographic characteristics, case information, period of service, and risk and protective factors were included for all veterans. Data on clinical characteristics and indicators of increased risk at the time of the veteran’s last contact with a VA provider were limited to veterans that utilized VA health care services. In addition to the BHAP initiative, VA also requires VAMCs to collect and submit data on suicide attempts and completions through the following mechanisms. Suicide Prevention Application Network (SPAN): Through SPAN, VAMCs submit information to VA Central Office on the number of veterans that completed suicide, the number of suicide attempts, and indicators of suicide prevention efforts, such as outreach events conducted each month by suicide prevention coordinators. Suicide behavior reports: VAMC clinicians must complete a suicide behavior report when they learn that a veteran attempted or completed suicide and add that report to the respective veteran’s medical record. This report includes the date and time of the event, and other observations related to the suicide attempt or completed suicide. According to VA policy, information from suicide behavior reports is used for National Patient Safety reporting requirements and to populate SPAN. Root cause analyses: Patient safety managers at VAMCs complete root cause analyses for suicide attempts and completed suicides under certain circumstances, such as when the attempt occurs at the VAMC during an inpatient stay or within 72 hours of being discharged from inpatient care. Root cause analyses are used to identify the factors that contributed to adverse events or close calls and any steps VAMCs could implement to prevent similar events in the future. See appendix III for how VAMC and VISN officials we interviewed told us they have utilized data related to suicides and suicide behavior. Data for fiscal years 2009 through 2013 show that about 10 percent of veterans who received health care services through VA were diagnosed with MDD, and of those, 94 percent were prescribed an antidepressant. 
However, due to diagnostic coding discrepancies we found in a sample of veterans’ medical records, VA’s data may not accurately reflect the prevalence of MDD among veterans. Based on our analysis of VA data from veterans’ medical records and administrative sources, 532,222 veterans—about 10 percent of veterans who received health care services through VA—had a diagnosis of MDD from fiscal years 2009 through 2013. Among those veterans, the majority (60 percent) were 35 to 64 years of age. Most (86 percent) were not veterans of the recent conflicts in Iraq and Afghanistan. In addition, most of these veterans were male (87 percent), and the highest proportion was white (68 percent) and non-Hispanic (87 percent). See table 1 for a summary of characteristics of veterans who had a diagnosis of MDD from fiscal years 2009 through 2013. We also found that about 499,000 of the 532,222 (94 percent) veterans who had a diagnosis of MDD from fiscal years 2009 through 2013 were prescribed at least one antidepressant. Of those veterans, the majority (about 73 percent) were dispensed a 12-week supply of an antidepressant at the start of an MDD episode. Fewer veterans (about 58 percent) were dispensed a 6-month supply of an antidepressant over the course of their treatment. Receiving a 12-week supply of an antidepressant can be important for addressing depressive symptoms initially, while continued treatment after remission of depressive symptoms, such as receiving a 6-month supply of an antidepressant, is associated with a decreased risk of relapse, according to the CPG. Based on our review of the documentation in 30 veterans’ medical records from VA’s medical record system, we found that over one-third (11) had diagnostic coding discrepancies. Specifically, these 11 veterans had at least one encounter where the clinician documented a diagnosis of MDD in the veteran’s medical record, but the clinician did not code the encounter accordingly. Instead, the clinician coded the encounter as “depression not otherwise specified,” a less specific code. According to the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, depression not otherwise specified is to be used to code disorders with depressive features that do not meet criteria for MDD and other depressive disorders, or to indicate depressive symptoms about which there is inadequate or contradictory information. VA’s data on the number of veterans with MDD are based on the diagnostic codes associated with patient encounters, so the discrepancies we found indicate that the number of veterans with MDD is most likely not fully reflected in these data. Accurately identifying the veteran population with MDD is critical to assessing Department performance in treating veterans in accordance with the MDD CPG and measuring health outcomes for these veterans. VA Central Office reviewed the 11 medical records where we found coding discrepancies and agreed that the encounters were not coded accurately. According to a VA Central Office official, the encounters we identified were corrected in the veterans’ medical records. According to VHA Handbook 1907.03 - Health Information Management Clinical Coding Program Procedures, VAMCs are required to monitor the accuracy of coding and provide training as necessary in order to help ensure accurate coding.
VAMC officials from all six sites in our review said that monthly or quarterly coding audits are conducted at their facilities and the findings of those audits are reviewed and action is taken to correct issues with the accuracy and reliability of coding. However, at five of the six VAMCs in our review, those audits focus on billable encounters—that is, encounters that are billed to a third party, such as private health insurance plans—in part because of the potential opportunity for facilities to collect third-party revenue from these encounters. Among the 11 veterans’ medical records where we identified coding discrepancies, all of the discrepancies were associated with outpatient, nonbillable encounters, the coding of which, according to a VA Central Office official, is not typically conducted by VAMC medical coders—staff who are trained specifically in medical coding terminology and standards and are responsible for coding inpatient admissions and discharges—or subject to coding audits. Diagnostic coding in VA’s medical record system for outpatient encounters is typically performed by clinicians. VISN officials and VA medical center clinicians we interviewed said that clinicians do not place a lot of importance on selecting a more precise diagnostic code because it does not significantly change the patient care that is provided or the type of treatment prescribed. In addition, in the interest of expediency, clinicians may select a previously used or frequently used diagnostic code for depression rather than take the time to search for a more precise code. For example, within the medical record, clinicians may access a list of previous or current diagnoses applicable to the veteran (commonly referred to as the “problem list”) or a list of frequently used diagnostic codes in the facility. According to VISN and VAMC officials, the problem list is not typically kept up to date by clinicians and as a result, MDD may not be listed and readily available for clinicians to select. As a result of our review, VA Central Office officials reported that they had discovered a software mapping error in VA’s medical record system where the selection of MDD as a diagnosis when using a keyword search function may result in the selection of the depression not otherwise specified diagnostic code by mistake. Officials stated that they anticipate that the software error—which applies to all VAMCs—would be fixed by November 2014. Officials also stated that the solution would apply only to those encounters coded from that point going forward and would not retroactively correct any coding discrepancies that may have occurred before the error was addressed. VA Central Office officials could not tell us if any of the 11 coding discrepancies that we identified were a result of this software error. Officials at most of the six VISNs we spoke with do not conduct reviews of medical coding done by clinicians. However, as a result of our inquiry, one VISN we interviewed reported in the late spring of 2014 that it had extracted data on MDD-related encounters and noticed the high use of depression not otherwise specified coding for the facilities within its VISN, as well as all VAMCs nationwide. 
Officials from this VISN said the lack of coding specificity has implications for being able to accurately examine health outcomes related to the treatment of depression and that they are planning to further analyze encounter data within their VISN to determine the appropriateness of diagnostic coding based upon medical record documentation. As of September 2014, the VISN had not reported any additional steps to address this issue. Based on the three CPG recommendations we selected, veterans in our review with MDD who have been prescribed antidepressants did not always receive care as recommended in the MDD CPG. Additionally, VA does not know the extent to which veterans with MDD who have been prescribed antidepressants are receiving care as recommended in the CPG, and VA Central Office has not developed mechanisms to determine the extent to which mental health care delivery conforms to the recommendations in the MDD CPG. We found that almost all of the 30 veterans with MDD who have been prescribed antidepressants included in our review did not receive care in accordance with the three MDD CPG recommendations we reviewed. VA policy states that antidepressant treatment must be consistent with VA’s current, evidence-based CPG. However, VA Central Office mental health officials were unable to tell us what it means to provide care that is consistent with the CPG, because, while a veteran’s treatment should be informed by the CPG recommendations, determining the extent to which the treatment is consistent with CPG recommendations would need to be done on a veteran-by-veteran basis. The CPG is intended to reduce practice variation and help improve patient outcomes, but without an understanding of the extent to which veterans are receiving care that is consistent with the CPG, VA may be unable to ensure that it meets the intent of the CPG and improves veteran health outcomes. Through our review of 30 medical records from the six VAMCs we selected, we found examples of deviations from the CPG recommendations for almost all veterans in our review; a table in our report depicts the specific recommendations we reviewed and the number of veterans that did not receive care consistent with the corresponding CPG recommendation. VA does not know the extent to which veterans are receiving care consistent with the MDD CPG. While deviations from recommended practice may be appropriate in many cases due to clinician discretion, VA has not fully assessed whether these examples are acceptable deviations from the CPG. According to the federal internal control standard for risk assessment, agencies should comprehensively identify risks, assess the possible effects, if any, and determine what actions should be taken to mitigate any significant risks. VA Central Office has not developed a mechanism to fully identify deviations that could impede veterans’ recovery that may result when VAMCs do not provide care consistent with the MDD CPG. VA Central Office officials explained that the CPG recommendations are guidelines that clinicians can use to inform and guide clinical decision making. VA officials told us that VA cannot require the use of all recommendations in all cases; rather, CPG recommendations should be applied on a case-by-case basis based on the needs of the veteran and with clinician judgment.
One official also said it would be difficult to check every CPG recommendation to ensure that clinicians are providing care consistent with the CPG, but stated that VA could identify for review those recommendations that may put veterans’ health at risk if not followed. However, with no mechanism to assess whether the care provided is consistent with the CPG, VA is unable to ensure that deviations from recommended care are identified. While monitoring full compliance with CPG recommendations may be difficult, there are nevertheless ways to address the issue. In fact, VA Central Office and some VAMCs have implemented mechanisms to determine the extent to which veterans are receiving care that is consistent with some of the CPG recommendations; however, these mechanisms do not fully assess all deviations that could impede a veteran’s recovery, as illustrated by the following. Officials at one VAMC we visited told us that they use a clinical tool to track veterans being treated for mental health conditions. The mental health tool includes 67,349 unique patients, and an official explained that they can run queries of the clinical tool—for example, for veterans participating in substance abuse treatment who did not return for a drug screen—by pulling both process and outcome variables including diagnostic codes, lab results, and medication lists. Another mechanism, the Behavioral Health Laboratory (BHL), is designed to manage the behavioral health needs of veterans through telephone or in-person visits. As part of the system, clinicians can use a structured interview—including a PHQ-9—that assesses veterans’ mental health symptoms in a way that is consistent with the CPG recommendation for follow-up assessment. Although the BHL can be used to help ensure care is provided consistent with a few of the recommendations in the CPG, the BHL is not used to monitor all veterans prescribed antidepressants. Generally, VAMCs use the BHL to monitor veterans being treated for mental health conditions, such as MDD, in primary care clinics, and to participate, veterans can be referred by their primary care clinician or request to participate. We found that demographic, clinical, and other data submitted to VA Central Office on veteran suicides were not always completely or correctly entered into the BHAP Post-Mortem Chart Analysis Templates—a mechanism by which VA Central Office collects veteran suicide data from VAMCs’ review of veterans’ medical records. (Figure 1 shows the number of BHAP templates we found with incomplete or inaccurate data.) Moreover, VAMCs interpreted and applied instructions for completing the BHAP templates differently. We also found that most VAMCs and VISNs we reviewed and VA Central Office did not review suicide data for accuracy. We found that over half of the 63 BHAP templates we examined had incomplete information. The data either lacked veteran enrollment information, or other specific fields were omitted. Moreover, the data were lacking entirely for certain known veteran suicides. Incomplete data limit VA Central Office’s ability to identify information that can be used to develop policy and procedures to prevent veteran deaths. Lack of veteran enrollment information. Approximately one-third (23) of the BHAP templates we reviewed did not indicate whether the veteran was enrolled in VA health care services, even though the veteran had a VA medical record.
Eight templates submitted by three of the VAMCs in our review did not indicate that the veteran had received VA services, even though these VAMCs provided care to these veterans. Fifteen BHAP templates submitted by two VAMCs in our review originally indicated that the veteran was receiving VA care; however, when we reviewed the submitted BHAP templates we received from VA Central Office for the same 15 veterans, the BHAP templates did not indicate that the veteran was being seen in the VA. VA Central Office used enrollment information when compiling the most recent BHAP interim report, which is part of VA Central Office's quality improvement efforts for its suicide prevention program. Specifically, VA Central Office included clinical data in the BHAP interim report only for veterans utilizing VA services. Therefore, clinical data for the 23 veterans we identified would not be included in the interim report. Missing one-third of the data from its analysis, as was the case in our sample, could have a detrimental effect on the trends VA Central Office reports and uses to improve its suicide prevention efforts. Requested data were omitted: Forty of the 63 BHAP templates we reviewed included various data fields where no response was provided, resulting in incomplete data. For example, for 19 templates, VAMC staff did not enter requested data as to whether the veteran had some or all of 15 active psychiatric symptoms within the 12 months prior to the veteran's date of death. Also, 9 templates did not include an answer for the number of previous suicide attempts by the veteran. Officials from one VAMC told us that they left this field blank if the veteran did not have any previous suicide attempts, rather than entering a "0," even though the BHAP Guide states that officials should enter the appropriate number of previous suicide attempts. Officials at one VAMC told us that fields are sometimes left blank if the standardized answers available on the BHAP template are not adequate; that is, the answer for that veteran does not fit into one of the answers provided on the BHAP template. Officials at two VAMCs stated that it is sometimes easy to overlook fields in the BHAP template, resulting in unanswered questions. Filling in all fields in the BHAP template, rather than leaving the field blank, is important because some blank fields are counted as "missing" or "no" in the analysis conducted by VA Central Office for the BHAP interim reports. This, in turn, could affect the suicide trends reported. For example, for the number of previous suicide attempts, blank fields are counted as "missing" in the BHAP interim report, rather than "0" previous suicide attempts as officials from one VAMC intended. In other cases, such as for psychiatric symptoms, missing fields are counted as "no," meaning that the veteran did not have these symptoms. In at least one BHAP template, the answer for the psychiatric symptom of isolation was left blank and would therefore be counted as negative in the interim report, despite the fact that officials from that VAMC told us that the veteran did have this symptom. See figure 2, which provides an excerpt of the fields from the BHAP template in which VAMCs provided incomplete data. Data were lacking entirely for certain known veteran suicides: We found that VAMCs did not always submit BHAP templates for all veteran suicides known to the facility, as required by the BHAP Guide. 
VA Central Office does not have a process in place to determine whether it is receiving the BHAP templates for all known veteran suicides. For example, one VAMC had completed 13 BHAP templates at the time of our site visit but had not submitted them; however, neither the VAMC nor VA Central Office was aware that these templates had not been submitted until after we requested them from VA Central Office. The suicide prevention coordinator at this VAMC told us that the BHAP templates were forwarded to another official at the VAMC, rather than being submitted through VA Central Office's process, and that the BHAP templates were never submitted. As a result of our inquiry, the VAMC submitted these templates to VA Central Office. In another example, officials at a different VAMC told us that, at the time of our site visit, they had recently begun completing and submitting BHAP templates, beginning with veteran suicides occurring in fiscal year 2014. VA Central Office officials told us that VAMCs can start submitting BHAP templates at any point, and officials are not requiring the VAMCs to go back and submit information on all suicides since October 1, 2012. However, this practice is contrary to VA policy, which states that VAMCs should submit BHAP templates for all suicides known to the facility and reported on or after October 1, 2012. Of the 63 BHAP templates we reviewed, we found numerous instances of inaccurate data submitted on BHAP templates, as illustrated by the following examples. Incorrect date of death: Six BHAP templates included a date of death that was incorrect based on information in the veteran's medical record. The differences between the dates of death in the veterans' medical records and the dates of death in the BHAP templates ranged from 1 day to 1 year. For example, one BHAP template indicated that the veteran died in the year after the veteran's actual date of death. Another BHAP template appeared to use the date the suicide behavior report was completed, rather than the veteran's actual date of death; the suicide behavior report was completed 69 days after the veteran's date of death. The accuracy of the date of death recorded in the BHAP template is important because it is used as a point of reference to calculate other fields, such as the number of mental health visits in the last 30 days. Incorrect number of mental health visits: Nine BHAP templates included the incorrect number of outpatient VA mental health visits in the last 30 days. For example, one BHAP template indicated that the veteran had five outpatient mental health visits, including three non-mental health visits that should not have been included in the total number of mental health visits for this veteran. Another BHAP template indicated the veteran had been seen once by a mental health provider in the last 30 days; however, we found in reviewing the medical records that this veteran had not been seen by a mental health provider during this time period. This veteran would be included in the BHAP interim report as having a mental health visit, and, as a result, VA's data would include an inaccurate count of the number of veterans with mental health visits in the last 30 days. Without accurate information, VA cannot determine whether policies or procedures need to be changed to ensure that veterans at high risk for suicide are being seen more frequently by a mental health provider to help prevent suicides in the future. 
See figure 3, which provides an excerpt of the fields from the BHAP template in which VAMCs provided inaccurate data. We found several situations where VAMCs interpreted and applied instructions for completing the BHAP templates differently, as illustrated in the following examples. We found inconsistencies in how different VAMCs arrived at answers provided in the BHAP templates. For example, one VAMC included a visit to an immunization clinic as the veteran’s final visit, while another VAMC did not include this type of visit, even though this was the last time the veteran was seen in person. The BHAP Guide indicates that the final visit should be the last time the veteran had in-person contact with any VAMC staff, but the BHAP Guide does not identify the different types of visits that should be counted. VA Central Office officials stated that a visit to an immunization clinic should be included as the final visit with the veteran. When VAMCs do not provide consistent data, VA Central Office will receive and use inconsistent data in preparing its trend reports, such as BHAP interim reports, which are intended to be used to improve suicide prevention efforts. We also found instances in which BHAP templates included information that did not conform to the instructions in the BHAP Guide on how to complete the BHAP medical record reviews. Last contact did not always represent the last time a VAMC official spoke with the veteran: The BHAP Guide instructions specify that the last contact recorded in the BHAP template should be the last recorded interaction with the veteran, which could be in person, through a phone call, or through email. Five of the 63 BHAP templates we reviewed did not indicate the last time an official spoke directly to the veteran. One BHAP template counted a phone call with a veteran’s spouse after the veteran’s death as the last contact with the veteran. The BHAP template also counted this phone conversation as an “in-person” interaction. The remaining four BHAP templates included a date for the last contact that was prior to the date for the veteran’s final in-person visit at the VAMC. In these instances, the veterans’ in-person visit should have been counted as the last contact. From this flawed information, VA would not be able to determine reliable trends for the amount of time between the last contact with the veteran and the veteran’s date of death for reports that it prepares, such as the BHAP interim report. Suicide prevention coordinator contact and referral not within BHAP time period: The BHAP Guide specifies that VAMCs should indicate in the BHAP template whether there was a suicide prevention coordinator contact or referral made within 3 months prior to the veteran’s date of death. In 3 of the 63 cases we reviewed, we found that the suicide prevention coordinators checked the box indicating that they saw the veteran or had a referral within 3 months of the veteran’s death. However, in each of these cases we found that the contact was made more than 3 months prior to the veteran’s death, so it should not have been counted. A suicide prevention coordinator from one VAMC said she was unaware of the time period requirement and a suicide prevention coordinator at another VAMC stated that time frames should be added to the BHAP template, rather than just included in the BHAP Guide. 
The BHAP interim reports include the number of veterans that had a suicide prevention coordinator contact or referral, and by including information on contacts or referrals that are outside the BHAP Guide time frame, these reports may be at risk of misreporting trends in this area. See figure 4, which provides an excerpt of the fields from the BHAP template in which VAMCs provided inconsistent data. VA policy and guidance state that the BHAP template should be completed for all suicides known to the facility, but at the five VAMCs we visited, these data were not always being reported. However, the policy and instructions do not explicitly state that veterans not being seen by VA should be included, and in the absence of this declaration, some VAMCs interpreted the instructions to mean that only veterans being seen by VA should be included in the data submitted. Therefore, two VAMCs have submitted data only for veterans being treated by VA, while the others include data on all known veteran suicides—whether they have been treated by VA or not. This further adds to the inconsistencies in the information that VAMCs submit on the BHAP templates. VA Central Office officials told us that BHAP templates should be completed both for veterans utilizing VA health care services and for those veterans not being seen in the VA, and that this requirement has been discussed at training sessions and during conference calls with suicide prevention coordinators. For example, during a suicide prevention conference in November 2013, a VA Central Office official informed participants that the BHAP template should be completed for all suicides reported through SPAN, which VA Central Office officials previously told us includes veterans that were not receiving VA care. The inconsistency in VAMC officials' understanding of which veterans should have a completed BHAP template results in inconsistent data being reported to VA Central Office. While VA was in the process of updating its suicide prevention coordinator manual, we brought this issue to VA's attention. In August 2014, VA made modifications to the manual indicating that VA is changing its policy—now requiring that the BHAP template be completed only for veterans receiving VA services. However, the guidance continues to be unclear on whether suicide prevention coordinators should complete BHAP templates for veterans not receiving VA care. We found that BHAP templates are not being reviewed by VA officials at any level for accuracy, completeness, and consistency. Therefore, our findings at five VAMCs could be symptomatic of a nationwide problem: other VAMCs may also be submitting incomplete, inaccurate, and inconsistent suicide-related information, and VA may not be getting the data it needs across the Department to make appropriate resource decisions and develop new policy. VA policy states that it is the VISN's and VAMC's decision whether to conduct reviews of BHAP data prior to submission to VA Central Office. With few exceptions, the VAMCs and VISNs we visited generally do not conduct data checks on the information submitted in the BHAP templates. Additionally, VA Central Office does not review the information for accuracy and completeness in the BHAP templates it receives. This approach is inconsistent with internal control standards for the federal government, which state that agencies should have controls over information processes, including procedures and standards to ensure the completeness and accuracy of processed data. 
Officials at one VAMC told us that VAMC staff compare the BHAP data and the veteran’s medical record prior to submitting the BHAP template to VA Central Office to ensure accuracy. In response to our review, another VAMC implemented a procedure to check the accuracy and completeness of their BHAP templates prior to submission. The procedure at this VAMC requires the suicide prevention coordinator and case manager to independently complete the BHAP template and compare their responses. The BHAP templates are then reviewed by the Assistant Mental Health Clinic Director prior to submission. We also found that VA lacks sufficient controls to ensure the quality of the existing BHAP data. For example, VA Central Office officials said there are no automated data checks to ensure the accuracy of data it uses for the BHAP interim report, such as checking to ensure that the date of last contact with the veteran that is recorded in the BHAP template is not after the veteran’s date of death. Although officials removed apparent duplicates in submitted BHAP templates by matching the veteran’s name and social security number while compiling the data for the most recent BHAP interim report, they do not conduct data checks to help identify some of the incomplete or inaccurate data we found in our review. Given the negative effects of MDD, it is important to provide timely, evidence-based treatment for veterans with MDD, and VA’s ability to monitor these veterans is critical to ensuring positive outcomes. However, our findings demonstrate that VA may not be fully aware of the population of veterans with MDD due to a lack of coding precision by clinicians. This can limit VA’s ability to assess the Department’s performance in treating veterans as recommended in the MDD CPG and in measuring health outcomes for veterans. Additionally, VA does not have mechanisms in place to ensure that the Department is able to identify deviations from CPG-recommended care and remedy those that could impede veterans’ recovery. Even if VA did have mechanisms in place, the coding discrepancies we identified would limit VA’s ability to extract accurate data on all veterans diagnosed with MDD, therefore hindering VA’s ability to determine the extent to which veterans are receiving care consistent with the CPG recommendations for MDD. The CPG recommendations are meant to improve veteran outcomes by providing maximum relief from the debilitating symptoms of MDD, and VA cannot ensure that the care veterans receive is consistent with those recommendations. The existence of incomplete, inaccurate, and inconsistent information submitted through VA’s BHAP templates limits the Department’s ability to accurately evaluate its suicide prevention efforts and identify trends in veteran suicides through the BHAP initiative. Specifically, data drawn from incomplete, inaccurate, and inconsistent BHAP templates limit the Department’s opportunities to learn from past veteran suicides and ultimately diminish efforts to improve its suicide prevention activities. VAMCs, VISNs, and VA Central Office generally lack a process to ensure that the data that are submitted and used by VA Central Office to identify trends in veteran suicides are complete, accurate, and consistent. Checking and verifying the data submitted to VA Central Office would help ensure that changes made to suicide prevention efforts by VAMCs, VISNs, and VA are based on actual trends in veteran suicides. 
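The kind of checking and verification discussed above can be illustrated with a short sketch. The example below shows how simple automated checks could flag the problems we observed in BHAP templates: blank required fields, a last-contact date recorded after the date of death, a suicide prevention coordinator contact counted even though it fell outside the 3-month window, and apparent duplicate templates for the same veteran. This is only an illustrative sketch using assumed, simplified field names; it does not represent VA's actual template structure or any existing VA process.

```python
from datetime import date

# Hypothetical, simplified field names for a submitted BHAP template.
REQUIRED_FIELDS = ["enrolled_in_va", "previous_suicide_attempts", "psychiatric_symptoms"]

def check_template(template):
    """Return a list of data-quality problems found in one template (a dict)."""
    problems = []

    # Completeness: blank required fields are counted as "missing" or "no"
    # downstream, so flag them for correction before submission.
    for field in REQUIRED_FIELDS:
        if template.get(field) in (None, ""):
            problems.append(f"blank field: {field}")

    dod = template.get("date_of_death")

    # Accuracy: the date of death anchors other fields (such as visits in the
    # last 30 days), so a last contact recorded after it is impossible.
    last_contact = template.get("date_of_last_contact")
    if dod and last_contact and last_contact > dod:
        problems.append("last contact recorded after date of death")

    # Consistency: a suicide prevention coordinator contact should be counted
    # only if it occurred within roughly 3 months (90 days) before death.
    spc_contact = template.get("spc_contact_date")
    if dod and spc_contact and template.get("spc_contact_counted") and (dod - spc_contact).days > 90:
        problems.append("coordinator contact counted but outside the 3-month window")

    return problems

def find_duplicates(templates):
    """Flag templates that appear to describe the same veteran (name plus SSN)."""
    seen, duplicates = {}, []
    for t in templates:
        key = (t.get("name", "").strip().lower(), t.get("ssn"))
        if key in seen:
            duplicates.append((seen[key], t))
        else:
            seen[key] = t
    return duplicates

# Example: one template with a blank field and an impossible last-contact date.
example = {"enrolled_in_va": "yes", "previous_suicide_attempts": "",
           "psychiatric_symptoms": "no", "date_of_death": date(2013, 5, 1),
           "date_of_last_contact": date(2013, 6, 1)}
print(check_template(example))
```

Checks of this kind could be run by a VAMC before submission or by VA Central Office on receipt; they would not substitute for clinical review, but they would catch the mechanical errors described above.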
Without clear VA Central Office instructions to guide how VAMCs and VISNs should complete BHAP templates and report suicide data, the validity of suicide data and the effectiveness of VA's actions will be hampered. We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following six actions: To more accurately estimate the prevalence of MDD and identify enrolled veterans with MDD, VA should (1) identify the extent to which there is imprecise diagnostic coding of MDD by further examining encounters with a diagnostic code of depression not otherwise specified, which could be incorporated into VAMCs' ongoing review of diagnostic coding accuracy, and (2) determine and address the factor(s) contributing to the imprecise coding based on the results of those examinations. For example, feedback and additional training could be provided to clinicians regarding the importance of diagnostic code accuracy, or VA's medical record could be enhanced to facilitate the selection of a more accurate diagnostic code. To ensure that veterans are receiving care in accordance with the MDD CPG, VA should (3) implement processes to review data on veterans with MDD prescribed antidepressants to evaluate the level of risk of any deviations from recommended care and remedy those that could impede veterans' recovery. To improve VA's efforts to inform its suicide prevention activities, VA should (4) ensure that VAMCs have a process in place to review data on veteran suicides for completeness, accuracy, and consistency before the data are submitted to VA Central Office, (5) clarify guidance on how to complete BHAP templates to ensure that VAMCs are submitting consistent data on veteran suicides, and (6) implement processes to review data on veteran suicides submitted by VAMCs for accuracy and completeness. We provided a draft of this report to VA for comment. In its written comments, reproduced in appendix IV, VA generally agreed with our conclusions and concurred with our recommendations. In addition, VA provided information on its plans for implementing each recommendation, with estimated completion dates in calendar year 2015. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report's date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Veterans Affairs; the VA Under Secretary for Health; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. To describe the characteristics of veterans diagnosed with MDD from fiscal years 2009 through 2013, we analyzed Department of Veterans Affairs (VA) and Department of Defense (DOD) data. (See table 3.) These data included information on veterans' demographic characteristics as well as clinical information on health care services and medications provided through VA. Veterans were classified as having a diagnosis of MDD if, in at least one fiscal year included in our review, they had two or more outpatient encounters or at least one inpatient hospital stay with a diagnosis of MDD. 
Specifically, we examined the following: Number of veterans diagnosed with MDD. We used a demographic file provided by VA to determine the number of veterans diagnosed with MDD. Characteristics of veterans diagnosed with MDD. We used demographic files provided by VA and DOD to describe characteristics of veterans diagnosed with MDD. In particular, the veteran characteristics we examined included the following: Age. We created seven categories for veterans' ages as of September 30, 2013—the end of fiscal year 2013, which corresponds to the last date of data we included in our analysis. These categories are as follows: (a) 18-24, (b) 25-34, (c) 35-44, (d) 45-54, (e) 55-64, (f) 65-74, and (g) 75 and older. Era of service. We categorized veterans as either being veterans of the recent conflicts in Iraq and Afghanistan—Operation Iraqi Freedom, Operation Enduring Freedom, and Operation New Dawn—or of other eras of service. Sex. We categorized veterans as either being female or male. Race and ethnicity. We created categories to describe veterans' race and ethnicity (Hispanic and non-Hispanic). These categories are consistent with the Office of Management and Budget's 1997 Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity. Extent to which veterans diagnosed with MDD were prescribed at least one antidepressant. Using data from the Pharmacy Benefits Management database, we examined the extent to which VA providers prescribed at least one antidepressant for veterans diagnosed with MDD from fiscal years 2009 through 2013. This includes antidepressants prescribed to treat depression as well as those prescribed to treat other conditions. The percentage of veterans with MDD dispensed a 12-week and a 6-month supply of an antidepressant. Using VA data we obtained from the Medical SAS Inpatient Datasets, Acute Care Dataset; Outpatient Encounter Files; Fee Basis Outpatient and Inpatient Services Files; and Pharmacy Benefits Management Database, we calculated the percentage of veterans with MDD dispensed a 12-week and a 6-month supply of an antidepressant according to statistical programming logic provided by VA. These measures are intended to assess the effectiveness of antidepressant medication management and are based on performance measures developed by the National Committee for Quality Assurance. In addition, these measures are consistent with the VA/DOD Clinical Practice Guideline for Management of Major Depressive Disorder, which indicates that continued antidepressant treatment, after acute depressive symptoms have resolved, decreases the incidence of relapse of MDD. We selected six VA medical centers (VAMC) at the following locations to visit: Canandaigua, New York; Gainesville, Florida; Iowa City, Iowa; Philadelphia, Pennsylvania; Phoenix, Arizona; and Reno, Nevada. These VAMCs represent different facility complexity groups, serve populations of veterans that differ in terms of the extent of use of mental health services, and are located in different Veterans Integrated Service Networks (VISN), or regional networks of care. To gather additional perspectives, for each VAMC we visited, we selected one associated community-based outpatient clinic to visit. In particular, we visited community-based outpatient clinics in the following locations: Cedar Rapids, Iowa; Globe, Arizona; Gloucester, New Jersey; Lecanto, Florida; Fallon, Nevada; and Rochester, New York. (See table 4.) 
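As a rough illustration of the classification and grouping logic described above, the sketch below flags a veteran as having MDD if, in any single fiscal year, the data show at least two outpatient encounters or at least one inpatient stay with an MDD diagnosis, and it bins ages as of September 30, 2013, into the seven categories we used. The record layout and field names are hypothetical simplifications for illustration, not VA's actual data structures or the statistical programming logic VA provided.

```python
from collections import Counter

def has_mdd(encounters):
    """encounters: list of dicts with hypothetical keys 'fiscal_year',
    'setting' ('outpatient' or 'inpatient'), and 'mdd_diagnosis' (bool).
    A veteran qualifies in any fiscal year with two or more outpatient MDD
    encounters or at least one inpatient MDD stay."""
    outpatient, inpatient = Counter(), Counter()
    for e in encounters:
        if not e.get("mdd_diagnosis"):
            continue
        if e.get("setting") == "outpatient":
            outpatient[e["fiscal_year"]] += 1
        elif e.get("setting") == "inpatient":
            inpatient[e["fiscal_year"]] += 1
    return any(count >= 2 for count in outpatient.values()) or bool(inpatient)

AGE_CATEGORIES = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 74)]

def age_category(age_on_sept_30_2013):
    """Map an age as of September 30, 2013, to one of the seven report categories."""
    for low, high in AGE_CATEGORIES:
        if low <= age_on_sept_30_2013 <= high:
            return f"{low}-{high}"
    return "75 and older" if age_on_sept_30_2013 >= 75 else "under 18"

# Example: two outpatient MDD encounters in fiscal year 2012 qualify.
sample = [{"fiscal_year": 2012, "setting": "outpatient", "mdd_diagnosis": True},
          {"fiscal_year": 2012, "setting": "outpatient", "mdd_diagnosis": True}]
print(has_mdd(sample), age_category(47))  # True, "45-54"
```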
As part of our site visits, we reviewed a nongeneralizable sample of five medical records for each of these six VAMCs, for a total of 30 veterans. We reviewed these medical records to determine if the diagnostic code entered for all encounters—starting with the initial encounter in 2012 when the veteran was diagnosed with MDD and prescribed an antidepressant—was consistent with a diagnosis of MDD. To select medical records for review, we completed the following steps: Randomly generated a list of individuals with a new prescription for an antidepressant in calendar year 2012. Selected the first five individuals in the list that met the following criteria: Veteran status. Had a diagnosis of MDD in calendar year 2012. For the purposes of medical record reviews, we classified a veteran as having a diagnosis of MDD if, based on how the veteran's patient care encounters were coded or on the narrative contained in clinical notes in the veteran's medical record, the veteran had (a) at least two outpatient encounters with a diagnosis of MDD, or (b) at least one inpatient stay with a diagnosis of MDD. Had a new treatment episode of antidepressants in calendar year 2012. New treatment episodes were defined as an initiation of antidepressant treatment following a period during which the veteran was either (1) not prescribed an antidepressant or (2) noncompliant with and had not picked up prescriptions for a previously prescribed antidepressant. To ensure the reliability of the data we analyzed, we interviewed VA Central Office officials, reviewed relevant documentation and veterans' medical records, and conducted electronic testing to identify missing data and obvious errors. On the basis of these activities, we determined that the data we analyzed were sufficiently reliable for our purposes. However, as discussed in the report, we described limitations of the data due to the coding discrepancies we found. To examine the extent to which VAMCs are providing care to veterans with MDD who are prescribed antidepressants as recommended in the CPG, we reviewed relevant VA policy documents. On the basis of that review, we found that VA policy requires all care sites, VAMCs, and community-based outpatient clinics to provide evidence-based antidepressant treatment when indicated for depression and that such care must be consistent with current VA clinical practice guidelines. The relevant VA clinical practice guideline, the VA/DOD Clinical Practice Guideline for Management of Major Depressive Disorder, provides evidence-based recommendations for providers on how to monitor veterans prescribed antidepressants; these recommendations are based on a review of depression research outcomes. These recommendations are based on available research at the time of publication of the guideline and are intended to provide information to assist providers in treatment decision-making. From the guideline's recommendations related to monitoring veterans prescribed antidepressants, we judgmentally selected three recommendations for inclusion in our review. In particular, we selected recommendations that (1) had among the highest strength of research evidence, (2) were sufficiently specific to enable us to determine the extent to which VA providers were following the recommendation, and (3) would not require clinical judgment to determine the extent to which VA providers were following the recommendation. 
The following recommendations were included in our review: (1) to enhance antidepressant treatment, veterans should be educated on when to take the medication, possible side effects, risks, and the expected duration of treatment, among other things; (2) standardized assessments of depressive symptoms, such as the Patient Health Questionnaire-9, should be used to monitor treatment response at 4-6 weeks after initiation of treatment, after each change in treatment, and periodically thereafter until full remission is achieved; and (3) a plan should be developed that addresses the duration of antidepressant treatment, among other things. After selecting these recommendations for our review, we examined the extent to which veterans were receiving care consistent with these CPG recommendations at the six VAMCs we visited. To do this, we interviewed VAMC clinicians to determine whether and how they were following these recommendations. At each VAMC, officials interviewed included members of the executive leadership team, primary care and mental health providers, and pharmacists. Additionally, as part of our examination of the extent to which VAMCs are providing care consistent with the selected guideline recommendations, we reviewed the sample of five veterans' medical records per VAMC used as part of our review of MDD coding. For each medical record, we reviewed documentation contained in the selected veterans' medical records to assess the extent to which the antidepressant treatment-related care VA providers rendered was consistent with the selected CPG recommendations included in our review. Our review commenced with the encounter during which a VA clinician ordered an antidepressant to treat depressive symptoms. Our review ended after five follow-up encounters with a VA clinician, or sooner if the veteran did not have five follow-up encounters. Our review was limited to encounters during which the antidepressant treatment was reviewed, including encounters during which side effects and treatment effect were assessed, but no change was made to medication orders. We did not include, for example, an encounter with an orthopedist during which the fact that the veteran had been prescribed an antidepressant was simply noted. We provided the VAMCs with the instances where we found the medical record documentation was not consistent with the selected CPG recommendations. The VAMCs confirmed our answers or provided additional support if they believed the care was consistent with the CPG. To examine VA's oversight of the care VAMCs provide to veterans with MDD who are prescribed an antidepressant, we reviewed VA's oversight of the Uniform Mental Health Services in VA Medical Centers and Clinics handbook and CPG requirements and evaluated whether this oversight provides VA with adequate information to identify nonconformance with recommended practices, assess the risk of any nonconformance, and address nonconformance, as appropriate. As part of this review, we reviewed VA's oversight in the context of federal standards for internal control for risk assessment. The internal control standard for risk assessment refers to an agency's ability to comprehensively identify risks, assess the possible effect, if any, and determine what actions should be taken to mitigate significant risks. 
We then interviewed officials from VA Central Office, including officials from the Office of Mental Health Services (OMHS), the Office of Mental Health Operations, and the Office of Analytics and Business Intelligence, as well as officials from the six VISNs that oversee the VAMCs we visited, which are responsible for overseeing compliance with VA's requirements, including VA's requirement that all VA facilities provide evidence-based antidepressant treatment when indicated for depression and that such care be consistent with current VA clinical practice guidelines. Through our interviews, we obtained information on the oversight activities conducted by VA Central Office and the extent to which VA Central Office followed up with VAMCs to ensure that they corrected problems identified through these oversight activities. In addition, we obtained and reviewed relevant documents regarding VA oversight, including internal reports and VAMCs' plans to correct problems identified through oversight activities. To analyze the information VA requires VAMCs to collect on veteran suicides, we first reviewed VA policies, guidance, and documents related to VA's suicide prevention efforts to identify the mechanisms by which VA collects veteran suicide data from VAMCs. We also interviewed VA Central Office and other officials responsible for VA's suicide prevention program, including officials from OMHS and the Center of Excellence for Suicide Prevention. We also interviewed VAMC officials and relevant staff of the six VISNs for the sites we visited to obtain information on suicide prevention initiatives. Next, through the site visits to six VAMCs, we obtained documents and interviewed officials regarding the collection of veteran suicide data. We obtained all completed templates from the Behavioral Health Autopsy Program (BHAP) related to VA's collection of data on veterans that died by suicide as of the time of our site visit or at the time we requested the documents for virtual site visits. One VAMC had not completed any of these BHAP documents because it had not had a veteran die by suicide since the beginning of the program. Therefore, our analysis includes a review of documents from five of the six VAMCs we visited. Through review of the documents, we noted any fields missing data, such as a field that requires a yes or no answer but for which neither answer was provided. Additionally, using professional judgment, we identified fields in the documents to review based on whether the field related to aspects of VA treatment—including treatment for mental health conditions—and the date of the veteran's death. We identified these fields because they did not require clinical judgment to assess. Using the parameters in the corresponding guide for filling out these documents, including time frames, we compared these fields to information included in the veteran's medical record and noted differences between our answers and the answers provided by the VAMCs in the documents. To ensure that we received the final, submitted versions, we also requested these documents from VA Central Office for each of the five VAMCs. We compared these documents to the documents we received from the VAMCs. We used the documents from the VAMCs as the starting point; therefore, we only analyzed the templates for veterans identified by the VAMCs. During the course of our review, we learned that the template for these documents had changed over time. 
If additional fields were included in the templates obtained from VA Central Office, but were not originally included in the templates obtained from the VAMCs, we did not review these fields. We generally used the answer from the document obtained from VA Central Office, which is the final submitted version, unless a field originally had an answer in the template from the VAMC but was blank or not answered in the template from VA Central Office. In those cases, we used the answer from the VAMC document. We provided the VAMCs with the fields where the answers in the VAMCs' documents did not match our answers based on our review of the medical record. The VAMCs confirmed our answers or provided additional support for their original answer. Results from our review of veteran suicide data can be generalized to the VAMCs we visited, but cannot be generalized to other VAMCs. We received additional templates from VA Central Office, but these were not analyzed because the VAMC had not provided us with templates for these veterans. We conducted this performance audit from November 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Veterans Affairs' (VA) Uniform Mental Health Services in VA Medical Centers and Clinics handbook (Handbook) requires VA medical centers (VAMC) to have a suicide prevention coordinator whose responsibilities include establishing and maintaining a list of veterans assessed to be at high risk for suicide and monitoring these veterans. The Handbook also requires suicide prevention coordinators to ensure that providers follow up on missed appointments for high-risk veterans, both to ensure patient safety and to initiate problem-solving about any tensions or difficulties in the veteran's ongoing care. Whether a veteran is determined to be at high risk for suicide is based on clinical judgment made after an evaluation of risk factors (such as a history of past suicide attempts or recent discharge from an inpatient mental health unit), protective factors (such as positive social support, positive coping skills, and positive therapeutic relationships), and the presence or absence of warning signs. Indicators that a veteran is at high risk for suicide include a current verified report or witnessed suicide attempt; identification of current serious suicidal ideation that requires an immediate change in the treatment plan, such as hospitalization; and the presence of any of the following warning signs: threatening to hurt or kill oneself; looking for specific ways to kill oneself and seeking access to such means, such as pills or weapons; and talking or writing about death, dying, or suicide when these actions are out of the ordinary for the person. The Handbook requires each VAMC to have a process for establishing a patient record flag to help ensure that veterans determined to be at high risk for suicide are provided with follow up for all missed mental health and substance abuse appointments. 
The primary purpose of the patient record flag is to communicate to staff that a veteran is at high risk for suicide, and VA policy states that the presence of a flag should be considered when making treatment decisions. Suicide prevention coordinators are responsible for assessing, in conjunction with the treating clinician, the risk of suicide in individual veterans, ensuring these veterans have a "High Risk for Suicide" Patient Record Flag on their medical record, and reviewing the list of high-risk veterans at least every 90 days. We interviewed suicide prevention coordinators as part of our site visits with six VAMCs to obtain information on how they track veterans determined to be at high risk for suicide. At four of the VAMCs we visited, suicide prevention coordinators used an electronic spreadsheet to track information on these veterans. For example, the spreadsheets include information such as whether the veteran has a patient record flag on their medical record and when the flag needs to be reviewed, the date for the veteran's next scheduled follow-up appointment, whether the veteran has a safety plan, and the veteran's assigned psychiatrist. Officials from one VAMC told us that they maintain the list daily, adding and removing veterans as necessary. Officials stated that the circumstances under which a veteran would be removed from the spreadsheet varied, but veterans are generally removed because their patient record flag has been removed and the officials no longer consider the veteran to be at high risk for suicide. The two remaining VAMCs use other mechanisms to track veterans at high risk for suicide. Officials from one VAMC told us that they use the Suicide Prevention Application Network (SPAN) to query high-risk patients at the VAMC. The SPAN database contains veteran information, demographic characteristics, and information on suicide attempts and completed suicides, among other things. According to officials, the information for each veteran in SPAN includes the date the veteran was assessed as being at high risk, as well as the date that the veteran needs to be seen for follow up, if applicable. After our site visit, officials told us they plan to periodically pull a list of all veterans with an active high-risk flag in VA's medical record for the VAMC and cross-reference that list to veterans being tracked for high suicide risk by the suicide prevention coordinator in SPAN to ensure all high-risk veterans are tracked. Officials from the other VAMC told us that their case managers each have their own list of veterans that they track, and the suicide prevention coordinator we spoke with stated that he does not keep a master list of all veterans that are at high risk for suicide. Veterans Affairs medical centers (VAMC) collect and submit data on veteran suicides to the Department of Veterans Affairs (VA) Central Office through the Suicide Prevention Application Network (SPAN) and suicide behavior reports. VAMCs also collect and report data through root cause analyses. Additionally, VA Central Office uses the data from SPAN to prepare reports that are sent to the VAMCs and Veterans Integrated Service Networks (VISN). VA Central Office officials stated that they expect VAMCs and VISNs to use these reports and collected data to improve suicide prevention efforts and program evaluation. 
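To illustrate the tracking described above in simple terms, the sketch below checks a high-risk list for patient record flags whose 90-day review is overdue and for veterans without an upcoming follow-up appointment. The field names are hypothetical stand-ins for the spreadsheet columns and SPAN fields the coordinators described, not VA's actual systems.

```python
from datetime import date, timedelta

def overdue_flag_reviews(high_risk_list, today, review_interval_days=90):
    """Return (name, review_due_date) for veterans whose 'High Risk for Suicide'
    patient record flag has not been reviewed within the review interval.
    Each entry is a dict with hypothetical keys 'name', 'flag_last_reviewed',
    and 'next_appointment' (a date or None)."""
    overdue = []
    for veteran in high_risk_list:
        due = veteran["flag_last_reviewed"] + timedelta(days=review_interval_days)
        if due < today:
            overdue.append((veteran["name"], due))
    return overdue

def missing_follow_up(high_risk_list, today):
    """Return veterans on the list with no upcoming follow-up appointment."""
    return [v["name"] for v in high_risk_list
            if v.get("next_appointment") is None or v["next_appointment"] < today]

# Example: one veteran with an overdue flag review and no future appointment.
roster = [{"name": "Veteran A", "flag_last_reviewed": date(2014, 1, 2),
           "next_appointment": None}]
print(overdue_flag_reviews(roster, today=date(2014, 6, 1)))
print(missing_follow_up(roster, today=date(2014, 6, 1)))
```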
Through site visits at six VAMCs that we conducted as part of our review and through interviews with corresponding VISN officials, we identified examples of how some VAMCs and VISNs are utilizing veteran suicide data to improve their suicide prevention efforts. VAMC and VISN officials have used SPAN to create initiatives based on trends in the data. For example, officials at one VAMC stated that they use the information collected in SPAN to provide data for performing statistical analyses on the outreach conducted, to study suicide attempts and completions across the VAMC catchment area, to understand the means by which veterans are dying by suicide, and to study the use of high-risk flags. Officials at a VISN explained that through a review of the SPAN data about a year ago, they learned that 60 percent of suicides in the VISN were completed using a gun. After conducting research on the subject, the VISN began a firearm safety initiative, which includes notifying veterans by mail that they can receive four gun locks each upon request, with no questions asked. VAMC officials have made programmatic changes to their suicide prevention efforts based on trends in the suicide data they are collecting and reviewing. For example, at one VAMC, officials told us that they reviewed suicide behavior reports and, as a result of trends identified in these reports, drafted a policy for medication restriction for veterans at risk of overdosing. Specifically, over a 3-year period, five or six veterans receiving VA care repeatedly attempted suicide by overdose, typically when they were intoxicated. VAMC officials created a work group to draft policy that mitigates risk for medication overdose among high-risk veterans. At the time of our site visit, the group was exploring creating a patient record flag for overdose risk that would be included in the veteran's medical record, indicating that medication supplies should be restricted for these veterans, and the possibility of using automated pill dispensers to dispense medications to these veterans. Through their work reviewing suicide-related information, the suicide prevention team at another VAMC identified a trend in its suicide data. In particular, they noted that some veterans were given a 90-day supply of the same medications that they had recently tried to use to overdose. The suicide prevention team mentioned this to a clinical pharmacist who had also noticed this issue. The VAMC is now trying to restrict days of supply for these veterans, but there is no formal policy about this and no plans to craft such a policy. Additionally, officials from this VAMC stated that they have added items to the standardized suicide behavior report template to help them collect additional useful information, such as active medications and pain score at the time of the last visit. Officials from one VAMC stated that through a review of medical records and autopsy reports for veterans who died by suicide, they found that a vast majority of veterans who died by suicide were not being seen by a mental health provider. In response, officials provided education to primary care providers. VAMC officials also noticed that veterans receiving care for pain were dying by suicide at a high rate. As a result, the VAMC has started an initiative with the pain clinic, and, as part of this initiative, the chief of the pain management clinic consults with psychiatry on veterans at risk for suicide. 
Officials at a VISN described changes made in response to the suicide data in fiscal year 2012, which showed that a percentage of veterans who completed suicide had no ongoing mental health care. These veterans mainly received care from VA primary care providers. To address this, the VISN partnered with the Center of Excellence for Suicide Prevention and local university psychologists to help VA primary care providers at community-based outpatient clinics formulate mental health plans. Randall B. Williamson, (202) 512-7114, williamsonr@gao.gov. In addition to the contact named above, Marcia A. Mann, Assistant Director; Emily Binek; Muriel Brown; Stella Chiang; Cathleen Hamann; Melanie Krause; Daniel Lee; Lisa Opdycke; Sarah Resavy; and Jennifer Whitworth made key contributions to this report.
In 2013, VA estimated that about 1.5 million veterans required mental health care, including services for MDD. MDD is a debilitating mental illness associated with reduced quality of life and productivity and an increased risk for suicide. VA also plays a role in suicide prevention. GAO was asked to review how VA tracks veterans prescribed antidepressants and what suicide data VA uses in its prevention efforts. This report examines (1) VA's data on veterans with MDD, including those prescribed an antidepressant; (2) the extent to which veterans with MDD who are prescribed antidepressants receive recommended care and the extent to which VA monitors such care; and (3) the quality of data VA requires VAMCs to collect on veteran suicides. GAO analyzed VA data, interviewed VA officials, and conducted site visits to six VAMCs selected based on geography and population served. From each of these six VAMCs, GAO also reviewed five randomly selected medical records for veterans diagnosed with MDD and prescribed an antidepressant in 2012, as well as all completed BHAP templates. The results cannot be generalized across VA but provide insights. GAO's analysis of Department of Veterans Affairs (VA) data for fiscal years 2009 through 2013 shows that about 10 percent of veterans who received VA health care services were diagnosed with major depressive disorder (MDD). MDD is characterized by depressed mood or loss of interest, along with other symptoms, for 2 weeks or more that represent a change in the way individuals function from their previous behaviors. Because GAO found diagnostic coding discrepancies in 11 of the 30 veterans' medical records it reviewed from six VA medical centers (VAMC), VA's data may understate the prevalence of MDD among veterans being treated through VA, to the extent that such discrepancies may permeate VA's data. One treatment for MDD is the use of medications such as antidepressants. According to GAO's analysis, 94 percent of veterans diagnosed with MDD were prescribed at least one antidepressant. VA policy states that antidepressant treatment must be consistent with VA's current clinical practice guideline (CPG); however, GAO's review of 30 veterans' medical records identified deviations from selected MDD CPG recommendations for most veterans reviewed. For example, 26 of the 30 veterans were not assessed using a standardized assessment tool at 4 to 6 weeks after initiation of treatment, as recommended in the CPG. Additionally, 10 veterans did not receive follow-up within the time frame recommended in the CPG. GAO found that VA does not have a system-wide process in place to identify and fully assess whether the care provided is consistent with the CPG. As a result, VA does not know the extent to which veterans with MDD who have been prescribed antidepressants are receiving care as recommended in the CPG and whether appropriate actions are taken by VAMCs to mitigate potentially significant risks to veterans. The demographic and clinical data that VA collects on veteran suicides were not always complete, accurate, or consistent. VA's Behavioral Health Autopsy Program (BHAP) is a quality initiative to improve VA's suicide prevention efforts by identifying information that VA can use to develop policy and procedures to help prevent future suicides. The BHAP templates are a mechanism by which VA collects suicide data from VAMCs' review of veteran medical records. 
GAO's review of 63 BHAP templates at five VAMCs found that 40 of the templates that VAMCs submitted to VA Central Office had incomplete data. Also, GAO found that the BHAP templates VAMCs submitted contained inaccurate data. For example, 6 BHAP templates included a date of death that was incorrect based on information in the veteran's medical record, and 9 BHAP templates included an incorrect number of outpatient VA mental health visits in the last 30 days. Moreover, GAO found that VAMCs submitted inconsistent information because they interpreted VA's guidance on completing the BHAP templates differently. This situation was further exacerbated because BHAP templates prepared by VAMCs are generally not being reviewed at any level within the Department for completeness, accuracy, and consistency. Lack of complete, accurate, and consistent data and poor oversight can inhibit VA's ability to identify, evaluate, and improve ways to better inform its suicide prevention efforts. GAO recommends that VA identify and address MDD coding discrepancies; implement processes to review data and assess deviations from recommended care; and implement processes to improve completeness, accuracy, and consistency of veteran suicide data. VA concurred with GAO's recommendations and described its plans to implement them.
SSA administers two federal programs under the Social Security Act that provide benefits to people with disabilities who are unable to work: The DI program provides cash benefits to workers with disabilities and their dependents based on their prior earnings. The SSI program provides benefits to the elderly and individuals with disabilities if they meet the statutory test of disability and have income and assets that fall below levels set by program guidelines. The DI program was established in 1956 to provide monthly cash benefits to individuals who were unable to work because of severe long-term disability. SSA pays disability benefits to eligible individuals under Title II of the Social Security Act. An individual is considered eligible for disability benefits under the Social Security Act if he or she is unable to engage in any SGA because of a medically determinable impairment that (1) can be expected to result in death or (2) has lasted (or can be expected to last) for a continuous period of at least 12 months. To be eligible for benefits, individuals with disabilities must have a specified number of recent work credits under Social Security (specifically, working 5 out of the last 10 years or 20 quarters out of 40 quarters) at the onset of medical impairment. An individual may also be able to qualify based on the work record of a deceased spouse or of a parent who is deceased, retired, or considered eligible for disability benefits, meaning one disability beneficiary can generate multiple monthly disability payments. Benefits are financed by payroll taxes paid into the Federal Disability Insurance Trust Fund by covered workers and their employers, based on each worker's earnings history. Individuals are engaged in SGA if they have earnings above $940 per month in calendar year 2008 or $980 per month in calendar year 2009. SSA conducts work-related continuing disability reviews (CDR) to determine if beneficiaries are working at or above the SGA level. Each beneficiary is allowed a 9-month trial work period, during which the beneficiary is permitted to earn more than the SGA level without affecting his or her eligibility for benefits. The trial work period is one of several provisions in the DI program intended to encourage beneficiaries to resume employment. Once the trial work period is completed, beneficiaries are generally ineligible for future DI benefits unless their earnings fall below the SGA level during the 36-month extended period of eligibility (EPE). Work issue CDRs are triggered by several types of events, although most are generated by SSA's Continuing Disability Review Enforcement Operation. This process involves periodic computer matches between SSA's administrative data and Internal Revenue Service (IRS) wage data. Work CDRs can also be triggered by other events. For example, SSA requires beneficiaries to undergo periodic medical examinations to assess whether they continue to be considered eligible for benefits. During such reviews, SSA's staff sometimes discovers evidence that a beneficiary may be working and usually forwards the case to an SSA field office or program service center for earnings/work development. Additional events that may trigger a work CDR include reports from state vocational rehabilitation agencies, reports from other federal agencies, and anonymous tips. Finally, DI beneficiaries may voluntarily report their earnings to SSA by visiting an SSA field office or calling the agency's toll-free number. 
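The earnings-based triggers described above lend themselves to a short illustration. The sketch below flags months in which matched wage data exceed the SGA threshold for the relevant calendar year and applies a simple trial-work-period count; it is a simplified illustration of the concept using only the 2008 and 2009 figures cited in this report, not SSA's actual Continuing Disability Review Enforcement Operation logic.

```python
# SGA monthly earnings thresholds for nonblind beneficiaries cited in this report.
SGA_THRESHOLDS = {2008: 940, 2009: 980}

TRIAL_WORK_MONTHS = 9  # months a beneficiary may earn above SGA without losing eligibility

def months_over_sga(monthly_wages):
    """monthly_wages: list of (year, month, wages) tuples from matched wage data.
    Returns the months in which wages exceeded the SGA threshold for that year."""
    flagged = []
    for year, month, wages in monthly_wages:
        threshold = SGA_THRESHOLDS.get(year)
        if threshold is not None and wages > threshold:
            flagged.append((year, month, wages))
    return flagged

def may_warrant_work_cdr(monthly_wages):
    """Crude screening indicator only: earnings above SGA in more months than the
    trial work period allows suggests a work CDR is needed to evaluate eligibility."""
    return len(months_over_sga(monthly_wages)) > TRIAL_WORK_MONTHS

# Example: wages of $1,200 in each month of 2009 exceed the $980 threshold 12 times.
wages_2009 = [(2009, m, 1200) for m in range(1, 13)]
print(len(months_over_sga(wages_2009)), may_warrant_work_cdr(wages_2009))  # 12 True
```

A flag of this kind would only be a starting point; as discussed below, a determination to continue or suspend benefits requires a detailed, case-by-case evaluation.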
SSA increased work-related CDRs from about 106,500 in fiscal year 2003 to about 175,600 in fiscal year 2006. However, the number of work CDRs has decreased slightly since 2006, and SSA projects that it will conduct about 174,200 work CDRs in fiscal year 2010. Created in 1972, the SSI program is a nationwide federal cash benefit program administered by SSA that provides a minimum level of income to financially needy individuals who are aged, blind, or considered eligible for benefits because of physical or mental impairments. Payments under the SSI program are made under Title XVI of the Social Security Act and are funded from the government's General Fund, which is financed through tax payments from the American public. Individuals are not eligible for SSI payments for any period during which they have income or resources that exceed the allowable amounts established under the Social Security Act. In addition, SSA verifies relevant information from independent or collateral sources to ensure that such payments are correct and are provided only to eligible individuals. SSI recipients are required to report events and changes of circumstances that may affect their eligibility and payment amounts, including changes in income, resources, and living arrangements. SSI generally reduces the monthly benefit by $1 for every $2 of monthly earnings after the first $85 (for example, $285 in monthly earnings would reduce the benefit by $100). SSA has implemented measures to help identify SSI recipients with excess income, excess resources, or both, such as periodically conducting redeterminations to verify whether recipients are still eligible for and receiving the correct SSI payments. A redetermination is a review of a recipient's nonmedical eligibility factors, such as income, resources, and living arrangements. There are two types of redeterminations: scheduled and unscheduled. Scheduled redeterminations are conducted periodically depending on the likelihood of payment error. Unscheduled redeterminations are conducted based on a report of change in a recipient's circumstances or if SSA otherwise learns about a change that may affect eligibility or payment amount. SSA has deferred a significant number of SSI redeterminations since fiscal year 2003. Although SSA increased the number of SSI redeterminations in fiscal year 2009 above the 2008 level, the number of reviews remains significantly below the fiscal year 2003 level. Specifically, SSA conducted about 719,000 SSI redeterminations in fiscal year 2009, 30 percent fewer than it did in fiscal year 2003. However, if SSA completes the number of SSI redeterminations it is projecting for fiscal year 2010, it will be close to the fiscal year 2003 level. Our overall analysis found thousands of federal employees, commercial drivers, and owners of commercial vehicle companies who were receiving Social Security disability benefits during fiscal year 2008. It is impossible to determine from data mining alone the extent to which beneficiaries improperly or fraudulently received disability payments. To adequately assess an individual's work status, a detailed evaluation of all the facts and circumstances should be conducted. This evaluation would include contacting the beneficiary and the beneficiary's employer, obtaining corroborating evidence such as payroll data and other financial records, and evaluating the beneficiary's daily activities. Based on this evaluation, a determination can be made as to whether the individual is entitled to continue to receive SSA disability payments or should have such payments suspended. 
As such, our analysis provides an indicator of potentially improper or fraudulent activity related to federal employees, commercial drivers, and owners of commercial vehicle companies receiving SSA disability payments. Our case studies, discussed later, confirmed some examples in which individuals received SSA disability payments that they were not entitled to receive. Our analysis of federal civilian salary data and SSA disability data found that about 7,000 individuals at selected agencies had been wage-earning employees for the federal government while receiving SSA disability benefits during fiscal year 2008. The exact number of individuals who may be improperly or fraudulently receiving SSA disability payments cannot be determined without detailed case investigations. Our analysis of federal salary data from October 2006 through December 2008 found that about 1,500 federal employees' records indicate that they may be improperly receiving payments. The individuals were identified using the following criteria: (1) DI beneficiaries who received more than 12 months of federal salary payments above the maximum SSA earnings threshold for the DI program (e.g., $940 per month for nonblind DI beneficiaries during calendar year 2008) after the start date of their disabilities, or (2) SSI recipients who received more than 2 months of federal salary above the maximum SSA earnings threshold for the SSI program after the start date of their disabilities. Based on their SSA benefit amounts, we estimate that these approximately 1,500 federal employees received about $1.7 million in payments monthly. Table 1 summarizes the types of SSA disability benefits received by these approximately 1,500 federal employees. Figure 1 shows that 379 of the approximately 1,500 federal employees were U.S. Postal Service workers and 241 were DOD civilian employees. The remainder were other federal civilian employees. According to SSA officials, SSA currently does not obtain payroll records from the federal government to identify SSA disability beneficiaries or recipients who are currently working. SSA officials stated that they have not conducted a review to determine the feasibility of conducting such a match. However, SSA acknowledged that these payroll records may be helpful in more quickly identifying individuals who are working so that work CDRs could be performed to evaluate whether those individuals should have their disability payments suspended. Our analysis of data from DOT on commercial drivers and from SSA on disability beneficiaries found that about 600,000 individuals had been issued CDLs and were receiving full Social Security disability benefits. The actual number of SSA disability beneficiaries with active CDLs cannot be determined for two reasons. First, states maintain the current status of CDLs, not DOT. Second, possession of a CDL does not necessarily indicate that the individual returned to work. Because federal regulations require interstate commercial drivers to be examined once every 2 years and certified by a licensed medical examiner as physically able to drive a commercial vehicle, we made a nonrepresentative selection of 12 states to determine how many SSA disability beneficiaries had CDLs issued after their disabilities were determined by SSA. Of the 600,000 CDL holders receiving Social Security disability benefits, about 144,000 of these individuals were from our 12 selected states. 
As figure 2 shows, about 62,000 of these 144,000 individuals, or about 43 percent, had CDLs that were issued after SSA determined that the individuals met the federal requirements for full disability benefits. As a result, we consider the issuance of CDLs to be an indication that these individuals may no longer have serious medical conditions and may have returned to work. Our analysis of DOT data on commercial carriers found about 7,900 individuals who registered as transportation businesses and also received SSA disability benefits. The extent to which these business registrants are obtaining disability benefits fraudulently, improperly, or both is not known because each case must be investigated separately for such a determination to be reached. These companies may have gone out of business and not reported their closure to DOT, which would explain why they remained registered. In addition, DI beneficiaries may have a passive interest in the business, which would not affect their eligibility for benefits. However, we believe that the registration of a business is an indicator that the individual could be actively engaged in the management of the company and gainfully employed, potentially disqualifying him or her from receiving either DI or SSI benefits. It also suggests that the individual's assets may exceed the SSI maximum for eligibility. According to SSA officials, SSA currently does not obtain CDL or transportation business registrant records from DOT. SSA officials stated that these records do not have specific income records associated with them. Based on our overall analysis above, we nonrepresentatively selected 20 examples of federal employees, commercial drivers, and registrants of commercial vehicle companies who received disability payments fraudulently and/or improperly. As mentioned earlier, the 20 cases were selected primarily on the basis of our analysis of SSA electronic and paper files, considering the higher overpayment amounts, the types of employment, and the locations of employment, and thus they cannot be projected to other federal employees, commercial drivers, or commercial vehicle owners who received SSA disability payments. In each case, SSA's internal controls did not prevent improper and fraudulent payments, and as a result, tens of thousands of dollars of overpayments were made to individuals for 18 of these 20 cases. In fact, in one case, we estimate that SSA improperly paid an individual over $100,000 in disability benefits. For 10 of the 20 cases, SSA continued to pay these individuals their SSA disability benefits through October 2009 primarily because the agency had not yet identified their ineligibility for benefits. For the other cases, SSA has terminated the disability benefits and has negotiated repayment agreements for 2 of those cases. Our investigations found that five individuals committed fraud in obtaining SSA disability benefits because they knowingly withheld employment information from SSA. Fraud is "a knowing misrepresentation of the truth or concealment of a material fact to induce another to act to his or her detriment." Although SSA instructions provided to beneficiaries require them to report their earnings to SSA in a timely manner to ensure that they remain eligible for benefits, several individuals knowingly did not notify SSA of their employment. 
Our investigations also found that 11 individuals potentially committed fraud because these individuals likely withheld required employment information from SSA. Most of these individuals claimed that they reported their employment information to SSA. However, according to SSA officials, for all 11 individuals, SSA did not have any tangible documentation in its files that these individuals actually reported their employment status to SSA. SSA officials stated that their workers are required to document all contacts in their files and that these purported contacts regarding employment notifications were likely never made. Finally, our investigations found four cases with no evidence of fraud but, rather, of administrative error. In these situations, the beneficiaries told our investigators that they reported their employment to SSA and SSA had evidence in its files that such contact did occur. Thus, we concluded that SSA made improper payments to these individuals because SSA was aware of the employment but continued to make disability payments to those individuals. During our investigations of the 20 cases, we also noted the following: SSA has an automated process, called Automated Earnings Reappraisal Operations (AERO), that screens changes in an individual’s earnings record and uses that information to compute changes in the monthly disability benefit payment. However, SSA currently does not use AERO to identify individuals who return to work and alert SSA staff to review these individuals’ records for possible suspension of disability payments. As a result, SSA increased the monthly disability benefits of several individuals based on the higher wages the individuals’ current employers reported to the agency but did not properly suspend the payments to those individuals. Four individuals received additional disability benefits because they had dependent children living with them. One individual was hired by a federal agency during the required waiting period prior to becoming eligible for benefits. This individual also improperly received additional government medical assistance (i.e., Medicare) based on the SSA disability determination. Certain individuals who claim that they are unable to immediately repay the disability benefits they improperly received can be put on long-term repayment plans that span years or decades. Although SSA has the authority to charge interest and penalties, SSA did not do so on these agreements. As a result, several individuals from our cases were placed in long-term, interest-free repayment plans for improperly accepting disability overpayments. For 1 of our 20 cases, SSA placed an individual on a repayment plan to repay approximately $33,000 in overpayments through $20 monthly installments. Based on this agreement, it will take over 130 years to repay this debt, exceeding the life expectancy for this individual. For 18 of these 20 cases, the individuals also received $250 stimulus checks as part of the Recovery Act while they were improperly receiving SSA disability payments. According to SSA officials, most of these individuals were entitled to and would receive the $250 stimulus checks even if SSA had properly suspended the disability payments to them. Specifically, SSA officials stated that beneficiaries covered under the DI program would have been covered under EPE, which is a 36-month period in which SSA does not pay any benefit amounts (i.e., payments are suspended) if the beneficiary has earnings above the maximum SSA SGA threshold. 
According to SSA officials, all working beneficiaries covered by EPE received the $250 stimulus check. The Recovery Act states that these stimulus benefit payments should be provided to individuals who are entitled to DI benefit payments or are eligible for SSI cash benefits. SSA stated that it did not seek a formal legal determination as to whether individuals who had their payments suspended because of employment should receive these stimulus payments. In total, SSA paid about $10.5 million in stimulus payments to approximately 42,000 individuals who were covered by EPE. However, we believe that a question exists as to whether these payments were proper and believe that SSA should have at least sought a formal legal opinion before making the payments. Table 2 highlights 10 of the 20 individuals we investigated. Table 3 in appendix I describes the other 10 individuals that we investigated. For 3 of these 20 cases, we videotaped the individuals who had improperly received disability benefits working at their federal government jobs. (See http://www.gao.gov/products/GAO-10-444.) In all 20 cases, we found that SSA had improperly paid the Social Security disability benefits. While it is important to encourage individuals with disabilities to return to work, SSA must also ensure that it has an effective system in place to maintain its program integrity. SSA has a stewardship responsibility to identify those individuals who have returned to work and are no longer eligible for benefits. Because of limited resources, SSA must effectively allocate its resources to identify such individuals. Federal payroll records and the AERO process are tools that SSA could utilize to initiate timely reviews and minimize improper and fraudulent payments. To enhance SSA's ability to detect and prevent fraudulent and improper payments in its disability programs, we recommend that the Commissioner of Social Security take the following two actions to improve the agency's processes: Evaluate the feasibility (including consideration of any costs and operational and system modifications) of incorporating the AERO process to identify individuals who have returned to work. Evaluate the feasibility of periodically matching SSA disability beneficiaries and recipients to federal payroll data. Such matches would provide SSA with more timely data to help it systematically and more effectively identify federal employees who are likely to incur overpayments. We provided a draft of this report to SSA and DOT for comment. DOT stated that it did not have comments on the report. SSA's comments, along with our responses, are reprinted in appendix IV, and its technical comments were incorporated throughout the report as appropriate. SSA agreed with all our recommendations. SSA stated that it will evaluate the feasibility of using the AERO process. In addition, SSA stated that it will review the efficacy of matching federal salary payment records with SSA disability files of DI beneficiaries and SSI recipients. We encourage SSA to follow through on these recommendations. SSA also expressed concern that the overall message of our report is misleading and in some cases factually incorrect. We believe our report accurately describes the cases and our methodology. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to interested congressional committees, the Commissioner of Social Security, and the Secretary of Transportation. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. This appendix presents summary information on fraudulent and improper payments associated with 10 of our 20 case studies. Table 3 shows the remaining case studies that we audited and investigated. As with the 10 cases discussed in the body of this report, the Social Security Administration (SSA) did not prevent improper payment of Social Security disability benefits to these individuals. We referred all 20 cases to SSA management for collection action. The SSA Office of Inspector General has been informed of the 5 cases in which we believe the individuals committed fraud. We also referred the case involving the SSA employee to the SSA Office of Inspector General for investigation. Our investigations detailed examples of 20 federal employees, commercial drivers, and owners of commercial vehicle companies who fraudulently and/or improperly received disability payments. For the 20 cases, our investigations found the following: For six cases, SSA eventually identified the disability overpayment and sent notification letters to the individuals indicating that they would have to repay the debts. For 10 cases, the individuals were continuing to receive disability benefits as of October 2009. For 14 cases, the individuals claimed to have notified SSA that they had returned to work or to have asked SSA to terminate the disability benefits because they were no longer eligible because of employment income. However, for only 4 of these 14 cases did SSA have indications in its records that the individuals notified SSA of the return to work or requested termination of disability benefits. For 10 cases, SSA improperly increased the benefit amounts of the disability payments because the individuals had increases in the reported wages on which the disability benefit payments are based. For 18 cases, SSA sent the SSA beneficiaries and recipients the $250 economic stimulus check. For five cases, we believe that there is sufficient evidence that the beneficiaries committed fraud to obtain or continue receiving Social Security disability payments. For each of these five cases, we concluded that the individual withheld employment information from SSA to obtain or continue receiving disability payments. Table 4 provides these attributes for each selected case that we investigated. SSA's failure to promptly prevent improper disability payments for the DI and SSI programs has, in part, contributed to overpayments in these programs. The overpayment of DI and SSI benefits may come from beneficiaries who had their benefits suspended or terminated following a work CDR. Overpayments may also be caused by other types of events, including receipt of workers' compensation benefits, being in prison while receiving benefits, and medical improvement to the point where the individual no longer has disabilities. As shown in figure 3, in fiscal year 2004 the total net amount owed to SSA for DI and SSI overpayments was $7.6 billion. This debt increased significantly through fiscal year 2008, when individuals owed over $10.7 billion in overpayments of DI and SSI benefits. 
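The payroll match recommended earlier in this report is, in essence, a join of disability rolls with salary records followed by a count of post-onset months with earnings above a program threshold, which is also the logic behind the screening criteria used to identify the approximately 1,500 federal employees. The hypothetical pandas sketch below illustrates that logic; the column names, data layout, and example thresholds are assumptions for illustration only, not SSA's or GAO's actual data structures or systems.

```python
# Hypothetical sketch of the screening logic described in this report:
# flag beneficiaries with more than a specified number of months of salary
# above a program earnings threshold after their disability onset date
# (e.g., more than 12 months above $940 for nonblind DI beneficiaries).
# Column names and inputs are assumed for illustration only.
import pandas as pd

def flag_potential_overpayments(disability: pd.DataFrame,
                                payroll: pd.DataFrame,
                                threshold: float,
                                min_months: int) -> list:
    """Return IDs of beneficiaries with more than min_months of monthly
    salary above the threshold after their disability onset date."""
    merged = payroll.merge(disability, on="person_id")
    post_onset = merged[merged["pay_month"] >= merged["disability_onset"]]
    above = post_onset[post_onset["monthly_salary"] > threshold]
    months_above = above.groupby("person_id")["pay_month"].nunique()
    return months_above[months_above > min_months].index.tolist()

# Example usage (DI criterion cited in this report):
# flagged = flag_potential_overpayments(di_rolls, federal_payroll,
#                                        threshold=940, min_months=12)
```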
The following are GAO’s comments on the Social Security Administration’s letter dated May 28, 2010. 1. In the report, we identify those cases where SSA has sent an overpayment notification letter to the individual. However, we do not believe that identifying fraudulent or improper payments after dollars have been disbursed is an effective internal control. Our work across the government has shown that once fraudulent or improper payments are made, the government is likely to only recover pennies on the dollar. Preventive controls are the most efficient and effective. 2. In the report, we state that to adequately assess an individual’s work status, a detailed evaluation of all the facts and circumstances should be conducted. This evaluation would include contacting the beneficiary and the beneficiary’s employer, obtaining corroborating evidence such as payroll data and other financial records, and evaluating the beneficiary’s daily activities. Based on this evaluation, a determination can be made on whether the individual is entitled to continue to receive SSA disability payments or whether such payments should be suspended. As such, our analysis provides an indicator of potentially improper or fraudulent activity related to federal employees, commercial drivers, and owners of commercial vehicle companies receiving SSA disability payments. 3. Our report described two cases of transportation drivers and owners who fraudulently and/or improperly received SSA disability payments. We do not believe that a change to the title is necessary. 4. We believe that SSA should perform the match with more current federal payroll records to determine the efficacy of matching federal salary payment records with SSA disability files of DI beneficiaries and SSI recipients. 5. We revised the report to address SSA’s specific comment. 6. IRS provides summary earnings data for a calendar year. We have previously reported that the IRS earnings data used by SSA in its enforcement operations are typically 12 to 18 months old when SSA first receives them, thus making some overpayments inevitable. The federal payroll data provide detailed earnings information for each pay period (e.g., all 26 pay periods for a fiscal year). We believe that these data are more useful in the determination of whether continuing disability reviews and redeterminations should be conducted and could be more current. 7. We believe the footnote is appropriate for this report. 8. As we stated in the report, SSA has the authority to charge interest and penalties, but SSA did not do so on any of its agreements with beneficiaries in our case studies. 9. The American Recovery and Reinvestment Act of 2009 states that these stimulus benefit payments should be provided to individuals who are entitled to DI benefit payments or are eligible for SSI cash benefits. SSA did not seek a formal legal determination as to whether individuals who had their payments suspended because of employment—and were thus not receiving DI or SSI payments during November and December of 2008 or January of 2009—should receive these stimulus payments. We continue to believe that a question exists as to whether these payments were proper and believe that SSA should have at least sought a legal opinion before making the payments. 10. IRS may well collect some of these stimulus benefits payments through a reduction of the “Making Work Pay” tax credit. 
We simply stated the magnitude of the stimulus payments made to those individuals covered under the extended period of eligibility. However, we believe that relying on the IRS offset is not an effective internal control activity. 11. Our estimated overpayment amount was based on our review of detailed payroll records and discussion with the SSA beneficiary. We believe that our estimated overpayment is accurate. 12. Our estimated overpayment amount was based on our review of detailed payroll records and discussion with the SSA beneficiary. Detailed payroll records showed that the beneficiary’s earnings were never below the substantial gainful activity threshold. As such, our estimated overpayment is about $25,000.
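For reference, the stimulus payment total discussed in this report and in comment 10 above is consistent with simple arithmetic: approximately 42,000 individuals covered by EPE receiving a $250 payment each yields about $10.5 million. A brief check using only the approximate figures cited in this report is shown below.

```python
# Arithmetic check of the stimulus payment figures cited in this report.
recipients = 42_000        # approximate number of individuals covered by EPE
payment_per_person = 250   # Recovery Act one-time payment, in dollars
total = recipients * payment_per_person
print(f"${total:,}")       # $10,500,000, consistent with the roughly $10.5 million reported
```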
The Social Security Administration (SSA) administers two of the nation's largest cash benefits programs for people with disabilities: the Social Security Disability Insurance (DI) program, which provides benefits to workers with disabilities and their family members, and the Supplemental Security Income (SSI) program, which provides income for individuals with disabilities who have limited income and resources. In 2008, SSA provided about $142 billion in financial benefits for these two programs. As part of the American Recovery and Reinvestment Act of 2009, the federal government also paid $250 to each SSA recipient, such as DI beneficiaries, SSI recipients, and old-age retirement beneficiaries. GAO was asked to (1) determine whether federal employees and commercial drivers and company owners may be improperly receiving disability benefits and (2) develop case study examples of individuals who fraudulently and/or improperly receive these benefits. To do this, GAO compared DI and SSI benefit data to civilian payroll records of certain federal agencies and carrier/driver records from the Department of Transportation (DOT) and 12 selected states. GAO also interviewed SSA disability beneficiaries and recipients. GAO analysis of SSA and federal salary data found that there are indications that about 1,500 federal civilian employees may have improperly received benefits. In addition, GAO obtained data from 12 selected states and found that 62,000 individuals received or had renewed commercial driver's licenses after SSA determined that the individuals met the federal requirements for full disability benefits. Under DOT regulations, these individuals' eligibility must be medically certified every 2 years. Lastly, GAO found about 7,900 individuals with registered transportation businesses who were receiving SSA disability benefits. SSA regulations allow certain recipients to work and still receive their disability benefits. Thus, each case would require an investigation to determine whether there were fraudulent payments, improper payments, or both. The GAO analyses provide an indicator of potentially improper and fraudulent activity related to SSA benefits for federal employees, commercial drivers, and registrants of commercial vehicle companies. SSA currently does not perform a federal payroll or DOT records match to identify individuals improperly receiving benefits. GAO nonrepresentatively selected and investigated 20 examples of individuals who improperly and in some cases fraudulently received disability payments. For these 20 cases, SSA did not have the processes to effectively prevent improper and/or fraudulent payments. To see video clips of three individuals working at their federal jobs, see http://www.gao.gov/products/GAO-10-444 . GAO identified several issues arising from the investigations. For example, SSA continued to improperly pay individuals who informed SSA of their employment. Using a process called Automated Earnings Reappraisal Operations (AERO), SSA examined the earnings for several individuals and automatically increased these individuals' disability payments because of raises in salary from their federal employment. SSA officials stated that they currently do not use AERO to identify individuals who have returned to work. In addition, 18 individuals received $250 stimulus payments while they were improperly receiving SSA disability payments. GAO makes two recommendations for SSA to detect and prevent fraudulent and improper payments. 
SSA agreed with our recommendations, but disagreed with some facts presented.
In enacting HIPAA, the Congress sought in part to streamline the flow of information integral to the operation of the health care system while protecting confidential health information from inappropriate access, disclosure, and use. HIPAA required the Secretary of HHS to submit recommendations to the Congress on privacy standards, addressing (1) the rights of the individual who is the subject of the information; (2) procedures for exercising such rights; and (3) authorized and required uses and disclosures of such information. HIPAA further directed that if legislation governing these privacy standards was not enacted within 3 years of the enactment of HIPAA—by August 21, 1999—the Secretary should issue regulations on the matter. HHS submitted recommendations to Congress on September 11, 1997, and when legislation was not enacted by the deadline, issued a draft regulation on November 3, 1999. After receiving over 52,000 comments on the proposed regulation, HHS issued a final regulation on December 28, 2000. Two key provisions in HIPAA defined the framework within which HHS developed the privacy regulation. HIPAA specifically applies the administrative simplification standards to health plans, health care clearinghouses (entities that facilitate the flow of information between providers and payers), and health care providers that maintain and transmit health information electronically. HHS lacks the authority under HIPAA to directly regulate the actions of other entities that have access to personal health information, such as pharmacy benefit management companies acting on behalf of managed care networks. HIPAA does not allow HHS to preempt state privacy laws that are more protective of health information privacy. Also, state laws concerning public health surveillance (such as monitoring the spread of infectious diseases) may not be preempted. HIPAA does not impose limits on the type of health care information to which federal privacy protection would apply. At the time the proposed regulation was issued, HHS sought to protect only health data that had been stored or transmitted electronically, but it asserted its legal authority to cover all personal health care data if it chose to do so. HHS adopted this position in the final regulation and extended privacy protection to personal health information in whatever form it is stored or exchanged—electronic, written, or oral. The new regulation establishes a minimum level of privacy protection for individually identifiable health information that is applicable nationwide. When it takes full effect, patients will enjoy new privacy rights, and providers, plans, researchers, and others will have new responsibilities. Most groups have until February 26, 2003, to come into compliance with the new regulation, while small health plans were given an additional year. The regulation protecting personal health information provides patients with a common set of rights regarding access to and use of their medical records. For the first time, these rights will apply to all Americans, regardless of the state in which they live or work. Specifically, the regulation provides patients the following: Access to their medical records. Patients will be able to view and copy their information, request that their records be amended, and obtain a history of authorized disclosures. Restrictions on disclosure. Patients may request that restrictions be placed on the disclosure of their health information. (Providers may choose not to accept such requests.) 
Psychotherapy notes may not be used by, or disclosed to, others without explicit authorization. Education. Patients will receive a written notice of their providers' and payers' privacy procedures, including an explanation of patients' rights and anticipated uses and disclosures of their health information. Remedies. Patients will be able to file a complaint with the HHS Office for Civil Rights (OCR) alleging that a user of their personal health information has not complied with the privacy requirements. Violators will be subject to civil and criminal penalties established under HIPAA. Providers, health plans, and clearinghouses—referred to as covered entities—must meet new requirements and follow various procedures, as follows: Develop policies and procedures for protecting patient privacy. Among other requirements, a covered entity must designate a privacy official, train its employees on the entity's privacy policies, and develop procedures to receive and address complaints. Obtain patients' written consent or authorization. Providers directly treating patients must obtain written consent to use or disclose protected health information to carry out routine health care functions. Routine uses include nonemergency treatment, payment, and an entity's own health care operations. In addition, providers, health plans, and clearinghouses must obtain separate written authorization from the patient to use or disclose information for nonroutine purposes, such as releasing information to lending institutions or life insurers. Limit employer use of health information. Employers that sponsor group health plans may not use personal health information obtained through the plan for employment-related decisions without explicit authorization from the individual. Furthermore, where staff administering the group health plan work in the same office as staff making hiring and promotion decisions, access to personal health information must be limited to those employees who perform health plan administrative functions. The regulation sets out special requirements for use of personal health information that apply to both federal and privately funded research: Researchers may use and disclose health information without authorization if it does not identify an individual. Information is presumed to be de-identified by removing or concealing all individually identifiable data, including names, addresses, phone numbers, Social Security numbers, health plan beneficiary numbers, dates indicative of age, and other unique identifiers specified in the regulation. Researchers who seek personal health information from covered entities will have two options. They can either obtain patient authorization or obtain a waiver from such authorization by having their research protocol reviewed and approved by an independent body—an institutional review board (IRB) or privacy board. In its review, the independent body must determine that the use of personal health information will not adversely affect the rights or welfare of the individuals involved, and that the benefit of the research is expected to outweigh the risks to the individuals' privacy. HHS and others within the federal government will have a number of specific responsibilities to perform under the regulations. Although regulating the privacy of health information no longer falls solely to the states, states will still be able to enact more stringent laws. Federal and state public officials may obtain, without patient authorization, personal health information for public health surveillance; abuse, neglect, or domestic violence investigations; health care fraud investigations; and other oversight and law enforcement activities. 
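The de-identification standard for research described above is, in concept, a rule for removing or concealing a defined set of identifiers before information is shared. The sketch below illustrates that idea with a hypothetical record layout; the field names are assumptions, and the regulation's actual list of identifiers and conditions is longer and more specific than shown here.

```python
# Simplified illustration of de-identification as described in this report:
# remove or conceal direct identifiers (name, address, phone number, SSN,
# health plan beneficiary number, dates indicative of age) before sharing a
# record for research. Field names are hypothetical; the regulation's actual
# identifier list is longer and more specific.
IDENTIFIER_FIELDS = {
    "name", "address", "phone_number", "ssn",
    "health_plan_beneficiary_number", "birth_date",
}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

# Example: only non-identifying fields remain after de-identification.
patient = {"name": "Jane Doe", "ssn": "000-00-0000", "birth_date": "1950-01-01",
           "diagnosis": "hypertension", "encounter_year": "2000"}
print(de_identify(patient))  # {'diagnosis': 'hypertension', 'encounter_year': '2000'}
```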
HHS’ OCR has broad authority to administer the regulation and provide guidance on its implementation. It will decide when to investigate complaints that a covered entity is not complying and perform other enforcement functions directly related to the regulations. HIPAA gives HHS authority to impose civil monetary penalties ($100 per violation up to $25,000 per year) against covered entities for disclosures made in error. It may also make referrals for criminal penalties (for amounts of up to $250,000 and imprisonment for up to 10 years) against covered entities that knowingly and improperly disclose identifiable health information. Among the stakeholder groups we interviewed, there was consensus that HHS had effectively taken into account many of the views expressed during the comment period. Most organizations also agreed that the final regulation improved many provisions published in the proposed regulation. At the same time, many groups voiced concerns about the merit, clarity, and practicality of certain requirements. Overall, considerable uncertainty remains regarding the actions needed to comply with the new privacy requirements. Although the regulation, by definition, is prescriptive, it includes substantial flexibility. For example, in announcing the release of the regulation, HHS noted that “the regulation establishes the privacy safeguard standards that covered entities must meet, but it leaves detailed policies and procedures for meeting these standards to the discretion of each covered entity.” Among the stakeholder groups we interviewed, the topics of concern centered on conditions for consent, authorization, and disclosures; rules pertaining to the business associates of covered entities; limited preemption of state laws; the costs of implementation; and HHS’ capacity to provide technical assistance. in the first place. Another representative commented that public confidence in the protection of their medical information could be eroded as a result of the marketing provisions. One representative also concluded that allowing patients the opportunity to opt out in advance of all marketing contacts would better reflect the public’s chief concern in this area. HHS officials told us that this option exists under the provision granting patients the right to request restrictions on certain disclosures but that providers are not required to accept such patient requests. Several organizations questioned whether the scope of the consent provision was sufficient. For example, American Medical Association (AMA) representatives supported the requirement that providers obtain patient consent to disclose personal health information for all routine uses, but questioned why the requirement did not apply to health plans. Plans use identifiable patient information for quality assurance, quality improvement projects, utilization management, and a variety of other purposes. The association underscored its position that consent should be obtained before personal health information is used for any purpose and that the exclusion of health plans was a significant gap in the protection of this information. AMA suggested that health plans could obtain consent as part of their enrollment processes. The American Association of Health Plans (AAHP) also expressed concerns about the scope of consent, but from a different perspective. 
AAHP officials believe that the regulation may limit the ability of the plans to obtain the patient data necessary to conduct health care operations if providers' patient consent agreements are drawn too narrowly to allow such data sharing. They suggested two ways to address this potential problem. First, if the health plans and network providers considered themselves an "organized health care arrangement," access to the information plans needed could be covered in the consent providers obtained from their patients. Second, plans could include language in their contracts with physicians that would ensure access to patients' medical record information. Pharmacy representatives questioned how pharmacies could obtain written consent prior to treatment—that is, filling a prescription for the first time. The American Health Information Management Association (AHIMA) similarly noted the timing issue for hospitals with respect to getting background medical information from a patient prior to admission. HHS officials told us that they believe the regulation contains sufficient flexibility for providers to develop procedures necessary to address these and similar situations. Research organizations focused on the feasibility of requirements for researchers to obtain identifiable health information. The regulation requires them to obtain patient authorization unless an independent panel reviewing the research waives the authorization requirement. Although this approach is modeled after long-standing procedures that have applied to federally funded or regulated research, the regulation adds several privacy-specific criteria that an institutional review board or privacy board must consider. The Association of American Medical Colleges (AAMC) and the Academy for Health Services Research and Health Policy expressed specific concerns over the subjectivity involved in applying some of the additional criteria. As an example, they highlighted the requirement that an independent panel determine whether the privacy risks to individuals whose protected health information is to be used or disclosed are reasonable in relation to the value of the research involved. Several groups were concerned about the requirement for covered entities to establish a contractual arrangement with their business associates—accountants, attorneys, auditors, and data processing firms, among others—that includes assurances for safeguarding the confidentiality of protected information. This arrangement was HHS' approach to ensure that the regulation's protections would be extended to information shared with others in the health care system. Some provider groups we spoke with were confused about the circumstances under which their member organizations would be considered covered entities or business associates. Others questioned the need for two covered entities sharing information to enter into a business associate contract. The regulation addresses one aspect of this concern. It exempts a provider from having to enter into a business associate contract when the only patient information to be shared is for treatment purposes. This exemption reflects the reasoning that neither entity fits the definition of business associate when they are performing services on behalf of the patient and not for one another. An example of such an exemption might include physicians writing prescriptions to be filled by pharmacists. Some groups also commented on the compliance challenges related to the business associate arrangement. 
For example, the representatives of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) noted that it would need to enter into contracts for each of the 18,000 facilities (including hospitals, nursing homes, home health agencies, and behavioral health providers) that it surveys for accreditation. However, JCAHO officials hope to standardize agreements to some extent and are working on model language for several different provider types. They explained that, because assessing quality of care varies by setting, JCAHO would need more than one model contract. Most of the groups we interviewed cited as a key issue the HIPAA requirement that the privacy standards preempt some but not all state laws. Although every state has passed legislation to protect medical privacy, most of these laws regulate particular entities on specific medical conditions, such as prohibiting the disclosure of AIDS test results. However, a few states require more comprehensive protection of patient records. The patient advocacy groups we spoke with believe that partial preemption is critically important to prevent the federal rule from weakening existing privacy protections. According to the Health Privacy Project, the federal regulation will substantially enhance the confidentiality of personal health information in most states, while enabling states to enact more far-reaching privacy protection in the future. Despite the limited scope of most state legislation at present, other groups representing insurers and employers consider partial preemption to be operationally cumbersome and argue that the federal government should set a single, uniform standard. Organizations that operate in more than one state, such as large employers and health plans, contend that determining what mix of federal and state requirements applies to their operations in different geographic locations will be costly and complex. Although they currently have to comply with the existing mix of state medical privacy laws, they view the new federal provisions as an additional layer of regulation. A representative of AHIMA remarked that, in addition to state laws, organizations will have to continue to take account of related confidentiality provisions in other federal laws (for example, those pertaining to substance abuse programs) as they develop policies and procedures for notices and other administrative requirements. The final regulation withdrew a provision in the proposed regulation that would have required HHS to respond to requests for advisory opinions regarding state preemption issues. HHS officials concluded that the volume of requests for such opinions was likely to be so great as to overwhelm the Department’s capacity to provide technical assistance in other areas. However, they did not consider it unduly burdensome or unreasonable for entities covered by the regulation to perform this analysis regarding their particular situation, reasoning that any new federal regulation requires those affected by it to examine the interaction of the new regulation with existing state laws and federal requirements. Several groups in our review expressed concern about the potential costs of compliance with the regulation and took issue with HHS’ impact analysis. In that analysis, the Department estimated the covered entities’ cost to comply with the regulation to be $17.6 billion over the first 10 years of implementation. 
Previously, HHS estimated that implementation of the other administrative simplification standards would save $29.9 billion over 10 years, more than offsetting the expenditures associated with the privacy regulation. HHS therefore contends that the regulation complies with the HIPAA requirement that the administrative simplification standards reduce health care system costs. HHS expects compliance with two provisions—restricting disclosures to the minimum information necessary and establishing a privacy official—to be the most expensive components of the privacy regulation, in both the short and the long term. Table 1 shows HHS' estimates of the costs to covered entities of complying with the privacy regulation. We did not independently assess the potential cost of implementing the privacy regulation, nor had the groups we interviewed. However, on the basis of issues raised about the regulation, several groups anticipate that the costs associated with compliance will exceed HHS' estimates. For example, Blue Cross and Blue Shield Association (BCBSA) representatives contended that member plans' training costs are likely to be substantial, noting that the plans encompass employees in a wide range of positions who will require specialized training courses. The American Hospital Association (AHA) cited concerns about potentially significant new costs associated with developing new contracts under the business associate provision. Other provider groups anticipated spending additional time with patients to explain the new requirements and obtain consent, noting that these activities will compete with time for direct patient care. Several groups, including AHA, AAMC, and AHIMA, expressed concerns about being able to implement the regulation within the 2-year time frame. Several associations plan to help their members comply by developing model forms, policies, and procedures for implementing the regulation. AMA expects to provide guidance to physicians and help with forms and notices on a national level, and noted that the state medical associations are likely to be involved in the ongoing analysis of each state's laws that will be required. Representatives of some organizations we contacted commented that they were unsure how the Department's OCR will assist entities with the regulation's implementation. They anticipate that the office, with its relatively small staff, will experience difficulty handling the large volume of questions related to such a complex regulation. OCR officials informed us that the office will require additional resources to carry out its responsibilities and that it is developing a strategic plan that will specify both its short- and its long-term efforts related to the regulation. To carry out its implementation responsibilities, HHS requested and received an additional $3.3 million in supplemental funding above its fiscal year 2001 budget of approximately $25 million. According to OCR, this amount is being used to increase its staff of 237 to support two key functions: educating the public and those entities covered by the rule about the requirements and responding to related questions. OCR officials told us that the office's efforts to date include presentations to about 20 organizations whose members are affected by the regulation, a hotline for questions, and plans for public forums. OCR officials said the office had received about 400 questions since the regulation was issued. 
Most of these inquiries were general questions relating to how copies of the regulation can be obtained, when it goes into effect, and whether it covers a particular entity. Other questions addressed topics such as the language and format to use for consent forms, how to identify organized health care arrangements, whether the regulation applies to deceased patients, and how a patient's identity should be protected in a physician's waiting room. According to OCR officials, technical questions that cannot be answered by OCR staff are referred to appropriate experts within HHS. Some of the uncertainty expressed by stakeholder groups reflects the recent issuance of the regulation. With time, everyone will have greater opportunity to examine its provisions in detail and assess their implications for the ongoing operations of all those affected. In addition, on a more fundamental level, the uncertainty stems from HHS' approach of allowing entities flexibility in complying with its requirements. Although organizations generally applaud this approach, they acknowledge that greater specificity would likely allay some of their compliance concerns. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For future contacts regarding this testimony, please call Leslie G. Aronovitz, Director, Health Care—Program Administration and Integrity Issues, at (312) 220-7600. Other individuals who made contributions to this statement include Hannah Fein, Jennifer Grover, Joel Hamilton, Rosamond Katz, Eric Peterson, Daniel Schwimer, and Craig Winslow.
Advances in information technology, along with an increasing number of parties with access to identifiable health information, have created new challenges to maintaining the privacy of medical records. Patients and providers alike have expressed concern that broad access to medical records by insurers, employers, and others may result in inappropriate use of the information. Congress sought to protect the privacy of individuals' medical information as part of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA included a timetable for developing comprehensive privacy standards that would establish rights for patients with respect to their medical records and define the conditions for using and disclosing identifiable health information. The final privacy regulation offers all Americans the opportunity to know and, to some extent, control how physicians, hospitals, and health plans use their personal information. At the same time, these entities will face a complex set of privacy requirements that are not well understood at this time. Some of the uncertainty expressed by stakeholder groups reflects the recent issuance of the regulation. With time, everyone will have greater opportunity to examine its provisions and assess their implications for the ongoing operations of everyone affected. In addition, on a more fundamental level, the uncertainty stems from HHS' approach of allowing entities flexibility in complying with its requirements. Although organizations generally applaud this approach, they acknowledge that greater specificity would likely allay some of their compliance concerns.
The results of our undercover tests illustrate flaws in WHD’s responses to wage theft complaints, including delays in investigating complaints, complaints not recorded in the WHD database, failure to use all available enforcement tools because of a lack of resources, failure to follow up on employers who agreed to pay, and a poor complaint intake process. For example, WHD failed to investigate a child labor complaint alleging that underage children were operating hazardous machinery and working during school hours. In another case, a WHD investigator lied to our undercover investigator about confirming the fictitious businesses’ sales volume with the Internal Revenue Service (IRS), and did not investigate our complaint any further. WHD successfully investigated 1 of our 10 fictitious cases, correctly identifying and investigating a business that had multiple complaints filed against it by our fictitious complainants. Five of our 10 complaints were not recorded in WHD’s database and 2 of 10 were recorded as successfully paid when in fact the fictitious complainants reported to WHD they had not been paid. To hear selected audio clips of these undercover calls, go to http://www.gao.gov/media/video/gao-09-458t/. Table 1 provides a summary of the 10 complaints that we filed or attempted to file with WHD. We identified numerous problems with the WHD response to our undercover wage theft complaints. Key areas where WHD failed to take appropriate action include delays in investigating complaints, complaints not recorded in the WHD database, failure to use available enforcement tools, failure to follow up on employers who agreed to pay, and a poor complaint intake process. Delays Investigating Complaints. WHD took more than a month to begin investigating five of our fictitious complaints, including three that were never investigated. In one case, the fictitious complainant spoke to an investigator who said she would contact the employer. During the next 4 months, the complainant left four messages asking about the status of his case. When he reached the investigator, she had taken no action on the complaint, did not recall speaking with him and had not entered the complaint in the WHD database. Complaints Not Recorded in Database. Five of our complaints were never recorded in WHD’s database. These complaints were filed with four different field offices and included three complaints in which WHD performed no investigative work and two complaints in which WHD failed to record the investigative work performed. For example, we left a message at one WHD office alleging that underage children were working at a meat packing plant during school hours and operating heavy machinery, such as meat grinders and circular saws. With respect to complaints, WHD policy states that those involving hazardous conditions and child labor are its top priority, but a review of WHD records at the end of our work showed that the case was not investigated or entered into WHD’s database. In another case, an investigator spoke to the fictitious employer, who refused to pay the complainant the back wages due. The investigator closed the conciliation without entering the case information or outcome into WHD’s database. This is consistent with the WHD Southeast regional policy of not recording the investigative work performed on unsuccessful conciliations. The effect of not recording unsuccessful conciliations is to make the conciliation success rate for the regional office appear better than it actually is. 
The number of complaints that are not entered into WHD's database is unknown, but this problem is potentially significant since 5 out of our 10 bogus complaints were not recorded in the database. Failure to Use All Enforcement Tools. According to WHD staff, WHD lacks the resources to use all enforcement tools in conciliations where the employer refuses to pay. According to WHD policy, when an employer refuses to pay, the investigator may recommend to WHD management that the case be elevated to a full investigation. However, only one of our three fictitious employers who refused to pay was placed under investigation. In one case, our fictitious employer refused to pay and the investigator accepted this refusal without question, informing the complainant that he could file a private lawsuit to recover the $262 due to him. When the complainant asked why WHD could not provide him more assistance, the investigator replied, "I've done what I can do, I've asked her to pay you and she can't…I can't wring blood from a stone," and then suggested the complainant contact his Congressman to ask for more resources for WHD to do its work. According to WHD policy and interviews with staff, WHD does not have the resources to conduct an investigation of every complaint and prefers to investigate complaints affecting large numbers of employees or resulting in large dollar amounts of back wages. One district director told us that conciliations result from "a mistake" on the part of the employer and he does not like his investigators spending time on them. However, when WHD cannot obtain back wages in a conciliation and decides not to pursue an investigation, the employee's only recourse is to file private litigation. Low-wage workers may be unable to afford attorney's fees or may be unwilling to argue their own case in small claims court, leaving them with no other options to obtain their back wages. Failure to Follow Up on Employers Who Agree to Pay. In 2 of our cases, the fictitious employer agreed to pay the back wages due and WHD recorded the conciliation as successful, even when the complainant notified the investigator that he had not been paid. In both cases, the investigator told the employer he was required to submit proof of payment, but only one of the investigators followed up when the employer failed to provide the required proof. The complainant in both cases later contacted the investigator to report he had not been paid. The investigator attempted to negotiate with both fictitious employers, but did not update the case entry in WHD's database to indicate that the complainant never received back wages, making it appear as though both cases were successfully resolved. These two cases cast doubt on whether complainants whose conciliations are marked "agreed to pay" in the WHD database actually received their back wages. Poor Complaint Intake Process. We found that WHD's complaint intake process is time-consuming and confusing, potentially discouraging complainants from filing a complaint. Of the 115 phone calls we made directly to WHD field offices, 87 (76 percent) went directly to voicemail. While some offices have a policy of screening complainant calls using voicemail, other offices have staff who answer the phone, but may not be able to respond to all incoming calls. In one case, WHD failed to respond to seven messages from our fictitious complainant, including four messages left in a single week. 
In other cases, WHD delayed over 2 weeks in responding to phone calls or failed to return phone calls from one of our fictitious employers. At least two WHD offices have no voice mailbox for the office's main phone number, preventing complainants from leaving a message when the office is closed or investigators are unavailable to take calls. One of our complainants received conflicting information about how to file a complaint from two investigators in the same office, and one investigator provided misinformation about the statute of limitations in minimum wage cases. At one office, investigators told our fictitious employee that they accept complaints only in writing by mail or fax, a requirement that delays the start of a case and is potentially discouraging to complainants. In addition, an investigator lied about contacting IRS to determine the annual sales for our fictitious employer, and then told our complainant that his employer was not covered by the FLSA. The FLSA applies to employees of enterprises that have at least $500,000 in annual sales or business done. Our complainant in this case told the investigator that his employer had sales of $1.5 million in 2007, but the investigator claimed that he had obtained information about the business from an IRS database showing that the fictitious business did not meet the gross revenue threshold for coverage under federal law. Our fictitious business had not filed tax returns and WHD officials told us that their investigators do not have access to IRS databases. A review of the case file also shows that no information from the IRS was reviewed by the investigator. Information related to this case was referred to Labor's Office of the Inspector General for further investigation. WHD successfully investigated a business that had multiple complaints filed against it by our fictitious complainants. WHD identified two separate conciliations ongoing against the same fictitious business, both originating from complaints filed by our fictitious complainants. These conciliations were combined into an investigation, the correct procedure for handling complaints affecting multiple employees. The investigator continued the investigation after the fictitious employer claimed that the business had filed for bankruptcy and attempted to visit the business when the employer stopped returning phone calls. The investigator did not use public records to verify that the employer had filed for bankruptcy, but otherwise made reasonable efforts to locate and investigate the business. Similar to our 10 fictitious scenarios, we identified 20 cases affecting at least 1,160 workers whose employers were inadequately investigated by WHD. We performed data mining on the WHISARD database to identify 20 inadequately investigated cases closed during fiscal year 2007. For several of these cases, WHD (1) did not respond to a complainant for over a year, (2) did not verify information provided by the employer, (3) did not fully investigate businesses with repeat violations, and (4) dropped cases because the employer did not return telephone calls. Ten of these case studies are presented in appendix II. Table 2 provides a summary of 10 case studies closed by WHD between October 1, 2006 and September 30, 2007. Case Study 1: Two garment factory workers filed complaints alleging that their former employer did not pay minimum wage and overtime to its workers. 
In early August 2006, an employee of the company informed WHD that the company was forcing employees to sign a document stating that they had been paid in compliance with the law before they could receive their paychecks. One of the complainants also confirmed to the WHD investigator that the employer was distributing this document. The next day, an investigator traveled to the establishment to conduct surveillance. The investigator took pictures of the establishment and did not speak with anyone from the company. No additional investigative work was done on this case until almost 2 months later when another investigator visited the establishment and found that the company had vacated the premises. A realty broker at the site informed the investigator that he did not believe the firm had relocated. As a result, WHD closed the investigation. Using publicly available information, we found that the business was active as of January 2009 and located at a different address approximately 3 miles away from its old location. We contacted the factory and spoke with an employee, who told us that the business had moved from the address WHD visited. Case Study 4: In July 2007, WHD received a complaint from a former corrections officer who alleged that a county Sheriff’s office did not pay $766 in minimum wage. The WHD investigator assigned to work on this case made two calls to the Sheriff’s office over a period of 2 days. Two days after the second call, WHD dropped this case because no one from the employer had returned the calls. WHD did not make additional efforts to contact the employer or validate the allegations. WHD informed the complainant that private litigation could be filed in order to recover back wages. We successfully contacted the Sheriff’s office in November 2008. Case Study 5: In May 2007, a non-profit community worker center contacted WHD on behalf of a day laborer alleging that his employer owed him $1,500 for the previous three pay periods. WHD contacted the employer, who stated that the complainant was actually an employee of a subcontractor, but refused to provide the name of the subcontractor. WHD closed the case without verifying the employer’s statements and informed the community worker center of the employee’s right to file private litigation. WHD’s case file indicates that no violations were found and the employer was in compliance with applicable labor laws. According to the Executive Director of the worker center, approximately 2 weeks later, WHD contacted him and claimed that the employer in the complaint had agreed to pay the back wages. When the employer did not pay, the complainant sued the employer in small claims court. During the course of the lawsuit the employer admitted that he owed the employee back wages. The court ruled that the employer owed the employee $1,500 for unpaid wages, the same amount in the original complaint to WHD. Case Study 8: In November 2005, WHD’s Salt Lake City District Office received a complaint alleging that a boarding school in Montana was not paying its employees proper overtime. Over 9 months after the complaint was received, the case was assigned to an investigator and conducted as an over the phone self-audit. According to the investigator assigned to the case, WHD was unable to conduct a full investigation because the boarding school was located over 600 miles from Salt Lake City and WHD did not have the resources to conduct an on-site investigation. 
The employer’s self-audit found that 93 employees were due over $200,000 in overtime back wages for hours worked between September 2004 and June 2005. WHD determined that the firm began paying overtime correctly in June 2006 based on statements made by the employer, but did not verify the statements through document review. After the employer’s attorney initially indicated that the firm would agree to pay the over $200,000 in back wages, WHD was unable to make contact with the business for over 5 months. WHD records indicate that the investigator believed that the firm was trying to find a loophole to avoid paying back wages. In June 2007, one week before the 2-year statute of limitations on the entire back wage amount was to expire, the employer agreed to pay $1,000 of the more than $10,800 in back wages on which the statute of limitations had not yet expired. The investigator refused to accept the $1,000, saying that it would have been “like settling the case.” WHD recorded the back wages computed as over $10,800 rather than $200,000, greatly understating the true amount owed to employees. WHD noted in the case file that the firm refused to pay the more than $10,800 in back wages, but did not recommend assessing penalties because staff felt the firm was not a repeat offender and there were no child labor violations. No further investigative action was taken and the complainant was informed of the outcome of the case. Case Study 10: In June 2003 and early 2005, WHD received complaints against two restaurants owned by the same enterprise. One complaint alleged that employees were working “off the clock” and servers were being forced to give 2.25 percent of their tips to the employer. The other complaint alleged off-the-clock work, illegal deductions, and minimum wage violations. This case was not assigned to an investigator until May 2005, over 22 months after the 2003 complaint was received. The WHD investigator assigned to this case stated that the delay in the case assignment was because of a backlog at the Nashville District Office that has since been resolved. WHD conducted a full investigation and found that 438 employees were due approximately $230,000 in back wages for minimum wage and overtime violations and for amounts collected through the employer-required tip pool. Although tip pools are not illegal, WHD determined that the employer’s tip pool was illegal because the company deposited the money into its business account. Further, the firm violated child labor laws by allowing a minor under 16 years old to work more than 3 hours on school days. The employer disagreed that the tip pool was illegal and stated that a previous WHD investigator had told him that it was acceptable. The employer agreed to pay back wages due for the minimum wage and overtime violations, but not the wages that were collected for the tip pool. WHD informed the employer that partial back wages would not be accepted and this case was closed. Information on 10 additional case studies can be found in appendix II. WHD’s complaint intake processes, conciliations, and other investigative tools are ineffective and often prevent WHD from responding to wage theft complaints in a timely and thorough manner, leaving thousands of low wage workers vulnerable to wage theft. Specifically, we found that WHD often fails to record complaints in its database and its poor complaint intake process potentially discourages employees from filing complaints. 
For example, 5 of our 10 undercover wage theft complaints submitted to WHD were never recorded in the database, including a complaint alleging that underage children were operating hazardous machinery during school hours. WHD’s conciliation process is ineffective because in many cases, if the employer does not immediately agree to pay, WHD does not investigate complaints further or compel payment. In addition, WHD’s poor record-keeping makes WHD appear better at resolving conciliations than it actually is. For example, WHD’s southeast region, which handled 57 percent of conciliations recorded by the agency in fiscal year 2007, has a policy of not recording unsuccessful conciliations in the WHD database. Finally, we found WHD’s processes for handling investigations and other non-conciliations were frequently ineffective because of significant delays. Once complaints were recorded in WHD’s database and assigned as a case to an investigator, they were often adequately investigated. WHD’s complaint intake process is seriously flawed, with both customer service and record-keeping issues. With respect to customer service, wage theft victims may file complaints with WHD in writing, over the phone, or in person. However, our undercover tests showed that wage theft victims can be discouraged to the extent that WHD never even accepts their complaints. We found that, in their efforts to screen complaints, some WHD staff actually deter callers from filing a complaint by encouraging employees to resolve the issue themselves, directing most calls to voicemail, not returning phone calls to both employees and employers, accepting only written complaints at some offices, and providing conflicting or misleading information about how to file a complaint. For example, the pre-recorded voice message at one office gives callers information on the laws WHD enforces, but when the message ends there are 23 seconds of silence before the call is directed to the voice message system that allows callers to file complaints, creating the impression that the phone call has been disconnected. WHD requires an investigator to speak with the employee before an investigation can be initiated, but a real low wage worker may not have the time to make multiple phone calls to WHD just to file a complaint and may give up when call after call is directed to voicemail and not returned. It is impossible to know how many complainants attempt to file a complaint but are discouraged by WHD’s complaint intake process and eventually give up. Regarding WHD’s record-keeping failures, we found that WHD does not have a consistent process for documenting and tracking complaints. This has resulted in situations where WHD investigators lose track of the complaints they have received. According to WHD policies, investigators should enter complaints into WHD’s database and either handle them immediately as conciliations or refer them to management for possible investigation. However, several of our undercover complaints were not recorded in the database, even after the employee had spoken to an investigator or filed a written complaint. This is particularly troubling in the case of our child labor complaint, because it raises the possibility that WHD is not recording or investigating complaints concerning the well-being and safety of the most vulnerable employees. Employees may believe that WHD is investigating their case, when in fact the information they provided over the phone or even in writing was never recorded. 
Since there is no record of these cases in WHD’s database, it is impossible to know how many complaints are reported but never investigated. According to several WHD District Directors, in conciliations where the employer refuses to pay, their offices lack the resources to investigate further or compel payment, contributing to the failures we identified in our undercover tests, case studies, and statistical sample. When an employer refuses to pay, investigators may recommend that the case be elevated to a full investigation, but several WHD District Directors and field staff told us WHD lacks the resources to conduct an investigation of every complaint and focuses resources on investigating complaints affecting large numbers of employees or resulting in large dollar amounts of back wage collections. Conducting a full investigation allows WHD to identify other violations or other affected employees, attempt to negotiate back wage payment with the employer and, if the employer continues to refuse, refer the case to the Solicitor’s Office for litigation. However, in some conciliations, the employer is able to avoid paying back wages simply by refusing. While WHD informs complainants of their right to file a lawsuit against their employers to recover back wages, it is unlikely that most low wage workers have the means to hire an attorney, leaving them with little recourse to obtain their back wages. WHD’s conciliation policy also limits the actions staff may take to resolve these cases. For example, WHD staff told us that complaints handled as conciliations must be completed in under 15 days from the time the complaint is assigned to an investigator, and at least one office allows investigators only 10 days to resolve conciliations, which may not allow time for additional follow-up work to be performed. WHD staff in one field office told us they are limited to three unanswered telephone calls to the employer before they are required to drop the case and advise the complainant of his right to file a lawsuit to recover back wages. Staff in several field offices told us that they are not permitted to make site visits to employers for conciliations. WHD investigators are allowed to drop conciliations when the employer denies the allegations, and WHD policy does not require that investigators review employer records in conciliations. In one case study, the employee stated that he thought the business was going bankrupt. WHD dropped the case, stating that the employer had declared bankruptcy, and informed the employee of his right to file a private lawsuit to recover back wages. Bankruptcy court records show that the employer had not filed for bankruptcy, and we confirmed that the employer was still in business in December 2008. One WHD investigator told us that it is not necessary to verify bankruptcy records because conciliations are dropped when the employer refuses to pay, regardless of the reason for the refusal. Our undercover tests and interviews with field staff also identified serious record-keeping flaws that make WHD appear better at resolving conciliations than it actually is. For example, WHD’s southeast region, which handled 57 percent of conciliations recorded by WHD in fiscal year 2007, has a policy of not recording investigative work performed on unsuccessful conciliations in the database. 
WHD staff told us that if employers do not agree to pay back wages, cannot be located, or do not answer the telephone, the conciliation work performed will not be recorded in the database, making it appear as though these offices are able to resolve nearly all conciliations successfully. Inflated conciliation success rates are problematic for WHD management, which uses this information to determine the effectiveness of WHD’s investigative efforts. Our undercover tests and interviews with WHD staff also raise questions about the reliability of conciliation information recorded in WHD’s database. As illustrated by our undercover tests, when an employer initially agreed to pay in a conciliation but later reneged, WHD investigators did not change the outcome of the closed case in WHISARD to show that the employee did not receive back wages. While some investigators wait for proof of payment before closing the conciliation, others told us that they close conciliations as soon as the employer agrees to pay. Even if the employee later tells the investigator that he has not been paid, investigators told us they do not change the outcome of a closed case in the WHD database. WHD publicly reports on the total back wages collected and the number of employees receiving back wages, but these statistics are overstated because an unknown number of conciliations recorded as successfully resolved in the WHD database did not actually result in the complainant receiving the back wages due. These poor record-keeping practices represent a significant limitation of the population we used to select our statistical sample because the number of conciliations actually performed by WHD cannot be determined and conciliations recorded as successfully resolved may not have resulted in back wages for the employees. As a result, the percentage of inadequate conciliations is likely higher than the failure rate estimated in our sample. We found that 5.2 percent of conciliations in our sample were inadequately conciliated because WHD failed to verify the employer’s claim that no violation occurred, closed the case after the employer did not return phone calls, or closed the case after the employer refused to pay back wages. However, we found that many of the conciliations recorded in WHD’s database were adequately investigated. One example of a successful conciliation involved a complaint alleging that a firm was not paying minimum wage. The complaint was assigned to an investigator the same day it was filed in September 2007. The WHD investigator contacted the owner, who admitted the violation and agreed to pay back wages of $1,500. The case was concluded the same day when the investigator obtained a copy of the complainant’s check from the employer and spoke to the complainant, confirming that he was able to cash the check and had received his back wages. We found WHD’s process for handling investigations and other non-conciliations was frequently ineffective because of significant delays. However, once complaints were recorded in WHD’s database and assigned as a case to an investigator, they were often successfully investigated. Almost 19 percent of non-conciliations in our sample were inadequately investigated, including cases that were not initiated until more than 6 months after the complaint was received, cases closed after an employer refused to pay, and cases that took over one year to complete. In addition, seven cases failed two of our tests. 
Six of the cases in our sample failed because they were not initiated until over 6 months after the complaint was received. According to WHD officials, non-conciliations should be initiated within 6 months of the date the complaint is filed. Timely completion of investigations by WHD is important because the statute of limitations for recovery of wages under the FLSA is 2 years from the date of the employer’s failure to pay the correct wages. Specifically, every day that WHD delays an investigation increases the risk that a portion of the complainant’s back wages will fall outside the recovery period. In one of our sample cases, WHD sent a letter to a complainant 6 months after his overtime complaint was filed stating that, because of a backlog, no action had been taken on his behalf. The letter requested that the complainant inform WHD within 2 business days of whether he intended to take private action. The case file shows no indication that the complainant responded to WHD. One month later, WHD assigned the complaint to an investigator and sent the complainant another letter stating that if he did not respond within 9 business days, the case would be closed. WHD closed the case on the same day the letter was sent. Our case studies discussed above and in appendix II also include examples of complaints not investigated for over a year, cases closed based on unverified information provided by the employer, businesses with repeat violations that were not fully investigated, and cases dropped because the employer did not return telephone calls. For example, in one case study, WHD found that 21 employees were due at least $66,000 in back wages for overtime violations. Throughout the investigation, the employer was uncooperative and resisted providing payroll records to WHD. At the end of the investigation, the firm agreed with WHD’s findings and promised to pay back wages, but then stopped responding to WHD. The employees were never paid back wages, and over a year later the Solicitor’s Office decided not to pursue litigation or any other action in part because the case was considered “significantly old.” The failures we identified resulted, in part, from the large backlog of cases in several WHD offices, investigators’ failure to compel cooperation from employers, and a lack of certain tools that would facilitate verification of employer statements. In several district offices, a large backlog prevents investigators from initiating cases within 6 months. One office we visited has a backlog of 7 to 8 months, while another office has a backlog of 13 months. Additionally, our analysis of WHD’s database shows that one district office did not initiate investigations of 12 percent of complaints until over one year after they were received, including a child labor complaint affecting over 50 minors. Because the statute of limitations to collect back wages under FLSA is 2 years, WHD is placing complainants at risk of collecting only a fraction of the back wages they would have been able to collect at the time of the complaint. WHD also failed to compel records and other information from employers. While WHD Regional Administrators are legally able to issue subpoenas, WHD has not extended this ability to individual investigators, who therefore depend on employers to provide records and other documentation voluntarily. In cases where public records are available to verify employer statements, WHD investigators do not have certain tools that would facilitate access to these documents. 
For example, we used a publicly available online database, Public Access to Court Electronic Records (PACER), to determine that an employer who claimed to have filed for bankruptcy had not actually done so. However, there is no evidence in the case file that the WHD investigator performed this check. WHD officials told us that WHD investigators do not receive training on how to use public document searches and do not have access to databases containing this information, such as PACER. We found that, once complaints were recorded in WHD’s database and assigned as a case to an investigator in a timely manner, they were often successfully investigated. As discussed above, WHD does not record all complaints in its database and discourages employees from filing complaints, some of which may be significant labor violations suitable for investigation. In addition, many cases are delayed months before WHD initiates an investigation. However, our sample identified many cases that were adequately investigated once they were assigned to an investigator. Specifically, 81.2 percent of the non-conciliations in our sample were adequately investigated. In one example of a successful investigation, a complaint alleging that a firm was not paying proper overtime was assigned to an investigator the same day it was filed in April 2007. The WHD investigator reviewed payroll records to determine that the firm owed the complainant back wages. The case was concluded within 3 months when the investigator obtained a copy of the complainant’s cashed check, proving that he had been paid his gross back wages of $184. Overall, our investigation clearly shows that the Department of Labor has left thousands of actual victims of wage theft who sought federal government assistance with nowhere to turn. Our work has shown that when WHD adequately investigates and follows through on cases, those cases are often resolved successfully; however, far too often many of America’s most vulnerable workers find themselves dealing with an agency concerned about resource limitations, with ineffective processes, and without certain tools necessary to perform timely and effective investigations of wage theft complaints. All too often, the result is unscrupulous employers taking advantage of our country’s low wage workers. Mr. Chairman and Members of the Committee, this concludes our statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov or Jonathan Meyer at (214) 777-5766 or meyerj@gao.gov. Individuals making key contributions to this testimony included Erika Axelson, Christopher Backley, Carl Barden, Shafee Carnegie, Randall Cole, Merton Hill, Jennifer Huffman, Barbara Lewis, Jeffery McDermott, Andrew McIntosh, Sandra Moore, Andrew O’Connell, Gloria Proa, Robert Rodgers, Ramon Rodriguez, Sidney Schwartz, Kira Self, and Daniel Silva. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. To review the effectiveness of WHD’s complaint intake and conciliation processes, GAO investigators attempted to file 11 complaints about 10 fictitious businesses with WHD district offices in Baltimore, Maryland; Birmingham, Alabama; Dallas, Texas; Miami, Florida; San Jose, California; and West Covina, California. These field offices handle 13 percent of all cases investigated by WHD. 
The complaints we filed with WHD included minimum wage, last paycheck, overtime, and child labor violations. GAO investigators obtained undercover addresses and phone numbers to pose as both complainants and employers in these scenarios. As part of our overall assessment of the effectiveness of investigations conducted by WHD, we obtained and analyzed WHD’s Wage and Hour Investigative Support and Reporting Database (WHISARD), which contained 32,323 cases concluded between October 1, 2006 and September 30, 2007. We analyzed WHD’s WHISARD database and determined it was sufficiently reliable for purposes of our audit and investigative work. We analyzed a random probability sample of 115 conciliations and 115 non-conciliations to contribute to our overall assessment of whether WHD’s processes for investigating complaints are effective. Because we followed a probability procedure based on random selections, our samples are only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of the particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn (see the illustrative sketch below). To determine whether an investigation was inadequate, we reviewed case files and confirmed details of selected cases with the investigator or technician assigned to the case. In our sample tests, conciliations were determined to be inadequate if WHD did not successfully initiate investigative work within 3 months or did not complete investigative work within 6 months. Non-conciliations were determined to be inadequate if WHD did not successfully initiate investigative work within 6 months, did not complete investigative work within 1 year, or did not refer cases to Labor’s Office of the Solicitor when the employer refused to pay. Both conciliations and non-conciliations were determined to be inadequate if WHD did not contact the employer, did not correctly determine coverage under federal law, did not review employer records, or did not compute and assess back wages when appropriate. We gathered additional information about WHD policies and procedures by reviewing training materials and the WHD Field Operations Handbook, conducting walk-throughs of investigative processes with management, and interviewing WHD officials. We gathered information about district office policies and individual cases by conducting site visits at the Miami and Tampa, Florida district offices, and conducting telephone interviews with technicians, investigators, and district directors in 23 field offices and headquarters officials in Washington, D.C. We also spoke with Labor’s Office of the Solicitor in Dallas, Texas, and Washington, D.C. To identify macro-level data on WHD complaints, we analyzed data for cases closed between October 1, 2006 and September 30, 2007 by region, district office, and case outcome. To identify case studies of inadequate WHD responses to complaints, we data-mined WHISARD to identify closed cases in which a significant delay occurred in responding to a complaint (cases taking more than 6 months to initiate or 1 year to complete), an employer could not be located, or the case was dropped when an employer refused to pay. 
We obtained and analyzed WHD case files, interviewed WHD officials, and reviewed publicly available data from online databases and the Department of the Treasury’s Financial Crimes Enforcement Network to gather additional information about these cases. We also interviewed complainants who contacted GAO directly or were referred to us by labor advocacy groups to gather information about WHD’s investigation of their complaints. Table 5 provides a summary of 10 additional case studies of inadequate Wage and Hour Division (WHD) investigations. These case studies include instances where (1) WHD dropped cases after employers refused to cooperate with an investigation, (2) WHD identified a violation but failed to force employers to pay employees their owed wages, and (3) WHD dropped a case after an employer alleged it was bankrupt when in fact it was not.
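The scope and methodology discussion above describes expressing sampling precision as a 95 percent confidence interval around an estimated proportion. The minimal sketch below is illustrative only and is not GAO’s actual estimation procedure; it shows one common way such an interval can be approximated for a sample proportion, using a normal approximation with a finite population correction. The sample size of 115 matches the report, but the number of inadequate cases (6) and the population size (10,000) are hypothetical placeholders.

    import math

    def proportion_ci_95(successes, sample_size, population_size):
        """Approximate 95 percent confidence interval for a population proportion,
        using a normal approximation with a finite population correction."""
        p_hat = successes / sample_size
        fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
        standard_error = math.sqrt(p_hat * (1 - p_hat) / sample_size) * fpc
        margin = 1.96 * standard_error  # z-value for a 95 percent confidence level
        return p_hat, max(0.0, p_hat - margin), min(1.0, p_hat + margin)

    # Hypothetical illustration: suppose 6 of the 115 sampled conciliations were
    # judged inadequate and the sampled population contained about 10,000 cases.
    estimate, lower, upper = proportion_ci_95(6, 115, 10_000)
    print(f"Estimate: {estimate:.1%}; 95 percent CI: {lower:.1%} to {upper:.1%}")

With these placeholder inputs, the interval spans roughly plus or minus 4 percentage points, consistent in spirit with the “plus or minus 5 percentage points” example cited above; GAO’s published estimates rely on its own survey estimation methods rather than this simplified calculation.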
The mission of the Department of Labor's Wage and Hour Division (WHD) includes enforcing provisions of the Fair Labor Standards Act, which is designed to ensure that millions of workers are paid the federal minimum wage and overtime. Conducting investigations based on worker complaints is WHD's priority. According to WHD, investigations range from comprehensive investigations to conciliations, which consist primarily of phone calls to a complainant's employer. In July 2008, GAO testified on 15 case studies where WHD failed to investigate complaints. This testimony highlights the findings of a follow-up investigation performed at the Committee's request. Specifically, GAO was asked to (1) test WHD's complaint intake process in an undercover capacity, (2) provide additional case study examples of inadequate WHD responses to complaints, and (3) assess the effectiveness of WHD's complaint intake process, conciliations, and other investigative tools. To test WHD's complaint intake process, GAO posed as complainants and employers in 10 different scenarios. To provide case study examples and assess effectiveness of investigations, GAO used data mining and statistical sampling of closed case data for fiscal year 2007. GAO plans to issue a follow-up report with recommendations concerning resource needs and the recording of complaints. GAO also confirmed key findings with WHD officials. GAO found that WHD frequently responded inadequately to complaints, leaving low wage workers vulnerable to wage theft. Posing as fictitious complainants, GAO filed 10 common complaints with WHD district offices across the country. The undercover tests revealed sluggish response times, a poor complaint intake process, and failed conciliation attempts, among other problems. In one case, a WHD investigator lied about investigative work performed and did not investigate GAO's fictitious complaint. At the end of the undercover tests, GAO was still waiting for WHD to begin investigating three cases--a delay of nearly 5, 4, and 2 months, respectively. Similar to the 10 fictitious scenarios, GAO identified 20 cases affecting at least 1,160 real employees whose employers were inadequately investigated. For example, GAO found cases where it took over a year for WHD to respond to a complaint, cases closed based on unverified information provided by the employer, and cases dropped when the employer did not return phone calls. GAO's overall assessment of the WHD complaint intake, conciliation, and investigation processes found an ineffective system that discourages wage theft complaints. With respect to conciliations, GAO found that WHD does not fully investigate these types of complaints or compel employers to pay. In addition, a WHD policy instructed many offices not to record unsuccessful conciliations in its database, making WHD appear better at resolving conciliations than it actually is. WHD's investigations were frequently delayed by months or years, but once complaints were recorded in WHD's database and assigned as a case to an investigator, they were often adequately investigated.
OPS regulates the safety of almost 2.2 million miles of pipelines, which is enough to circle the earth 88 times. There are three primary types of pipelines under OPS’ jurisdiction. Natural gas transmission pipelines— about 322,000 miles—transport natural gas over long distances from sources to communities. An additional 1.7 million miles of natural gas distribution pipelines continue transporting the gas throughout the communities to consumers. Finally, about 155,000 miles of hazardous liquid pipelines generally transport crude oil to refineries and continue to transport the refined oil product, such as gasoline, to product terminals and airports. These pipelines transport the bulk of natural gas and petroleum products in the United States and are the safest mode for transporting these potentially dangerous commodities. Although pipeline incidents resulted in an average of about 24 fatalities per year from 1989 to 2000, the number of pipeline incidents is relatively low when compared with those involving other forms of freight transportation. On average, about 66 people die each year in barge accidents, about 590 in railroad accidents, and about 5,100 in truck accidents. Despite the relative safety of pipelines, pipeline incidents can have tragic consequences, as evidenced by the incidents at Bellingham, WA, and Carlsbad, NM. These incidents, which caused 15 fatalities, highlighted the importance of pipeline safety and the need for more effective oversight by OPS. From 1989 through 2000, the total number of incidents per 10,000 miles of pipeline decreased by 2.9 percent annually, while the number of major pipeline incidents (those resulting in a fatality, an injury, or property damage of $50,000 or more) per 10,000 miles of pipeline increased by 2.2 percent annually. (See fig. 1.) Over the same time period, pipeline mileage increased 1.6 percent annually from 1.9 to 2.2 million miles of pipelines. Traditionally, OPS carried out its oversight responsibility by requiring all pipeline operators to comply with uniform, minimum standards. Recognizing that pipeline operators face different risks depending on such factors as location and the product they carry, OPS began exploring the concept of a risk-based approach to pipeline safety in the mid-1990s. In 1996, the Accountable Pipeline Safety and Partnership Act directed OPS to establish a demonstration program to test a risk-based approach. The Risk Management Demonstration Program went beyond OPS’ traditional regulatory approach by allowing individual companies to identify and focus on risks to their pipelines. Since the program’s initiation in 1997, OPS has approved six demonstration projects. Partly on the basis of OPS’ experience with the Risk Management Demonstration Program, the agency has moved forward with a new regulatory approach that requires pipeline operators to comprehensively identify and address risks to the segments of their pipelines that are located in “high consequence areas” where a leak or rupture would have the greatest impact. This approach requires individual pipeline operators to develop and follow an integrity management program. Each program must contain specific elements, including a baseline assessment of all pipelines that could affect high consequence areas, periodic reassessment of these pipeline segments, prompt action to address any problems identified in the assessments, and measures of the program’s effectiveness. 
Although OPS has issued final rules requiring integrity management programs for operators of hazardous liquid pipelines, the agency has not issued a proposed rule for operators of gas transmission pipelines. In December 2000, OPS issued a final rule for operators of “large” hazardous liquid pipelines, defined as pipeline systems of at least 500 miles. Under this rule, individual operators were required by December 31, 2001 to identify pipeline segments that can affect high consequence areas, and then develop a framework for their integrity management program and a plan for conducting baseline assessments by March 31, 2002. OPS issued a similar rule for operators of “small” hazardous liquid pipelines that are less than 500 miles long on January 16, 2002, with later deadlines. For natural gas transmission pipelines, OPS anticipates issuing a final rule in fall 2002. OPS plans to review and monitor operators’ programs for compliance with the integrity management requirements, but will not formally approve operator programs. OPS is currently in the first of a four-phase plan for reviewing and monitoring integrity management programs for operators of large hazardous liquid pipelines. In phase 1—scheduled to be completed by the end of April 2002—OPS is reviewing operators’ identification of pipeline segments that impact high consequence areas. During phase 2— from July 2002 to July 2004—OPS will inspect the more fully developed framework and assessment plans. After July 2004, OPS plans to monitor operators’ implementation of their individual programs through periodic inspections in phase 3, and review and respond to notifications from operators of changes in their programs in phase 4. OPS is hiring and training additional inspectors to review and monitor operators’ programs. OPS had 56 inspectors in fiscal year 2001 and plans to hire an additional 30 inspectors—a 54-percent increase—by the end of fiscal year 2003. OPS plans to augment its inspection force with contractor and state support as it develops the necessary expertise to review and monitor operators’ programs. OPS has also developed a list of training courses that will be required for federal and state inspectors, and it is currently scheduling this training. OPS officials anticipate that it will take about 2 years to provide this training to all federal and state inspectors. In addition to the integrity management programs, OPS is making progress on other initiatives for improving data, involving states, and increasing the use of fines. These initiatives are intended to improve pipeline safety and the agency’s oversight. DOT’s Inspector General, the National Transportation Safety Board, and others have reported that OPS’ data on pipeline incidents and infrastructure are limited and sometimes inaccurate. For example, in the past, OPS’ incident report forms have used only five categories of causes for incidents on natural gas distribution pipelines, four categories for those on natural gas transmission pipelines, and seven categories for those on hazardous liquid pipelines. As a result, about one-fourth of all pipeline incidents were attributed to “other causes,” which limited OPS’ ability to identify and focus on the causes of incidents. In addition, data on the amount of pipeline mileage in various infrastructure categories (such as age or size) are necessary for a meaningful comparison of the safety performance of individual pipeline companies. 
OPS did not require hazardous liquid pipeline operators to submit this type of data and did not collect complete data from natural gas pipelines. Finally, the information on incident reports filed by operators sometimes changes as the incident investigation proceeds. OPS did not have a procedure for ensuring that operators submitted revised reports when needed. OPS is taking action to collect data that will allow it to more accurately determine the causes of incidents, analyze industry trends, and compare the safety performance of operators. For example, OPS revised its incident report forms in 2001 for hazardous liquid and natural gas transmission incidents to include 25 categories of causes and plans to revise the form for natural gas distribution incidents by the end of 2002. Furthermore, OPS is assigning an inspector in each region to review incident report forms for completeness and accuracy, and has instituted new electronic notification procedures to ensure that operators submit revised incident reports, if necessary. OPS also plans to institute annual reports for hazardous liquid pipeline operators, and is in the process of revising annual report forms for all natural gas pipeline operators. Finally, OPS is conducting studies of incident information to improve its understanding of the causes of incidents. According to OPS officials, most of these improvements will be implemented for 2002 data. According to the Safety Board and industry groups, OPS’ initiatives address the underlying data problems and will enable OPS to better understand the causes of incidents so the agency can focus its efforts to improve safety. However, officials from industry groups told us that it will be several years before OPS has sufficient data to analyze trends in incidents. Officials from the Safety Board also noted that these initiatives are merely a first step, and they emphasized that OPS should periodically reassess its forms and procedures and take steps to revise them as necessary. We are evaluating OPS’ data improvement initiatives as part of our ongoing work. OPS is allowing more states to help oversee a broader range of interstate pipeline safety activities. Although OPS relies heavily on state inspectors to oversee intrastate pipelines, it reduced its reliance on states to inspect interstate pipelines in the mid-1990s when it moved to a more risk-based, system-wide approach to inspecting pipelines. At that time, OPS believed it would be too difficult to coordinate participation by individual states in the new inspection process. However, in our May 2000 report, we found that allowing states to participate in interstate pipeline safety inspections could improve pipeline safety by increasing the frequency and thoroughness of inspections to detect safety problems. Additionally, state pipeline safety inspectors are likely to be familiar with pipelines in their jurisdictions and the potential risks faced by these pipelines. We recommended that OPS work with state pipeline safety officials to determine which activities would benefit from state participation and, for states that are willing to participate, integrate their activities into the safety program. We also recommended that OPS allow state inspectors to assist in reviewing the integrity management programs developed by the companies that operate in their states to help ensure that these companies have identified and adequately addressed safety risks to their systems. 
OPS responded to our recommendations in 2001 by encouraging more states to oversee the safety of interstate pipelines in their states. These states may perform a broad range of oversight activities, such as inspections of new construction, oversight of rehabilitation projects and integrity management programs, incident investigation, standard inspections, and participation in nonregulatory program initiatives. Other states that want to participate on a smaller scale may apply for specific, short-term projects, such as inspecting new pipeline construction projects. As of January 2002, 11 states—up from 8 in 2000—have been approved to participate in all oversight activities, and an additional 4 states have been approved to participate on short-term projects. OPS is increasing its use of fines for safety violations, thereby reversing a trend of relying more heavily on less severe corrective actions. From 1990 to 1998, OPS decreased the proportion of enforcement actions in which it proposed fines from about 49 percent to about 4 percent. During this time, the agency increased the proportion of warning letters and letters of concern from about 33 percent to about 68 percent. OPS made this change in order to place more emphasis on “partnering” to improve pipeline safety rather than on punishing noncompliance. As of May 2000, OPS could not determine whether this approach was effective in maintaining compliance with safety regulations. Consequently, we recommended that DOT determine whether OPS’ reduced use of fines had maintained, improved, or decreased compliance with pipeline safety regulations. According to OPS officials, the agency is not able to determine the impact of its compliance actions on safety as we recommended because it does not have sufficient data. Nevertheless, OPS concluded that its decreased reliance on fines was perceived negatively by the public and Congress, and that the letters of concern did not allow OPS to adequately address safety concerns. OPS subsequently changed its enforcement policy to make better use of its full range of enforcement tools, including increasing the number and severity of fines. According to OPS officials, the agency plans to collect data that will allow it to link its compliance actions with improvements in safety. We will follow up on OPS’ progress in this area during our current review. OPS is taking action on open recommendations from the Safety Board and statutory requirements, but has still not implemented important recommendations and requirements. In May 2000, we reported that OPS historically had the worst response rate—about 69 percent—of any transportation agency to Safety Board recommendations. These recommendations dealt with a variety of issues that are critical for pipeline safety, such as requiring operators to periodically inspect pipelines and install valves to shut down the pipeline in an emergency. Some of these recommendations were more than a decade old. OPS has been working to improve its responsiveness over the last several years by initiating activities in response to the recommendations and improving communications with the Safety Board. The Safety Board has been encouraged by OPS’ efforts to improve its responsiveness, particularly in the areas of excavation damage, corrosion control, and data quality. However, the Safety Board remains concerned about the amount of time OPS has been taking to implement recommendations. 
As of February 2002, OPS had not implemented 42 recommendations, several of which date from the late 1980s and deal with issues considered critical to pipeline safety, such as requiring operators to inspect their pipelines. OPS maintains that its progress is better than the Safety Board indicates. According to OPS officials, the majority of the recommendations deal with integrity management and excavation damage prevention, which the agency’s ongoing initiatives should fulfill before the end of 2002. We also reported in May 2000 that OPS had not implemented 22 out of 49 statutory requirements that were designed to improve pipeline safety. Similar to the open Safety Board recommendations, several of these unfulfilled requirements dated from the late 1980s and early 1990s and were related to important pipeline safety issues, such as internal inspections and identification of pipelines in populated or environmentally sensitive areas. Since May 2000, OPS has been working to complete these requirements. As of February 2002, 8 of the 22 requirements were closed as a result of OPS’ actions, 9 requirements were still open, and the remaining 5 were reclassified as “closed” because OPS considered them to be superseded by amendments or other requirements or because the agency did not believe it was required to take further action. OPS plans to fulfill the majority of the open requirements before the end of 2002. In our ongoing work, we are examining several issues that could affect OPS’ ability to implement its integrity management and data improvement initiatives and, ultimately, fulfill the Safety Board’s recommendations and statutory requirements. These issues include (1) performance measures for the integrity management approach, (2) sufficient resources and expertise to oversee operators’ integrity management programs, (3) consistent and effective enforcement of integrity management program requirements, and (4) requirements for integrity management programs for operators of gas transmission pipelines. Performance measures: In May 2000, we reported that OPS had not developed programwide performance measures for the Risk Management Demonstration Program, even though the act required such measures to demonstrate the safety benefits of the program. OPS still has not developed such measures. Despite the lack of quantifiable performance measures for the demonstration program, OPS moved forward with integrity management programs and faces the challenge of developing performance measures for this new approach to regulating pipeline safety. Such measures are essential to determine whether the new approach is successful and what improvements may be needed. However, OPS does not have a complete and viable database of information on pipeline incidents and an inventory of pipeline infrastructure on which to establish certain performance measures. OPS has taken steps to improve its data, but it may be several years before the agency can accumulate sufficient data to evaluate trends in the pipeline industry. Resources and expertise: Pipeline operators are in the best position to develop integrity management programs that are tailored to their pipelines; however, it is critical for OPS to have adequate resources and expertise to oversee the programs. 
After OPS issues a final rule on integrity management programs for natural gas transmission pipelines, the agency estimates that there will be more than 400 hazardous liquid and natural gas pipeline operators with individual programs in various stages of development. OPS must ensure that it has a sufficient number of inspectors to oversee these programs while maintaining its other oversight responsibilities. Moreover, while OPS has resolved to include states in reviewing and monitoring operators’ programs, the agency faces a challenge to determine how best to leverage federal and state resources and provide training to state inspectors. Furthermore, OPS’ integrity management initiative represents a fundamental shift in how it oversees the pipeline industry. Federal and state inspectors that are accustomed to using a checklist approach for inspecting pipelines for compliance with uniform regulations will have to be trained to evaluate programs that are unique to individual operators. For example, under the new requirements, operators may use a variety of inspection techniques to assess the safety of their pipelines. Inspectors must be familiar with all of these inspection techniques, know when it is appropriate to use them, and know how to interpret the results. Enforcement: The variability of individual operator programs will make it difficult for OPS to enforce the requirements of the integrity management program. OPS’ integrity management requirements for hazardous liquid pipelines allow pipeline operators flexibility to design and implement integrity management programs based on pipeline-specific conditions and risks. However, this flexibility will result in unique programs for each operator and require more judgment on the part of inspectors. To ensure that the program requirements are consistently and effectively enforced, OPS is developing a comprehensive set of inspection protocols that are intended to provide clear criteria to inspector staff for evaluating the adequacy of operator actions and making enforcement decisions. As noted previously, OPS believes its staff will need increased training and expertise to make these types of judgments.
The Office of Pipeline Safety (OPS) oversees 2.2 million miles of pipelines that transport potentially dangerous materials, such as oil and natural gas. OPS has been slow to improve its oversight of the pipeline industry and implement critical pipeline safety improvements. As a result, OPS has the lowest rate of any transportation agency for implementing the recommendations of the National Transportation Safety Board. In recent years, OPS has taken several steps to improve its oversight of the pipeline industry, including requiring "integrity management" programs for individual operators to assess their pipelines for risks, take action to mitigate the risks, and develop program performance measures. OPS has also (1) revised forms and procedures to collect more complete and accurate data, which will enable OPS to better assess the causes of incidents and focus on the greatest risks to pipelines; (2) allowed more states to oversee a broader range of interstate pipeline safety activities; and (3) increased the use of fines. OPS has made progress in responding to recommendations from the Safety Board and statutory requirements, but some key open recommendations and requirements, such as requiring pipeline operators to periodically inspect their pipelines, are now more than a decade old. OPS faces challenges that include (1) developing performance measures for the integrity management approach, (2) ensuring sufficient resources and expertise to oversee operators' integrity management programs, (3) providing consistent and effective enforcement of integrity management program requirements, and (4) issuing requirements for integrity management programs for operators of gas transmission pipelines.
During the 1990s, the demand for and supply of illegal drugs have persisted at very high levels and have continued to adversely affect American society in terms of social, economic, and health costs and drug-related violent crime. During the same period, funding for federal drug control efforts overall and for the Drug Enforcement Administration (DEA), which is dedicated to controlling the supply of illegal drugs, increased significantly. According to the Office of National Drug Control Policy (ONDCP), drug use and its consequences threaten Americans of every socioeconomic background, geographic region, educational level, and ethnic or racial identity. Drug abuse and trafficking adversely affect families, businesses, and neighborhoods; impede education; and choke criminal justice, health, and social service systems. A report prepared for ONDCP showed that drug users in the United States spent an estimated $57 billion on illegal drugs in 1995. Other costs to society include lost jobs and productivity, health problems, and economic hardships to families. ONDCP, in its 1999 National Drug Control Strategy, noted that illegal drugs cost our society approximately $110 billion each year. On the basis of the National Household Survey on Drug Abuse, the Substance Abuse and Mental Health Services Administration (SAMHSA) estimated that in 1997 there were 13.9 million current users of illegal drugs in the United States aged 12 and older, representing 6.4 percent of the population in that age group. As figure 1.1 shows, this number has fluctuated somewhat but has remained fairly constant overall since 1990, as have the numbers of current users of cocaine and marijuana, with 1.5 million cocaine users and 11.1 million marijuana users in 1997. As shown in figure 1.2, current drug use among youth rose significantly from 1992 to 1996. The trend then improved, with drug use declining for 8th and 10th graders in 1997 and 1998. Abuse of illegal drugs has serious consequences. For example, SAMHSA’s Drug Abuse Warning Network (DAWN) reported 9,310 drug-related deaths in 1996, an increase of 65 percent from the 5,628 deaths reported in 1990. The number of drug-related hospital emergency room visits reported to DAWN rose 42 percent from 1990 to 1997. There were 371,208 emergency room episodes in 1990 and 527,058 episodes in 1997. According to DEA and ONDCP, illegal drugs, including cocaine, heroin, marijuana, and methamphetamine, have inflicted serious damage and continued to threaten our nation during the 1990s. National and international drug trafficking organizations continued to bring these drugs into the United States, and certain illegal drugs are clandestinely produced in this country. Drug trafficking gangs and individuals dealing in drugs, as well as drug users, have caused violence in local communities. DEA considers cocaine to be the primary drug threat to the U.S. population. Cocaine use has remained at a relatively constant high level during the 1990s, as indicated by the National Household Survey on Drug Abuse. The National Narcotics Intelligence Consumers Committee reported that the use of “crack,” a potent and highly addictive form of cocaine that first became widely available in the 1980s, also remained at a high level in the 1990s. ONDCP reported in the summer of 1998 that crack was failing to attract new users, although established users persisted in using it. 
Regarding cocaine trafficking trends, DEA intelligence information shows that Colombian trafficking organizations, although more fragmented than in the past, continue to control the worldwide supply of cocaine. However, Mexican organizations have played an increasing role in the U.S. cocaine trade in the 1990s. The Southwest Border is now the primary entry point for cocaine smuggled into the United States. Heroin is readily available in major cities in the United States, and its use is on the rise in many areas around the country, according to DEA. ONDCP has noted that the increasing availability of high-purity heroin has made snorting and smoking more common modes of ingestion than injection, thereby lowering inhibitions to heroin use. DEA intelligence information indicates that the heroin available in the United States comes from Southeast Asia (principally Burma); Southwest Asia/Middle East (Afghanistan, Lebanon, Pakistan, and Turkey); Mexico; and South America (Colombia). Although Southeast Asian heroin dominated the U.S. market in the 1980s and into the 1990s, Colombian heroin emerged as a significant problem in the mid-1990s. In 1997, 75 percent of the heroin seized and analyzed in the United States was Colombian. DEA reported that independent Colombian drug traffickers established themselves in the U.S. heroin market by distributing high- quality heroin (frequently above 90-percent pure), undercutting the price of their competition, and using long-standing drug distribution networks. According to DEA, marijuana is the most readily available and commonly used illegal drug in the country. Further, a resurgence of marijuana trafficking and use has taken place in urban centers across the United States. ONDCP noted that this market is driven by a high level of demand, with users from virtually all age groups, demographic groups, and income levels. According to DEA intelligence information, most of the foreign marijuana available here is smuggled into the country across the Southwest Border. Mexican drug trafficking organizations are responsible for supplying most of the foreign marijuana, whether grown in Mexico or shipped through Mexico from other locations such as Colombia. Marijuana is also grown domestically in remote outdoor locations in the United States, including on public lands, and indoors. In the 1990s, major outdoor marijuana growths have been found in California, Florida, Hawaii, Kentucky, New York, Tennessee, and Washington. DEA uses the term “dangerous drugs” to refer to a broad category of controlled substances other than cocaine, opiates such as heroin, and cannabis products such as marijuana. The list of dangerous drugs includes drugs that are illegally produced; drugs legally produced but diverted to illicit use (e.g., pharmacy thefts, forged prescriptions, and illegal sales); as well as legally produced drugs obtained from legitimate channels (e.g., legally and properly prescribed). Some of the dangerous drugs are methamphetamine; lysergic acid diethylamide (LSD); phencyclidine (PCP); diazepam (Valium); and flunitrazepam (Rohypnol), commonly called the “date rape” drug. DEA reports that methamphetamine use has increased in the 1990s, resulting in a devastating impact on many communities across the nation. A powerful stimulant, methamphetamine is the most prevalent synthetic controlled substance clandestinely manufactured in the United States. 
Historically, methamphetamine has more commonly been used in the western United States, but its use has been spreading to other areas of the country. According to DEA, methamphetamine suppliers have traditionally been motorcycle gangs and other independent groups. However, organized crime groups operating in California, some with ties to major Mexico-based trafficking organizations, now dominate wholesale-level methamphetamine production and distribution in the United States. Mexican trafficking organizations use their well-established cocaine, heroin, and marijuana distribution networks to smuggle methamphetamine throughout the country. Although large-scale production of methamphetamine is centered in California, it is increasingly being produced in Mexico and smuggled into the United States.
Trafficking organizations have continued to supply domestic drug consumers despite short-term achievements by both federal and foreign law enforcement agencies in apprehending individuals and disrupting the flow of illegal drugs. When confronted with threats to their operations, drug trafficking organizations have become adept at quickly changing their modes of operation. For example, as we previously reported, when law enforcement agencies have successfully carried out efforts to intercept drugs being smuggled by aircraft, traffickers have increased their use of maritime and overland transportation routes. In another example, DEA reported that a 1989 drug enforcement operation, which involved the seizure of nearly 40 metric tons of cocaine, led to a new arrangement between Mexican transportation organizations and Colombian cocaine organizations. To reduce the complex logistics and vulnerabilities associated with large cash transactions, Mexican organizations started receiving part of the cocaine shipments they smuggled for the Colombians in exchange for their transportation services. By the mid-1990s, Mexican organizations were receiving up to one-half of a cocaine shipment as payment. This arrangement radically changed the role and sphere of influence of the Mexican organizations in the U.S. cocaine trade. By relinquishing part of each cocaine shipment, the Colombian organizations ceded a share of the U.S. cocaine market to the Mexican traffickers.
In addition, although overall violent crime has steadily declined during the 1990s, many of the violent crimes committed are drug-related, according to ONDCP. There are no overall quantitative data on drug-related violent crime and the relationship between drug abuse or trafficking and violent crime, but ONDCP has identified several qualitative indicators linking drug abuse or trafficking and other crimes, including violent crimes. According to ONDCP, many crimes (e.g., murder, assault, and robbery) are committed under the influence of drugs or may be motivated by a need for money to buy drugs. In addition, drug trafficking and violence often go hand in hand. Competition and disputes among drug dealers can cause violence, as can the location of drug markets in disadvantaged areas where legal and social controls against violence tend to be ineffective. In this regard, DEA reported in 1996 that violent drug gangs, which were once largely confined to major cities, had migrated to and/or emerged in rural areas and small cities throughout the country. One example cited was Vidalia, GA, where a violent crack cocaine gang was linked to numerous homicides and drive-by shootings.
Nevertheless, the Department of Justice (DOJ) reported that overall violent crime in the United States in 1997 had fallen more than 21 percent since 1993 and had reached its lowest level in at least 24 years. Similarly, the Federal Bureau of Investigation (FBI) reported in its 1997 Uniform Crime Reports that the murder rate in 1997 had declined 28 percent since 1993. It also reported that the number of drug-related murders decreased by 7 percent between 1996 and 1997. According to ONDCP, from fiscal years 1990 through 1999 the federal government spent about $143.5 billion, in constant 1999 dollars, on four functional areas that can be divided between two categories—(1) those that are aimed at reducing the demand for illegal drugs and (2) those that are aimed at reducing the availability or supply of such drugs in the United States. As figure 1.3 indicates, about 33.8 percent of the total funds were used for drug demand reduction. About 66.2 percent of the total funds were used for the three functional areas intended to reduce the drug supply, with the largest share—49.5 percent—dedicated to domestic law enforcement programs, 3.8 percent to international programs, and 12.9 percent to interdiction programs. As figure 1.4 indicates, total funds for federal drug control activities increased, in constant 1999 dollars, by about 49 percent—from about $12 billion to almost $18 billion—between fiscal years 1990 and 1999. However, funding trends varied for the four functional areas. Although funds for the drug demand reduction functional area generally increased steadily overall by 50 percent from about $3.9 billion in 1990 to about $5.8 billion in 1999, funding trends for the three drug supply reduction functional areas were mixed. Funds for domestic law enforcement programs increased steadily overall by about 66 percent from about $5.4 billion in 1990 to almost $8.9 billion in 1999. Funds for interdiction programs fluctuated within the time period, increasing overall by about 9 percent from about $2.2 billion in 1990 to almost $2.4 billion in 1999. Funds for international programs increased by 23 percent from 1990 to 1992, to a peak of $759.1 million; they then decreased by 60 percent to a low of $303.5 million in 1996; they rose by 163 percent to about $796.9 million in 1999. The Anti-Drug Abuse Act of 1988 (P.L. 100-690), as amended, established ONDCP to set federal priorities for drug control, implement a National Drug Control Strategy, and certify federal drug control budgets. The act specifies that the National Strategy must be comprehensive and research based; contain long-range goals and measurable objectives; and seek to reduce drug use (demand), availability (supply), and related consequences. ONDCP has produced annual strategic plans since 1989. These strategies recognized that no single approach could solve the nation’s drug problem; rather, drug prevention, education, and treatment must be complemented by drug supply reduction actions abroad, on our borders, and within the United States. Each strategy also shared a commitment to maintain and enforce antidrug laws. In 1998, ONDCP’s National Drug Control Strategy established performance targets to reduce illegal drug use and availability in the United States by 25 percent by the year 2002 and 50 percent by 2007. The strategy focuses on reducing the demand for drugs through treatment and prevention and attacking the supply of drugs through domestic law enforcement, interdiction efforts, and international cooperation. 
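The functional-area shares and percentage changes cited above for figures 1.3 and 1.4 follow directly from the constant-1999-dollar amounts. The sketch below (in Python) reproduces that arithmetic using the rounded dollar figures quoted in the text; because the inputs are rounded, the computed changes can differ by a point or two from the percentages reported above.

```python
# Illustrative check of the budget shares and growth rates cited for
# figures 1.3 and 1.4. Dollar amounts are the rounded, constant-1999-dollar
# figures quoted in the text, not the exact ONDCP budget data.

shares = {
    "demand reduction": 33.8,
    "domestic law enforcement": 49.5,
    "interdiction": 12.9,
    "international": 3.8,
}
supply_share = sum(v for k, v in shares.items() if k != "demand reduction")
print(f"Supply-reduction share: {supply_share:.1f} percent")  # about 66.2

# Percentage change from fiscal year 1990 to 1999, in billions of constant
# 1999 dollars (rounded figures from the text).
funding = {
    "total drug control": (12.0, 18.0),
    "demand reduction": (3.9, 5.8),
    "domestic law enforcement": (5.4, 8.9),
    "interdiction": (2.2, 2.4),
}
for area, (fy1990, fy1999) in funding.items():
    change = (fy1999 - fy1990) / fy1990 * 100
    print(f"{area}: up about {change:.0f} percent")
```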
ONDCP’s 1999 National Strategy includes the performance targets and 5 goals along with 31 supporting objectives intended to serve as the basis for a coherent, long-term national effort.
Goal 1: Educate and enable America’s youth to reject illegal drugs as well as tobacco and alcohol.
Goal 2: Increase the safety of America’s citizens by substantially reducing drug-related crime and violence.
Goal 3: Reduce the health and social costs to the public of illegal drug use.
Goal 4: Shield America’s air, land, and sea frontiers from the drug threat.
Goal 5: Break foreign and domestic sources of supply.
As discussed in detail in chapter 3, strategic goals 2, 4, and 5 address drug supply reduction and involve drug law enforcement activities, including those for which DEA is responsible. Goal 2 seeks, among other things, to reduce the rate of drug-related crime and violence in the United States by 15 percent by the year 2002 and achieve a 30-percent reduction by the year 2007. Goal 4 seeks a 10-percent reduction in the rate at which illegal drugs successfully enter the United States by the year 2002 and a 20-percent reduction in this rate by 2007. Goal 5 seeks a 15-percent reduction in the flow of illegal drugs from source countries by the year 2002 and a 30-percent reduction by 2007. The goal also seeks a 20-percent reduction in domestic marijuana cultivation and methamphetamine production by 2002 and a 50-percent reduction by 2007.
The mission of DEA, which is a component of DOJ, is to (1) enforce the drug laws and regulations of the United States and bring drug traffickers to justice and (2) recommend and support nonenforcement programs aimed at reducing the availability of illegal drugs in domestic and international markets. DEA is the lead agency responsible for federal drug law enforcement and for coordinating and pursuing drug investigations in foreign countries. According to DEA, its primary responsibilities for drug law enforcement include the following:
investigating major drug traffickers operating at interstate and international levels and criminals and drug gangs who perpetrate violence in local communities;
coordinating and cooperating with federal, state, and local law enforcement agencies on mutual drug enforcement efforts, including interstate and international investigations;
managing a national drug intelligence system in cooperation with other federal, state, local, and foreign agencies to collect, analyze, and disseminate strategic and operational drug intelligence information;
seizing and forfeiting drug traffickers’ assets;
coordinating and cooperating with federal, state, and local law enforcement agencies and foreign governments on programs designed to reduce the availability of illegal drugs on the U.S. market through nonenforcement methods, such as crop eradication, crop substitution, and the training of foreign officials; and
operating, under the policy guidance of the Secretary of State and U.S. Ambassadors, all programs associated with drug law enforcement counterparts in foreign countries.
To carry out its mission and responsibilities, DEA, along with its headquarters office, had 21 domestic field divisions throughout the United States and its territories, including Puerto Rico, as of December 1998. Subordinate to these divisions, each of which was headed by a Special Agent in Charge (SAC), were a total of 30 district offices, 115 resident offices, and 46 posts of duty in the United States, with at least 1 office in every state.
Overseas, DEA had 79 offices in 56 foreign countries. This included 56 country offices, each headed by a country attaché (CA), and 23 resident offices reporting to the country offices. (App. I contains profiles of the five field divisions and three country offices included in our review.) In addition, DEA manages a multiagency intelligence center in El Paso, TX; conducts training at Quantico, VA; and maintains seven drug analytical laboratories in various regions of the country and a special drug testing facility in McLean, VA. As shown in figure 1.5, DEA’s budget almost doubled, in constant 1999 dollars, from fiscal year 1990 to fiscal year 1999 and totaled about $11.3 billion during that period. Commensurate with its increased funding, as shown in figure 1.6, the size of DEA’s staff also increased during the 1990s by 40 percent—from 5,995 employees in 1990 to 8,387 employees in 1998. During this period, the number of intelligence specialists increased by about 116 percent, special agents by 40 percent, and other positions by 32 percent. As shown in figure 1.7, of the total on-board positions in fiscal year 1998, special agents made up about 51.4 percent, other positions about 41.8 percent, and intelligence specialists about 6.9 percent. The Chairmen of the House Judiciary Subcommittee on Crime and the Senate Caucus on International Narcotics Control requested that we determine (1) what major enforcement strategies, programs, initiatives, and approaches DEA has implemented in the 1990s to carry out its mission, including its efforts to (a) target and investigate national and international drug traffickers and (b) help state and local law enforcement agencies combat drug offenders and drug-related violence in their communities; (2) whether DEA’s strategic goals and objectives, programs and initiatives, and performance measures are consistent with the National Drug Control Strategy; and (3) how DEA determined its fiscal year 1998 staffing needs and allocated the additional staff. We did our review at DEA headquarters, as well as at DEA offices in five domestic field divisions and three foreign countries. We also obtained information from officials representing DOJ, ONDCP, the Office of Management and Budget (OMB), and the Department of State. Because the funding and other statistical data we collected from DEA and other agencies and used in this report were used primarily for background and descriptive purposes and were not directly related to our findings, conclusions, and recommendation, we did not independently validate or verify their accuracy and reliability. The five DEA domestic field division offices we visited are located in Los Angeles, CA; Miami, FL; New Orleans, LA; San Juan, Puerto Rico (the Caribbean Division); and Washington, D.C. We also visited one DEA district office located in Baltimore, MD, which is part of DEA’s Washington, D.C., Field Division. The three DEA foreign country offices we visited are located in La Paz, Bolivia; Bogota, Colombia; and Mexico City, Mexico. In Bolivia, we also visited resident offices in Santa Cruz and Trinidad and the Chimore base camp. (See app. I.) In Mexico, we also visited the Guadalajara Resident Office. These locations were judgmentally selected on the basis of geographic location, differences in the drug threat in these areas, and a variety of domestic and foreign drug enforcement operational characteristics. We also obtained information from the U.S. 
Attorneys’ Offices in Baltimore, Los Angeles, Miami, New Orleans, and Puerto Rico; from local police agencies in Baltimore; Los Angeles; South Miami, FL; Pine Bluff, AR; and Puerto Rico; and from State Department officials in Bolivia, Colombia, and Mexico. To determine what major enforcement strategies, programs, initiatives, and approaches DEA has implemented to carry out its mission in the 1990s, we collected and analyzed pertinent DEA, DOJ, and ONDCP documents. We also interviewed DEA headquarters officials, DEA officials in the selected domestic field offices and foreign country offices, and DOJ and ONDCP officials. We also collected ONDCP and DEA budget data for each fiscal year from 1990 to 1999, which we adjusted to constant 1999 dollars where appropriate. In addition, we obtained and analyzed DEA special agent work-hour statistics; case initiation data; statistical results (e.g., arrests, convictions, and seizures) of investigations; and other information relating to DEA’s enforcement programs. We did not evaluate the effectiveness of the individual strategies, programs, initiatives, and approaches discussed in chapter 2. To determine whether DEA’s strategic goals and objectives, programs and initiatives, and performance measures are consistent with the National Drug Control Strategy, we analyzed and compared DEA’s annual performance plans for fiscal years 1999 and 2000 with ONDCP’s 1998 and 1999 (1) National Drug Control Strategies, (2) National Drug Control Budget Summaries, and (3) Performance Measures of Effectiveness reports. We also reviewed DOJ’s (1) Strategic Plan for 1997-2002, (2) performance plans for fiscal years 1999 and 2000, and (3) Drug Control Strategic Plan. We interviewed DEA and ONDCP officials about their plans and how they interrelated. We used the Government Performance and Results Act as our basic criteria along with OMB, DOJ, and our guidance on the act, including OMB Circular A-11 and our guides for assessing agency annual performance plans and strategic plans. To determine how DEA’s fiscal year 1998 staffing estimates for its enforcement programs and initiatives were developed, we collected and analyzed information on the process, criteria DEA used, and staffing recommendations made during DEA’s budget formulation process. We interviewed DEA officials in headquarters and in the selected domestic field offices and foreign country offices, as well as DOJ, ONDCP, and OMB officials to obtain information on how staffing needs were determined and how the budget review process affected staffing estimates, requests, and allocations. We reviewed documents dealing with staffing recommendations and allocations; policies and procedures; DOJ, ONDCP, and OMB budget reviews; and congressional appropriations. We performed our work from December 1997 to May 1999 in accordance with generally accepted government auditing standards. In June 1999, we provided a draft of this report to the Attorney General and the Director of ONDCP for comment. We also provided relevant sections of the report to OMB and State Department officials for a review of the facts that pertain to those agencies. We received written comments on June 23, 1999, from the Deputy Administrator, DEA, which are discussed in chapters 2 and 3 and reprinted in appendix II. In addition, DEA provided a number of technical changes and clarifications, which we have incorporated throughout the report where appropriate. 
On June 21, 1999, ONDCP’s Acting Deputy Director, Office of Legislative Affairs, orally informed us that ONDCP reviewed the report from a factual standpoint because the overall conclusions and recommendation are directed at DEA. He stated that the report was factually correct from ONDCP’s perspective and provided a few technical clarifications, which we incorporated where appropriate. On June 21, 1999, OMB officials responsible for examining DEA’s budget orally communicated a few technical comments, which we incorporated in chapter 4. The State Department’s liaison for GAO informed us on June 21, 1999, that the Department had no comments.
Since its creation in 1973, DEA has focused its efforts primarily on investigating the highest levels of national and international illegal drug trafficking. In addition, DEA has supported state and local law enforcement efforts directed at the lower levels of drug trafficking. In the 1990s, however, DEA revised its strategy to focus its operations on what it refers to as the “seamless continuum” of drug trafficking, from international drug trafficking organizations residing outside the United States to local gangs and individuals illegally selling drugs on city streets. Consequently, during the 1990s, DEA gave a higher priority than in the past, and devoted increased resources, to working with and assisting state and local law enforcement agencies, including starting a new program to help combat drug-related violent crime in local communities.
Concurrently, in the 1990s, DEA made the following enhancements to its already high-priority enforcement operations directed at national and international drug trafficking organizations.
DEA established the Kingpin Strategy, which evolved into the Special Operations Division (SOD), placing greater emphasis on intercepting communications between top-level drug traffickers and their subordinates (i.e., attacking the “command and control” communications of major drug trafficking organizations) to dismantle their entire trafficking operations.
DEA started participating in two interagency programs to target and investigate major drug trafficking organizations in Latin America and Asia.
DEA helped establish, train, and fund special foreign police units to combat drug trafficking in certain key foreign countries, primarily in Latin America.
Since its establishment, DEA has directed its resources primarily toward disrupting or dismantling major organizations involved in interstate and international drug trafficking. DEA has concentrated on investigating those traffickers functioning at the highest levels of these enterprises, often by developing conspiracy cases for U.S. Attorneys to prosecute and seizing the traffickers’ assets. Federal drug control policymakers considered this investigative approach to be the most effective for reducing the illegal drug supply in the United States. Consistent with this approach, DEA’s operational strategy in the early 1990s was to identify and exploit trafficker vulnerabilities and to disrupt or dismantle their organizations by conducting investigations leading to (1) the prosecution, conviction, and incarceration of leaders and key players in drug organizations and (2) the seizure and forfeiture of the assets of these organizations.
DEA’s enforcement operations were to focus and apply pressure in four principal areas: source (production overseas and in the United States), transit (smuggling of drugs and essential chemicals), domestic distribution (sales of illegal drugs in the United States), and proceeds (money and assets derived from distribution). DEA operations were also to control the distribution of chemicals used to manufacture illegal drugs and prevent the diversion of legally produced controlled substances. In 1994, the DEA Administrator undertook a review of DEA’s policies and strategies to ensure that DEA was appropriately responding to the drug trafficking problem and related violent crime. The results of the review included recommendations by DEA SACs and senior managers that DEA refocus its investigative priorities by increasing its efforts against domestic drug trafficking, including violent drug organizations, street gangs, local impact issues, regional trafficking organizations, and domestically produced illegal drugs, while, at the same time, continuing to investigate major national and international trafficking organizations. As a result, in 1995, the DEA Administrator established the Mobile Enforcement Team (MET) Program, which focuses a small percentage of DEA’s resources on drug-related violent crime in local communities. Then, in a 1997 memorandum to DEA’s field offices, he indicated that over the next 5 years DEA was to focus its operations on the “seamless continuum” of the organized crime systems that direct drug trafficking, with agencywide programs and initiatives directed at major regional, national, and international cases; violent drug organizations, gangs, and local impact issues; and domestically cultivated and manufactured illegal drugs. According to DEA, the international aspects of drug trafficking cannot be separated from the domestic aspects because they are interdependent and intertwined. The operations of major trafficking organizations can involve the cultivation and production of drugs in foreign countries, transportation to the United States, and eventual distribution on city streets. Accordingly, the Administrator emphasized that DEA was to target the highest level drug traffickers and their organizations, as well as violent, street-level drug gangs operating in communities. To implement this strategy, DEA was to pursue a vigorous international enforcement program, while domestically using the MET Program and other enforcement approaches to combat the threat and impact of drugs in local communities. The Administrator cited cooperation with other agencies as a guiding principle for all aspects of DEA’s international operations and domestic operations, which included assisting state and local law enforcement agencies with their most serious drug and drug-related violence problems. Although DEA has always worked formally and informally with state and local law enforcement agencies, it increased its involvement in, and devoted more resources to, task forces and other multiagency operations with state and local law enforcement agencies in the 1990s. Major DEA programs for working with and assisting state and local police on multiagency operations are the State and Local Task Force Program, MET Program, and Domestic Cannabis Eradication/Suppression Program. 
Through its State and Local Task Force Program, which originated in 1970 with DEA’s predecessor agency, DEA coordinates with state and local law enforcement agencies, shares information, participates in joint investigations, and shares assets forfeited federally as a result of cases made against drug dealers. In addition, state and local officers often receive drug investigation training and enhanced drug enforcement authority. Throughout the 1990s, DEA substantially increased the number of its state and local task forces and the number of special agents assigned to them. DEA’s budget for state and local task forces also increased substantially during this period. Table 2.1 shows DEA’s budget for its state and local task forces and the number of task forces in fiscal years 1991 through 1999. As the table indicates, DEA spent $45.7 million, in constant 1999 dollars, on this program in fiscal year 1991 and budgeted $105.5 million for fiscal year 1999. The total number of DEA-sponsored state and local task forces increased by about 90 percent during these years. Similarly, as shown in table 2.2, the number of special agents assigned to DEA-sponsored state and local task forces increased by about 84 percent between fiscal years 1991 and 1998, while the number of assigned state and local law enforcement officers increased by about 34 percent during the same time period. About 22 percent of DEA’s 4,309 special agents in fiscal year 1998 were assigned to state and local task forces, compared to about 14 percent of the total 3,542 special agents at DEA in fiscal year 1991. The amount of time spent by DEA special agents overall on state and local task forces also increased steadily in the 1990s. DEA special agents spent about 19.5 percent of all domestic investigative work hours on these task forces in fiscal year 1998 compared to about 9.2 percent during fiscal year 1990. Table 2.3 shows the number of cases, arrests, convictions, asset seizures, and drug seizures that resulted from the state and local task forces in fiscal years 1991 through 1998. DEA categorizes each case as local, regional, domestic, foreign, or international, according to the geographic scope covered. As shown in table 2.4, a little over 50 percent of the task force cases initiated in fiscal years 1997 and 1998 were local and regional. They involved suspected drug violators operating in the geographic areas covered by the DEA offices conducting the investigations, and most of them were local violators. Less than 10 percent of the task force cases targeted people suspected of drug trafficking on an international scale. The following are examples of state and local task force investigations conducted by the Los Angeles; Miami; New Orleans; and Washington, D.C., field divisions, respectively, that DEA considered to be successful. DEA’s task force in Santa Ana, CA, targeted a Mexican methamphetamine manufacturing and distribution organization operating throughout southern California. The investigation employed wiretaps of nine telephones, along with extensive surveillance, use of informants, and other investigative techniques. 
Primarily on the basis of information from the wiretaps, the task force conducted 11 raids in 4 cities resulting in the arrest of 27 Mexican nationals on state drug charges and the seizure of 26 pounds of methamphetamine, 25 gallons of methamphetamine in solution form (estimated to be the equivalent of 50 to 100 pounds of methamphetamine), 100 pounds of ephedrine in powder form, an estimated 164 pounds of ephedrine in solution form, other chemicals used to manufacture methamphetamine, 3 ounces of cocaine, and about $93,000 in cash. Two methamphetamine laboratories and four ephedrine extraction laboratories were seized and dismantled.
In Operation Emerald City, DEA, along with state and local law enforcement agencies and state regulatory agencies, targeted a drug trafficking organization that was selling drugs in a Riviera Beach, FL, bar in 1997. Some of the biggest known drug dealers in the greater West Palm Beach area had used the bar and its attached property for drug dealing since the 1960s; and numerous murders, stabbings, drive-by shootings, and robberies had occurred there over the years. Two leaders of one drug organization had been arrested, and another organization had taken control of the bar. DEA obtained a court order allowing surreptitious entry of the bar and installation of covert closed circuit television cameras to record drug transactions, identify traffickers, and afford undercover officers the ability to buy drugs while under constant camera surveillance. As a result, 38 people were arrested for violations of federal and state drug laws. In addition, the bar’s liquor license was revoked, and the business and surrounding property were forfeited to the government.
DEA’s REDRUM (“murder” spelled backward) task force group, which focuses on drug and drug-related homicide cases, targeted a violent heroin trafficking organization operating in New Orleans, LA. DEA conducted the investigation jointly with the New Orleans Police Department Homicide Division and the FBI. As a result of the investigation, which included wiretaps, surveillance, and debriefings of informants, 13 individuals were indicted on a variety of federal drug, firearms, and murder charges. Ten defendants pled guilty prior to trial, and the head of the organization was found guilty of all charges and received a life sentence. In addition, five homicides in the city of New Orleans were solved, and 359 grams of heroin and $60,000 in drug-related assets were seized. According to DEA, the investigation had a significant local impact by reducing violent crime and disrupting the flow of heroin into New Orleans.
DEA’s Richmond District Office City Strike Force in Virginia learned about a trafficking organization bringing Colombian heroin to the Richmond and Columbus, OH, areas from New York. Through the arrest and debriefing of drug couriers, the task force obtained evidence regarding the organization’s distribution of more than 100 pounds of Colombian heroin in the Richmond area over approximately 1 year’s time. The task force arrested the 2 heads of the organization and 24 co-conspirators. Three kilograms of Colombian heroin and $17,000 were seized.
In February 1995, DEA established the MET Program to help state and local law enforcement agencies combat violent crime and drug trafficking in their communities, particularly crime committed by violent gangs.
This was consistent with the Attorney General’s Anti-Violent Crime Initiative, which was initiated in 1994 to establish partnerships among federal, state, and local law enforcement agencies to address major violent crime problems, including gangs. The MET Program was also consistent with ONDCP’s 1995 National Drug Control Strategy, which cited the program as an example of how federal agencies would help state and local agencies address drug trafficking and associated violence. According to DEA officials, federal assistance through the MET Program was designed to help overcome two challenges facing state and local agencies in drug enforcement: State and local police agencies did not have sufficient resources to effectively enforce drug laws. Local law enforcement personnel were known to local drug users and sellers, making undercover drug buys and penetration of local distribution rings difficult and dangerous. Unlike DEA’s traditional State and Local Task Force Program, previously discussed, in the MET Program, upon request from local officials, DEA deploys teams of special agents (referred to as METs) directly to communities affected by drug-related violence. The METs are based in DEA field divisions throughout the country. The METs are to work cooperatively with the requesting local law enforcement agency—sharing intelligence, coordinating activities, and sometimes combining staff and other resources—to target drug gangs and individuals responsible for violent crime. Since its creation in fiscal year 1995, funds for the MET Program have totaled about $173 million in constant 1999 dollars. Table 2.5 shows the MET budget, the number of active METs, and the number of agents authorized for those METs from the inception through fiscal year 1999. A typical MET is made up of 8 to 12 DEA special agents. Each MET operation starts with a request to the local DEA field office from a police chief, sheriff, or district attorney for assistance in dealing with drug-related violence. DEA then evaluates the scope of the problem and the capability of local law enforcement to address it. Each assessment is supposed to give particular attention to the violent crime rate in the requesting community and the impact of the identified drug group on the violence occurring there. Once DEA decides to deploy a MET, an action plan is to be developed, including identification of the suspects to be targeted. Following this initial planning, the MET is to conduct the deployment outfitted with the necessary surveillance and technical equipment. During a deployment, the MET is to work with the local law enforcement officials to investigate and arrest targeted violent drug offenders. According to DEA officials, the MET generally collects intelligence, initiates investigations, participates in undercover operations, makes arrests, seizes assets, and provides support to local or federal prosecutors. Evidence developed in MET investigations may also be used to prosecute the same individuals for related crimes, including murder, assault, or other acts of violence. According to DEA, each MET deployment plan establishes a time frame of between 90 and 120 days for completing the deployment. At the time of our review, DEA had 24 METs in 20 of its 21 domestic field divisions (with the Caribbean Division being the only exception). Table 2.6 shows the number of MET deployments and their results from the program’s inception in fiscal year 1995 through fiscal year 1998. 
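The deployment examples that follow cite DEA post-deployment assessments, which compare crime counts in the targeted area for the 6 months after a deployment ended with the 6 months before it began; the cumulative program-level percentages discussed later in this chapter rest on the same before-and-after comparison. The sketch below (in Python) illustrates that calculation and one plausible way of pooling results across deployment areas; the crime counts are invented examples, not DEA data, and the text does not specify exactly how DEA aggregated its figures.

```python
# Sketch of the before/after comparison used in DEA's post-deployment
# assessments: crime counts for the 6 months after a deployment versus the
# 6 months before it began. Counts below are invented examples, not DEA data.

def percent_change(before, after):
    return (after - before) / before * 100

# One hypothetical deployment area, by violent crime category.
area = {"murders": (10, 8), "robberies": (120, 100), "aggravated assaults": (300, 290)}
for crime, (before, after) in area.items():
    print(f"{crime}: {percent_change(before, after):+.0f} percent")

# One plausible cumulative measure pools the counts across areas before
# computing the change (how DEA aggregated its figures is not specified).
areas = [{"murders": (10, 8)}, {"murders": (5, 6)}, {"murders": (12, 9)}]
before_total = sum(a["murders"][0] for a in areas)
after_total = sum(a["murders"][1] for a in areas)
print(f"Pooled change in murders: {percent_change(before_total, after_total):+.0f} percent")
```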
The DEA offices in Los Angeles, Miami, New Orleans, and Washington, D.C., had completed 33 MET deployments at the time we made our visits during 1998. In conducting our work at these offices, we spoke with local police officials in selected cities—Baltimore, MD; Los Angeles, CA; Pine Bluff, AR; and South Miami, FL—who had requested MET deployments. The officials said they were pleased with the results of the MET deployments and that the accomplishments met their expectations. Following are summaries of the MET deployments in these four cities. A MET deployment conducted for about 2-1/2 months during 1996 in the Rampart area of Los Angeles targeted six violent street gangs: 18th Street, Mara Salvatruches, Orphans, Playboys, Crazy Riders, and Diamonds. In addition to the MET, the Los Angeles Police Department, the Bureau of Alcohol, Tobacco and Firearms, and the California Department of Corrections participated. The operation resulted in 421 arrests, including 144 by DEA’s MET, and seizures of 1,200 grams of heroin, 630 grams of cocaine, 104 pounds of marijuana, 28 weapons, and $70,000 in currency. According to a Los Angeles Police Department official, crime statistics (e.g., homicides and aggravated assaults) in the Rampart area fell immediately after the MET deployment was completed. However, DEA’s post-deployment assessment of violent crimes in the area, comparing the 6 months after the deployment ended to the 6 months prior to the deployment, showed that homicides increased from 42 to 47, aggravated assaults increased from 738 to 1,008, robberies increased from 1,091 to 1,275, and sex crimes increased from 130 to 207. In addition, the assessment indicated that drug sales appeared to increase shortly after the deployment was completed, approaching levels observed before the deployment started. It further indicated that a significant number of drug dealers reportedly changed the location of their distribution activities while remaining in the Rampart area. The South Miami Police Department requested DEA support in going after crack cocaine dealers. Initially, the police collected intelligence, purchased drugs, and arrested 17 street-level drug dealers. DEA’s MET then became involved. The MET bought crack cocaine from drug dealers who came out to sell the drugs after the initial arrests had been made. The MET, which deployed for about 3 months in 1998, made 13 arrests (7 for federal prosecution and 6 for state prosecution) and seized 386.7 grams of crack cocaine. South Miami police officials said some of those arrested had committed weapons violations and some violence (e.g., assaults, batteries, and armed robberies) in the past. According to DEA’s post-deployment assessment of violent crimes in the area, comparing the 6 months after the deployment ended to the 6 months prior to the deployment, homicides decreased from 1 to 0, assaults decreased from 11 to 4, aggravated batteries decreased from 9 to 3, and robberies decreased from 14 to 5. The assessment noted that illegal drug activity in the South Miami area had been greatly reduced. The assessment further noted that the availability of crack cocaine in the area had been reduced, as well as drug distribution in surrounding areas. A MET deployment conducted for 6 months during 1997 in Pine Bluff targeted violent organizations dealing in crack cocaine and methamphetamine. A total of 46 people were arrested, including 15 who were indicted by a federal grand jury. Four people were suspects in four different homicides. 
Six ounces of crack cocaine were seized, along with 15 vehicles and $118,026. The Pine Bluff Police Chief told us that the MET deployment had successfully helped reduce both the city’s homicide rate and its crack cocaine problem. However, DEA’s post-deployment assessment of violent crimes in the area, comparing the 6 months after the deployment ended to the 6 months prior to the deployment, showed that homicides increased from 4 to 7, assaults increased from 677 to 1,098, rapes increased from 44 to 51, and robberies increased from 168 to 190.
A MET deployed in eastern Baltimore for 5 months during 1997 successfully targeted a drug organization—the M&J Gang—in a housing project. This case resulted in 81 arrests, many of which were already “in the works” before the MET deployment, according to Baltimore police officials we contacted. The officials said the MET deployment also reduced violent crime in that area. DEA’s post-deployment assessment showed fewer major crimes overall in the area during the 6-month period starting about 1 month before the deployment ended compared to the 6 months prior to the deployment. Shootings decreased from 527 to 389, aggravated assaults decreased from 3,273 to 2,950, robberies decreased from 4,888 to 3,889, burglaries decreased from 5,775 to 5,684, larcenies decreased from 16,475 to 15,976, and stolen automobiles decreased from 4,851 to 3,213. However, murders increased from 143 to 156, and rapes increased from 175 to 202. DEA’s assessment also reported that narcotic activity in two areas covered by the deployment was significantly reduced, and gang-related criminal activity had decreased in the sections of Baltimore controlled by the M&J Gang and another drug gang.
DEA officials we met with in the four field division offices told us that although MET deployments typically focused on local violent drug offenders, they sometimes led to investigations of higher level drug traffickers. In response to our request for quantitative data on this, a DEA headquarters official responded that it is difficult for DEA to provide statistics on the exact number of deployments that have led to investigations of higher level drug traffickers through MET operations because this information was not systematically maintained in an automated database. However, DEA did provide us with some examples of MET deployments that led to higher level drug traffickers. For example, DEA reported that a MET deployed from DEA’s Phoenix office identified connections in Mexico to MET targets in Lake Havasu City, AZ, and family members in southern California. The resulting intelligence revealed that this Mexican-based organization was responsible for smuggling precursor chemicals from Mexico to clandestine laboratory sites in southern California, where methamphetamine was produced. Multiple-pound quantities of methamphetamine were transported to Lake Havasu City. According to DEA, the organization was also transporting large shipments of methamphetamine to Oregon, Washington, Colorado, and New Mexico.
As shown in table 2.7, the MET deployments conducted in fiscal years 1995 through 1998 resulted in mostly local and regional drug cases. Local and regional cases involve suspected drug violators operating in the geographic areas covered by the DEA offices conducting the investigations. As expected, given the MET program’s purpose, local cases were the single largest category, making up about 47 percent of the total during that period.
About 6 percent of the MET cases involved criminals trafficking in drugs on an international scale. After completing a MET deployment, DEA may carry out efforts to help a community maintain a lower level of drug trafficking and violent crime. For example, DEA offers follow-up training to those communities carrying out drug demand reduction activities. In addition, if feasible, DEA may respond to a request for the re-deployment of a MET to prevent drug-related violent crimes from resurging to the level that existed prior to the initial deployment.
In a June 1998 memorandum to all SACs of domestic field offices, DEA headquarters directed that the field divisions proactively promote the MET Program to increase the number of requests for deployments. The memorandum stated that despite the MET Program’s success, much more could and should be done to stimulate interest in the program on the part of state and local law enforcement agencies. The DEA offices were instructed to collect crime statistics in specific areas within their geographic boundaries to determine the existence of drug trafficking problems related to violent crime. After making such determinations, the DEA SAC or a designated assistant was to contact the local police chief or sheriff in those areas to explain the benefits of a MET deployment in their jurisdictions and inform them of the availability of MET resources. The memorandum noted that each SAC or assistant was expected to visit local police officials each month. The memorandum further noted that each office’s proactive MET Program activities would be a significant factor in the SACs’ annual performance appraisals. According to DEA, the proactive contacts have generated numerous additional requests for MET assistance, and the majority of recent deployments have been the result of such proactive contacts.
With regard to the results of the MET Program, DEA officials reported decreases in the number of murders, robberies, and aggravated assaults in local areas covered by the program based on an analysis of local crime statistics gathered from the targeted geographic locations before and after 133 deployments that had been completed as of April 20, 1999. According to DEA, its post-deployment assessments cumulatively showed a 12-percent decline in murders, 14-percent decline in robberies, and 6-percent decline in aggravated assaults in the 133 deployment areas during the 6 months after the deployments ended when compared to the 6 months prior to the deployments. Further, DEA’s analysis showed that 28 of the 133 deployment areas had decreases in all 3 major violent crime categories (i.e., murder, robbery, and aggravated assault) during the 6 months after the deployments ended, while only 5 of the areas had increases in all 3 crime categories. (These five deployments included two of the examples summarized above). In commenting on these results, DEA noted that the effectiveness of MET deployments in removing a specific, targeted violent drug gang, for example, cannot by itself eliminate a community’s drug trafficking problems because DEA cannot continue to control the deployment areas to prevent other drug dealers from filling the void that a MET deployment might have created.
DEA started assisting state and local law enforcement agencies in their efforts to control domestically grown marijuana in 1979, when it helped agencies in California and Hawaii.
DEA’s Domestic Cannabis Eradication/Suppression Program was established in 1982 to more formally help the states eradicate domestic marijuana while building cases leading to the arrest and prosecution of growers. The program became active in all 50 states in 1985. To implement the program, DEA provides funds to state and local law enforcement agencies. The funds are to be used by these agencies for program expenses, such as aircraft rentals and fuel, vehicles, equipment, supplies, and overtime payments for state and local officers working on eradication operations. As table 2.8 indicates, funds provided by DEA for the program increased about 177 percent, in constant 1999 dollars, from fiscal year 1990 to fiscal year 1999. DEA encourages state and local agencies to assume the major responsibility for eradicating domestic marijuana. In coordinating the program in each state, DEA is to assist efforts to detect and eradicate marijuana plants (including coordinating the support of other agencies, arranging for needed equipment, and helping with surveillance); exchange intelligence; investigate marijuana trafficking organizations; and provide training. Table 2.9 shows the statistical results of DEA’s Domestic Cannabis Eradication/Suppression Program for 1990 through 1998. In addition to its State and Local Task Force, MET, and Domestic Cannabis Eradication/Suppression Programs, DEA also participates in other multiagency task force operations involving state and local law enforcement agencies. These include the following: The Organized Crime Drug Enforcement Task Force (OCDETF) Program is coordinated by U.S. Attorneys. This program is designed to promote coordination and cooperation among federal, state, and local law enforcement agencies involved in drug enforcement in each task force region. The goal of the OCDETF Program is to identify, investigate, and prosecute members of high-level drug trafficking organizations and related enterprises. In fiscal year 1998, DEA sponsored 847, or 62 percent, and participated in 1,096, or 81 percent, of the 1,356 OCDETF investigations that were initiated. DOJ reimburses DEA for its expenditures on OCDETF investigations. For fiscal year 1998, DEA was reimbursed $94.4 million for the OCDETF Program. The High Intensity Drug Trafficking Area (HIDTA) Program is administered by ONDCP. The mission of the HIDTA Program is to coordinate drug control efforts among federal, state, and local agencies in designated areas in order to reduce drug trafficking in critical regions of the United States. At the time of our work in September 1998, ONDCP had designated 20 areas as HIDTAs. According to ONDCP, a HIDTA organization typically consists of a major task force led by federal agencies, drug and money laundering task forces led by state or local agencies, a joint intelligence center and information-sharing network, and other supporting initiatives. DEA receives funds from ONDCP based upon its participation in the HIDTA Program. For fiscal year 1998, DEA received $14.8 million in direct HIDTA funding. Since it was established, DEA’s highest priority has been to investigate major drug trafficking organizations, both domestic and foreign, responsible for supplying illegal drugs consumed in the United States. Over the years, DEA has adopted various techniques for focusing its efforts on such investigations. 
In 1992, DEA started using an investigative approach designed to identify and target drug kingpins and their supporting infrastructures, primarily through the use of wiretaps and other types of electronic surveillance within the United States and the use of intelligence information. DEA called this approach the Kingpin Strategy. This approach, which has led to the dismantling or disruption of major trafficking organizations, was later adopted by SOD when it was established in 1995. More recently, DEA has established the Regional Enforcement Team (RET) initiative to address regional, national, and international drug trafficking in small towns and rural areas within the United States.
Developed in 1992, the Kingpin Strategy targeted major Colombian cocaine and Southeast and Southwest Asian heroin trafficking organizations. This strategy was DEA’s top priority and its primary enforcement approach for addressing the national priority of reducing the availability of illegal drugs in the United States. The Kingpin Strategy primarily targeted cocaine trafficking organizations operating out of Medellin and Cali, Colombia, with most of its focus on one organization referred to as the Cali cartel. According to DEA, the heads of the Colombian organizations tightly controlled all aspects of their operations and telephoned subordinates to give directions. DEA concluded that this was a weakness in the operations of these organizations. DEA decided to exploit this weakness by monitoring their communications and analyzing telephone numbers called to identify the kingpins and their key subordinates for U.S. and/or foreign investigation, arrest, and prosecution and for seizure of their domestic assets.
The Office of Major Investigations at DEA headquarters was responsible for implementing the Kingpin Strategy. Various intelligence, financial, and operational functions were consolidated within this office to facilitate focusing DEA’s investigative resources and capabilities on targeted kingpin organizations (TKO). The office disseminated tips and leads, collected from intelligence sources worldwide, to help agents in the field carry out investigations and enforcement activities. The office centrally directed, coordinated, oversaw, and funded investigations that were being carried out in multiple U.S. cities and foreign countries in cooperation with state, local, and foreign police.
DEA’s SOD and its investigative approach evolved out of the Office of Major Investigations and the Kingpin Strategy. According to DEA, the Kingpin Strategy was enhanced by the creation of SOD as a separate division. SOD was established at DEA headquarters in August 1995 and given its own budget and additional staff. As with the Kingpin Strategy, SOD’s approach focuses on the command and control communications of major drug trafficking organizations. However, a major difference is that its scope was expanded beyond Colombian cocaine and Southeast and Southwest Asian heroin trafficking organizations to coordinate and support investigations of major organizations trafficking in methamphetamine and Colombian heroin, as well as organizations trafficking illegal drugs along the Southwest Border. SOD’s primary emphasis currently is on Colombian and Mexican organizations responsible for smuggling illegal drugs into the United States. Another major difference from the Kingpin Strategy is that representatives from other law enforcement agencies, including the FBI and U.S. Customs Service, are detailed to SOD.
The FBI has had agents detailed to SOD since 1995, and the Deputy SAC of SOD is an FBI agent. Similarly, the Customs Service has detailed agents to SOD since 1996. Most of SOD’s workload supports cases being conducted by DEA field offices. However, the SOD SAC told us that the workload was increasingly supporting FBI and Customs Service cases. The intelligence agencies and the Department of Defense (DOD) also participate by providing drug intelligence to SOD. In addition, DOJ’s Narcotics and Dangerous Drugs Section participates by providing legal advice to SOD on investigations.
Like the Kingpin Strategy, SOD’s investigative approach and initiatives are to support domestic and foreign investigations of major drug traffickers and trafficking organizations in two principal ways. First, SOD is to disseminate tips and leads collected from intelligence sources worldwide to help agents in the field carry out investigations and enforcement activities. Second, SOD is to assist agents in building and coordinating multijurisdictional drug conspiracy cases that are based primarily on the use of wiretaps. Multijurisdictional efforts, such as Operation Reciprocity (described later) with 35 wiretaps in 10 U.S. cities, can involve many different individual investigations across the country. In May 1999, the SOD SAC estimated that SOD was supporting and coordinating about 240 cases throughout the United States. He said that SOD typically had approximately six to eight ongoing major operations at any one time, each having multiple related cases. Similar to the Kingpin Strategy, SOD does not control the cases that it supports; rather, decisionmaking on cases is left to field supervisors and agents. According to DEA officials, if SOD determines that field offices in different parts of the country are conducting investigations related to the same major drug trafficking organization, it attempts to bring the responsible agents together to develop the best cases for prosecution. In so doing, it is to coordinate and guide the agents’ efforts, including their intelligence and electronic surveillance operations, and assist with intelligence collection and analysis.
SOD essentially funds the same types of investigative activities as the Office of Major Investigations funded under the Kingpin Strategy. According to the SOD SAC, SOD provides funds to DEA field offices primarily for conducting electronic surveillance in support of investigations. It also funds payments for informants and drug purchases if doing so is essential to an investigation. However, he said it does so only when an electronic surveillance is being conducted or planned and only in connection with an ongoing case with which SOD is involved. (For example, in the course of an investigation, an agent may acquire a phone number that is determined to be connected with a current SOD-funded investigation.) SOD does not fund individual FBI and Customs drug investigations, but it does support some of those investigations through its various activities.
SOD is responsible for the oversight of and guidance for DEA’s Title III (electronic surveillance) program. In Title III of the Omnibus Crime Control and Safe Streets Act of 1968, Congress set forth the circumstances under which the interception of wire and oral communications may be authorized (P.L. 90-351, 18 U.S.C. 2510, et seq.).
SOD is to help special agents in the field focus their intercept operations on the best available targets, choose the best telephone numbers for intercept, correctly conduct the intercepts, make the best use of collected information, and make the most efficient use of transcribers and translators. SOD also is to send teams to the field to assist special agents with their wire intercept operations. According to DEA officials and data, since the Kingpin Strategy and SOD initiatives have been in operation, DEA has greatly increased the number of wiretaps and other electronic surveillances it conducts. The number of electronic surveillance court orders requested and conducted by DEA, as shown in table 2.10, increased by 183 percent; and the number of facilities (e.g., telephone, pager, and fax machine) covered by the orders increased by 158 percent, from fiscal year 1990 to fiscal year 1998. Most noteworthy is that the number of orders increased by 30 percent between fiscal years 1991 and 1992 after the Kingpin Strategy was initiated, and they increased by 65 percent between fiscal years 1995 and 1996 after SOD was established. DEA’s Special Intelligence Division is to support SOD’s operations by collecting, analyzing, and disseminating intelligence and other information from a variety of sources. For example, the unit is to analyze and disseminate information from telephone records and access and disseminate information from DEA, FBI, and Customs computerized drug intelligence systems. The unit has expanded both in size and computer and other technological capability. There were 186 staff in October 1998, including DEA, FBI, Customs, DOD, and contractor personnel. According to the SOD SAC, although DEA did not systematically compile results data on all of the Kingpin and SOD operations, cases supported, and leads disseminated, DEA has used both initiatives to successfully dismantle or disrupt drug trafficking organizations responsible for large amounts of illegal drugs brought into the United States. For example, according to DEA, the Kingpin Strategy contributed to dismantling the Cali cartel, which DEA considered the most powerful criminal organization that law enforcement has ever faced. Since 1995, all of the top Cali cartel leaders have been captured by or surrendered to the Colombian National Police (CNP), with the exception of one who was killed in a shoot-out with CNP at the time of his arrest. According to DEA, evidence gathered through years of investigations by DEA and other federal, state, and local law enforcement agencies and CNP led to the identification, indictment, arrest, conviction, and incarceration of the cartel leaders and some of their subordinates on drug charges in Colombia and the United States. According to DEA, a number of other successful operations have resulted from the Kingpin Strategy and SOD initiatives. These include the following: Operation Tiger Trap was a joint operation carried out by DEA and the Royal Thai Police in 1994. Tiger Trap produced U.S. indictments against members of the 20,000-man Shan United Army, a heroin TKO that operated the principal trafficking network in the Golden Triangle area of Thailand, Burma, and Laos for decades. Zorro I and Zorro II were multijurisdictional operations involving DEA, the FBI, Customs Service, and numerous state and local law enforcement agencies. Zorro I targeted Colombian drug traffickers based in Cali, Colombia, and their key subordinates operating in Los Angeles, New York, and Miami. 
Zorro I operated from 1992 to 1994 and included 10 DEA domestic field divisions. Zorro II targeted Mexican transportation groups used by the Colombians, as well as Colombian distribution cells located throughout the United States. It operated from 1995 to 1996 and included 14 DEA field divisions. The two operations relied heavily on the use of wiretaps. There were 117 wiretaps conducted, generating leads that identified Colombian distribution cells, Mexican traffickers' command and control networks, money laundering routes, cocaine cache sites, and other important information. According to DEA, these operations disrupted both Colombian and Mexican organizations. Specifically, Zorro I resulted in 209 arrests, 6.5 tons of cocaine seized, and $13.5 million seized. Zorro II resulted in 182 arrests, 5.7 tons of cocaine and 1,018 pounds of marijuana seized, $18.3 million seized, and $2.5 million in assets seized.

Operation Limelight was a multijurisdictional operation involving DEA, Customs, and numerous state and local law enforcement agencies. The operation targeted a Mexican drug transportation and distribution organization, which intelligence indicated was responsible for importing over 1.5 tons of cocaine monthly into the United States. The operation, which ran from 1996 to 1997, included the use of 37 wiretaps and other electronically generated intelligence, which helped identify groups in Houston and McAllen, TX; Los Angeles, San Diego, and San Francisco, CA; New York, NY; and Chicago, IL. The operation resulted in 48 arrests, 4 tons of cocaine seized, 10,846 pounds of marijuana seized, and $7.1 million seized.

Operation Reciprocity was a multijurisdictional investigation involving DEA, the FBI, Customs Service, and numerous state and local agencies. In this operation, DEA combined several independent, but related, investigations being simultaneously conducted by federal, state, and local agencies into one investigation and helped other offices start investigations of other subjects by providing leads. The operation focused on two independent group heads based in the Juarez, Mexico, area who were responsible for importing, transporting, and distributing more than 30 tons of cocaine from Mexico to Chicago and New York. The operation involved 35 wiretaps and other electronically generated intelligence information in 10 cities. The operation, which ran from 1996 to 1997, resulted in 53 arrests, 7.4 tons of cocaine seized, 2,800 pounds of marijuana seized, and $11.2 million seized.

According to DEA, information from some of the above SOD operations and other intelligence sources indicates that some major drug trafficking organizations are adapting to drug law enforcement efforts in large U.S. cities by shifting their operations to small towns and rural areas within the United States. DEA investigations and other information have provided evidence that these trafficking organizations have established command and control centers, warehouses, and drug transshipment points in many small communities. Consequently, according to DEA, these communities have become major distribution centers, as well as production centers in some cases, for illegal drugs, such as cocaine, heroin, methamphetamine, and marijuana. To respond to this threat, DEA established the RET initiative in fiscal year 1999, for which Congress provided $13 million and authorized 56 positions.
The RETs are designed to be proactive, highly mobile regional investigative teams whose mission is to (1) target drug organizations operating or establishing themselves in small towns and rural areas where there is a lack of sufficient drug law enforcement resources and (2) better develop and exploit drug intelligence developed by SOD and other sources. The RET initiative's objective is to identify and dismantle these drug organizations before they become entrenched in the communities. The RETs are similar to the METs, previously discussed, only in that they are mobile teams. The RET initiative differs significantly from the MET Program in that the RETs are to target only major drug violators operating at the regional, national, or international level, while the METs, upon request from local authorities, are to assist urban and rural communities in investigating and eliminating drug-related violence. DEA is implementing two RETs, which are to become operational in September 1999, one in Charlotte, NC, and one in Des Moines, IA. According to DEA, each RET will consist of 22 personnel, including 15 special agents. In addition, the RETs are to be provided with the investigative equipment and vehicles needed to ensure a high degree of mobility and capability to support the performance of even the most complex investigations.

According to DEA, international drug trafficking organizations have become the most dangerous organized crime forces in the world, and Colombian and Mexican organizations are the most threatening to the United States. DEA documents state that such international trafficking organizations are often headquartered in foreign countries where there is little or no potential for extradition to the United States. Because of the international nature of drug trafficking, DEA had 79 offices in 56 foreign countries as of December 1998. DEA opened 16 offices in 15 foreign countries, and closed 4 offices in 4 countries, from fiscal years 1990 through 1998. Each foreign DEA office is part of the U.S. Embassy's country team. As a country team member, DEA must operate in a foreign country in a manner consistent with the embassy's Mission Program Plan, which is a strategic plan required by the State Department for U.S. government activities within each country where there is a U.S. Embassy. Mission Program Plans discuss the embassies' human rights, democratic, economic, law enforcement, and other goals, strategies, and objectives, including efforts to combat drug trafficking. The plan for each country is to be reviewed and approved by DEA and other agencies represented on the country team. DEA cannot operate in foreign countries as it does in the United States because of various limitations. For example, DEA said its agents cannot make arrests or conduct electronic surveillances in any foreign country, nor can they be present during foreign police enforcement operations without a waiver from the Ambassador. DEA's primary goal in the countries where it operates is, through bilateral law enforcement cooperation, to disrupt and/or dismantle the leadership, command, control, and infrastructure of drug trafficking organizations that threaten the United States. To accomplish this goal, DEA engages in cooperative investigations and exchanges intelligence with its host nation counterparts.
In addition, DEA provides training, advice, and assistance to host nation law enforcement agencies to improve their effectiveness and make them self-sufficient in investigating major drug traffickers and combating the production, transportation, and distribution of illegal drugs. In addition to SOD operations, DEA has been a participant in two interagency investigative programs that were established during the 1990s to address drug trafficking in certain foreign countries where major trafficking organizations were based. They are the Linear and Linkage Approach Programs.

The Linear Approach Program was established in 1991 as a U.S. interagency forum to disrupt and dismantle the key organizations in Latin America responsible for producing and shipping illegal drugs to the United States. The program's foundation rests on three basic tenets: focus law enforcement and intelligence community resources on key targets, foster community collaboration, and enhance host nation capabilities. The Washington Linear Committee, which comprises 15 organizations and is cochaired by DEA, was designed to help better coordinate the counterdrug efforts of U.S. Embassy country teams, field-based regional intelligence centers, and U.S. Military Commands. The Linear Approach Program initially focused on Colombian and Mexican cocaine organizations. It has since been expanded to include other Latin American trafficking organizations that are primary recipients of significant amounts of drugs directly from the source countries of Bolivia, Colombia, and Peru. Some of these organizations may traffic in heroin and/or methamphetamine, in addition to cocaine. DEA reported that, for the period of 1994 through 1998, 21 main targets of the Linear Program and 22 associates had been arrested. All of the Cali cartel leaders who were arrested as part of the previously discussed Kingpin Strategy were also primary targets of the Linear Approach Program.

The Linkage Approach Program was established in 1992 and has been DEA's principal international strategy to address the heroin threat from Asia. The program is cochaired by DEA. It focuses law enforcement and intelligence community resources on efforts to disrupt and dismantle major Asian trafficking organizations producing heroin for distribution to the United States. Linkage Approach Program targets are to have a significant role in one of the Southeast or Southwest Asian heroin trafficking organizations and be subject to extradition to, and arrest and prosecution in, the United States. According to DEA, prior to this program, which was designed to make use of U.S. drug conspiracy laws, major Southeast and Southwest Asian traffickers exploited the lack of conspiracy laws in their own countries by insulating themselves from the actual drugs. The Linkage Program uses a multinational and multiagency approach to gather evidence for use in the U.S. judicial system, secure indictments in federal courts, and pursue the extradition of the targeted traffickers to the United States for prosecution. DEA reported that through 1998, 33 Linkage Approach Program targets had been arrested, 10 defendants had been extradited to the United States, and 1 defendant was incarcerated pending extradition.

In 1996, DEA initiated its Vetted Unit Program, under which foreign police participate in special host country investigative and intelligence collection units in selected foreign countries.
According to DEA officials, the foreign police participants are screened and then trained by DEA with the intention of enhancing their professionalism and creating an atmosphere of increased trust and confidence between participating foreign police and DEA agents working with the vetted units. DEA believes that these units will (1) enhance the safety of DEA agents in those participating countries and (2) increase the sharing of sensitive information between DEA and foreign police. All foreign police participating in the DEA program must be successfully “vetted,” that is, pass a computerized criminal background investigation, a security questionnaire and background interview, medical and psychological screening, polygraph testing, and urinalysis testing. They then attend a 4- to 5-week DEA investigative training course in Leesburg, VA. After they are screened and trained, the vetted foreign police are to receive ongoing training as well as random polygraph and urinalysis testing. The Vetted Unit Program initially began in Mexico in May 1996. After the Government of Mexico approved the concept, 21 Mexican police were screened and then trained by DEA. The vetting process was completed in November 1996, and the Mexico National Sensitive Investigative Unit (SIU) became operational in January 1997. DEA then expanded the program to other countries. For fiscal year 1997, Congress appropriated $20 million to support vetted units in Bolivia, Colombia, Mexico, and Peru. In March 1997, the DEA Administrator authorized immediate implementation of vetting in Bolivia, Colombia, and Peru. He also authorized programs in Brazil and Thailand for 1998. The $20 million appropriation for vetted units in fiscal year 1997 is now part of DEA’s budget base and has recurred each subsequent fiscal year. According to DEA, as of October 1998, vetted units, which were designed to engage in intelligence collection, investigations of drug traffickers, or both, were operational in Bolivia, Colombia, Mexico, and Peru. Program start-up costs in fiscal years 1997 and 1998 amounted to a total of $7.4 million for Bolivia, $5.3 million for Colombia, $4.6 million for Mexico, and $4.4 million for Peru. According to a DEA official, it took an average of about 6 months to complete the screening and training of the foreign police from the time they were identified to DEA as candidates selected by the host governments for the program, although the actual length of time varied because of factors such as limited availability of polygraphers. As shown in table 2.11, the number of vetted officers varied by country. Each vetted unit had one or two DEA agents assigned for assistance, liaison, and case support. The following summarizes the status and accomplishments of the existing vetted units under the program as of September 30, 1998, according to DEA. Bolivia had four SIUs with vetted personnel. Three SIUs each had 25 Bolivian National Police, and a fourth unit had 97 personnel. The SIUs were located in La Paz, Santa Cruz, and Cochabamba. Two of the SIUs collect intelligence, conduct investigations, and arrest targeted drug traffickers, while the other two SIUs concentrate primarily on collecting intelligence. DEA reported that the Bolivian SIUs’ efforts through fiscal year 1998 resulted in 1,206 arrests and seizures of 3,201 kilograms of cocaine hydrochloride (HCL), 5,392 kilograms of cocaine base, and $15.8 million in assets. Colombia had 4 vetted units consisting of 112 members. 
The Major Investigations Unit in Bogota had 39 personnel, including both investigators and prosecutors. This unit focused on drug trafficking in the major cities of Colombia, such as Cali, Medellin, and Barranquilla. The Financial Investigation Unit had 14 investigators who focused on money laundering in financial institutions in Colombia's major cities. The Intelligence Group consisted of 39 personnel headquartered in Bogota and operating in the major drug-producing regions. This unit collected intelligence to support investigations of Colombian drug trafficking organizations by other CNP units. The fourth vetted unit, consisting of 20 members, monitored the diversion of precursor substances from legitimate manufacturers for the production of illegal drugs. DEA reported that the Colombian vetted units' efforts through fiscal year 1998 resulted in 63 arrests and seizures of 6,398 kilograms of cocaine HCL and cocaine base, 6 kilograms of heroin, and $250,000 in U.S. currency.

Mexico had 3 vetted units made up of 232 vetted and trained personnel. The Mexico National SIU, operating out of Mexico City, had 14 Mexican Federal Narcotics Investigators assigned to collect intelligence on Mexican drug traffickers. The Border Task Forces had 106 Mexican Federal Narcotics Investigators. The task forces operated out of regional headquarters in Tijuana, Ciudad Juarez, and Monterrey—all along the U.S.-Mexico border—and Guadalajara. The task forces had a mission similar to that of the SIU, but the task force investigators were also responsible for executing warrants and making arrests. The narcotics section of the Organized Crime Unit was made up of 112 Mexican federal attorney-investigators and narcotics investigators. This unit's mission was to use information from court-ordered electronic intelligence collection to investigate high-level drug trafficking groups, as well as drug-related money laundering groups, throughout Mexico. The unit's headquarters was in Mexico City, but the assigned personnel were often located in other cities. DEA did not report the number of arrests or seizures for the Mexican vetted units, but noted there had been arrests made in three major organizations, including one of the largest drug cartels in Mexico.

Peru had 2 vetted units, with a total of 135 personnel. One unit, an intelligence group, consisted of 52 vetted personnel and specialized in collecting intelligence and targeting drug traffickers to support the second unit, the investigation group with 83 vetted police. Both units were headquartered in Lima and operated throughout the cocaine production regions of Peru. DEA reported that the Peruvian vetted units' efforts through fiscal year 1998 resulted in 199 arrests and seizures of 819 kilograms of cocaine HCL, 2,297 kilograms of cocaine base, 4,350 gallons of precursor chemicals, and numerous weapons and ammunition.

In commenting on a draft of our report, DEA officials informed us that as of April 1999, 2 vetted units with 25 and 75 vetted personnel, respectively, were fully operational in Thailand; 1 vetted unit with a total of 16 vetted personnel was operational in Brazil; and vetted antinarcotics police were expected to be operational in Pakistan in early fiscal year 2000. The officials also noted that assessments were scheduled for Ecuador and Nigeria in May and June, respectively, to examine the future suitability of vetted units in those countries.
DEA expanded its enforcement strategy in the 1990s to focus its operations on what it refers to as the seamless continuum of drug trafficking. It placed emphasis on investigating gangs, drug dealers, and drug-related violence in local communities while continuing to target higher level drug traffickers involved in major national and international drug trafficking organizations. DEA's programs and initiatives discussed in this chapter—for example, its state and local task forces, its MET Program, SOD's initiatives, and its foreign operations—are consistent with DEA's mission and responsibilities to enforce the nation's drug laws and bring drug traffickers to justice, as described in chapter 1.

In carrying out its strategy, DEA's domestic enforcement efforts placed more emphasis on, and devoted more resources to, assisting and working with local law enforcement agencies than in the past. Consequently, funds and staff devoted to DEA's State and Local Task Force Program increased in the 1990s. Also, although the resources involved were not substantial in comparison with DEA's total dollar and staff resources, DEA began funding and dedicating agents to the MET Program during the 1990s and continued to do so. These programs targeted drug traffickers operating primarily at the local and regional levels. DEA provided examples of what it considered to be successful program operations at these levels and reported various program results, including federal and state arrests and convictions and seizures of drugs and assets.

To improve the effectiveness of its domestic and international efforts directed at national and international drug trafficking organizations in the 1990s, DEA established and invested increased resources in SOD to continue and enhance the investigative approach initiated under its former Kingpin Strategy. SOD, like the Kingpin Strategy, emphasizes targeting the command and control communications of major traffickers. Consequently, the number of DEA electronic surveillances rose significantly in the 1990s. DEA documented the results of some Kingpin and SOD operations that it considered to be successful in disrupting and dismantling major national and international trafficking organizations. However, DEA did not compile results data on all Kingpin and SOD operations, cases they supported, or leads they disseminated.

DEA also made changes to improve its foreign efforts directed at international drug trafficking organizations. In this regard, it has participated in two major interagency programs established in the 1990s to target major organizations in Latin America and Asia. The programs have led to the arrests of some high-level drug traffickers. In addition, the specially trained vetted units of foreign police initiated in recent years by DEA may help increase the sharing of information and the trust level between DEA and foreign police participating in those units. This, in turn, may help DEA and its foreign counterparts in targeting major traffickers and disrupting and dismantling trafficking organizations based in the participating foreign countries, as indicated by the initial results reported by DEA.

In its written comments on a draft of this report, DEA stated that, overall, the report provides a detailed and factual background of DEA strategies and special operations. DEA also provided a number of technical comments and clarifications, which we incorporated in this chapter and other sections of this report.
DEA's strategic goals and objectives, along with its enhanced programs and initiatives in the 1990s discussed in chapter 2, are consistent with the strategic goals of ONDCP's National Drug Control Strategy. Both the National Strategy and DEA seek to reduce illegal drug supply and drug-related crime and violence by disrupting or dismantling drug trafficking organizations. The National Strategy contains mid- and long-term measurable performance targets for 2002 and 2007 that identify the extent to which the National Strategy seeks to disrupt and dismantle drug trafficking organizations. However, DEA has not yet established comparable measurable performance targets for its operations. Throughout this chapter, we use footnotes to explain various planning and performance measurement terms as defined by OMB, ONDCP, and DOJ. We also include a glossary at the end of this report, which provides an alphabetical listing of the various planning and performance measurement terms used in this report and their definitions.

DEA's strategic goals and objectives and its enhanced programs and initiatives in the 1990s for carrying out its mission are consistent with the National Strategy's strategic goals and objectives defining a 10-year commitment to reduce drug abuse. DEA's mission, as described in chapter 1, is an important element of the National Strategy, and DEA, through the implementation of its programs and initiatives as discussed in chapter 2, is a major participant in the National Strategy. As discussed below, we reviewed the National Strategy's strategic goals and objectives and compared them with DEA's strategic goals and objectives and its programs for consistency.

ONDCP has produced National Strategies annually since 1989. Since 1996, the National Strategy has included five strategic goals (listed in ch. 1) and related strategic objectives. These goals and objectives are the basis for a long-term national antidrug effort aimed at reducing the supply of and demand for illicit drugs and the consequences of drug abuse and trafficking. The goals define the major directives of the strategy. The objectives, which are more narrowly focused, stipulate the specific ways in which goals are to be achieved. The 1998 strategy provided a 10-year plan to reduce illegal drug use and availability by 50 percent by the year 2007. The 1999 National Strategy continued these goals but eliminated one objective, reducing the total number of objectives to 31. The National Strategy is intended to guide the approximately 50 federal agencies with drug control responsibilities.

DEA has significant responsibilities for helping to achieve the following three National Strategy goals:

Strategy goal 2: Increase the safety of American citizens by substantially reducing drug-related crime and violence.

Strategy goal 4: Shield America's air, land, and sea frontiers from the drug threat.

Strategy goal 5: Break foreign and domestic drug sources of supply.

For these 3 strategy goals, the National Strategy has 15 supporting objectives, at least 10 of which relate to DEA. Table 3.1 identifies the strategy goals and objectives for which DEA has responsibilities. Recently, as part of its reauthorization legislation, ONDCP became responsible for monitoring the consistency of the drug-related goals and objectives of drug control agencies with the National Strategy to ensure that their goals and budgets support and are fully consistent with the Strategy.
In its National Drug Control Budget Summary for 1999, ONDCP reported that DEA has various programs and initiatives that support strategy goals 1, 2, 4, and 5. DEA's most recent planning document is its performance plan for fiscal year 2000, which it issued in February 1999 in response to the Government Performance and Results Act of 1993 (the Results Act). That plan contains information on DEA's vision, mission, strategic goals, strategic objectives, and performance indicators. DEA listed three strategic goals and nine strategic objectives for carrying out its mission:

DEA strategic goal 1—disrupt/dismantle the leadership, command, control, and infrastructure of drug syndicates, gangs, and traffickers of illicit drugs;

DEA strategic goal 2—reduce the impact of crime and violence that is the result of drug trafficking activity by providing federal investigative resources to assist local communities; and

DEA strategic goal 3—facilitate drug law enforcement efforts directed against major drug trafficking organizations by cooperating and coordinating with federal, state, local, and foreign law enforcement and intelligence counterparts.

Along with its strategic goals, DEA listed the following nine strategic objectives:

DEA strategic objective 1—attack the command and control of international and domestic drug trafficking organizations through the arrest, prosecution, conviction, and incarceration of their criminal leaders and surrogates;

DEA strategic objective 2—concentrate enforcement efforts along the Southwest Border to disrupt, dismantle, and immobilize organized criminal groups operating from Mexico;

DEA strategic objective 3—direct enforcement efforts at the escalating threat posed by heroin;

DEA strategic objective 4—address the dual threats presented by methamphetamine and a resurgence in marijuana trafficking;

DEA strategic objective 5—assist local law enforcement by deploying METs into communities where drug trafficking and related violent crime are rampant;

DEA strategic objective 6—prevent the diversion of controlled substances and control the distribution of chemicals used to manufacture illicit drugs;

DEA strategic objective 7—enhance intelligence programs to facilitate information sharing and develop new methods to structure and define drug trafficking organizations;

DEA strategic objective 8—support interdiction efforts to target drug transshipments destined for the United States; and

DEA strategic objective 9—seize and forfeit assets and proceeds derived from drug trafficking.

DEA did not align each of its objectives with any particular goal. Because DEA is the nation's lead drug enforcement agency, its strategic goals and objectives and its programs should be consistent with the National Strategy. Table 3.2 shows DEA's strategic goals compared to National Strategy goals for drug supply reduction. DEA's first strategic goal, aimed at dismantling and disrupting drug trafficking organizations, is consistent with National Strategy goal 4, which calls for shielding America's air, land, and sea frontiers from the drug threat, as well as with goal 5, which calls for breaking foreign and domestic sources of drug supply. DEA's goal for dismantling and disrupting trafficking organizations applies to all drug trafficking organizations regardless of where they operate—in the United States, in drug transshipment areas, at U.S. border areas, and in foreign countries.
Similarly, DEA's strategic goal 2, which calls for providing federal investigative resources to local communities for reducing drug-related crime and violence, is consistent with National Strategy goal 2, which also calls for reducing drug-related crime and violence. DEA's strategic goal 3, which calls for DEA to cooperate and coordinate with federal, state, local, and foreign law enforcement and intelligence counterparts, is consistent with National Strategy goals 2, 4, and 5. By coordinating and cooperating with other law enforcement and intelligence organizations, DEA supports all three of the National Strategy's supply reduction goals.

As with its goals, DEA's strategic objectives are also consistent with the objectives of the National Strategy. For example, as can be seen in table 3.3, various DEA strategic objectives for dismantling and disrupting domestic and international drug trafficking organizations, providing assistance to local communities to reduce drug-related violence, and supporting drug interdiction efforts align with National Strategy objectives. In addition, as with its strategic goals and objectives, DEA's programs and initiatives in the 1990s, as discussed in chapter 2, are also consistent with the goals of the National Strategy.

During the 1990s, DEA has enhanced or changed important aspects of its operations, that is, its strategies, programs, initiatives, and approaches. DEA gave higher priority than in the past to, and increased resources for, working with and assisting state and local law enforcement agencies through its State and Local Task Force Program, and it started the MET Program to help combat drug-related violent crime in local communities. DEA established the Kingpin Strategy, which evolved into SOD, placing greater emphasis on intercepting communications between top-level drug traffickers and their subordinates (i.e., attacking the "command and control" communications of major drug trafficking organizations) to dismantle their entire trafficking operations. DEA started participating in two interagency programs—Linear Approach and Linkage Approach—to target and investigate major drug trafficking organizations in Latin America and Asia. DEA helped establish, train, and fund special foreign police units to combat drug trafficking in certain key foreign countries, primarily in Latin America. These and other drug law enforcement programs and initiatives discussed in detail in chapter 2 are consistent with National Strategy goals 2, 4, and 5 previously discussed in this chapter and described in table 3.1. For example, DEA's MET Program, started in 1995, is consistent with National Strategy goal 2, which calls for increasing the safety of American citizens by substantially reducing drug-related crime and violence.

The 1999 National Strategy established performance targets calling for specific increases in the percentage of drug trafficking organizations disrupted and dismantled. These targets are measurable and can be used to assess the collective performance of drug control agencies responsible for achieving them. However, although DEA is the lead drug enforcement agency, it has not established similar measurable performance targets for its own operations.
To measure the effectiveness and performance of the National Strategy, ONDCP established 5- and 10-year performance targets and performance measures. These performance targets and measures are intended, in part, to enable policymakers, program managers, and the public to determine which efforts are contributing to the strategic goals and objectives of the National Strategy. To track and measure progress in achieving the strategic goals and strategic objectives of the National Strategy, ONDCP issued its Performance Measures of Effectiveness (PME) system in February 1998. This system is a 10-year plan that identifies performance targets and related performance measures as the means for assessing the progress of the National Strategy in achieving its strategic goals and objectives. The PME system contains 97 performance targets. Although originally undertaken as a policy decision to bring more accountability to drug policy, the PME system is now grounded in legislation. The Office of National Drug Control Policy Reauthorization Act of 1998 requires ONDCP to submit an annual report to Congress on the PME system. ONDCP issued its first annual status report in February 1999.

Beginning in 1996, interagency working groups involving federal agencies, including DEA, along with outside experts, developed the PME performance targets through a consensus process. The performance targets were incorporated into the PME plan issued in February 1998. After the initial PME plan was issued, interagency working groups, including those involving DEA, continued developing, refining, and implementing the PME system during 1998. The working groups, among other things, focused on developing specific action plans identifying the responsibilities of each agency in working toward the PME performance targets and identifying annual targets that correspond to the achievement of the 5- and 10-year performance targets. For each performance target, the PME system identifies a "reporting agency" (or "agencies" when there is shared responsibility) and "supporting agencies." A reporting agency (or agencies) is required to report to ONDCP on progress in achieving the performance target. However, the reporting agency is not necessarily the only agency responsible for achieving the target. Supporting agencies are to assist with data collection and assessment or have programs that contribute to achieving the target.

The initial 1998 PME system document identified performance targets relating to disrupting and dismantling drug trafficking organizations and arresting drug traffickers. These performance targets called for specific percentage increases in the number of domestic and international drug trafficking organizations disrupted or dismantled and the number of drug traffickers arrested by 2002 and 2007. DEA was designated as the sole reporting agency for performance targets aimed at decreasing the capabilities of domestic and international drug trafficking organizations and traffickers. DEA shared reporting-agency responsibilities with HIDTAs for the performance target aimed at drug trafficking organizations identified in HIDTA threat assessments. As a result of the PME implementation process in 1998, changes were made to performance targets for drug trafficking organizations and drug traffickers. These changes were reported in the 1999 PME report. The performance target for domestic drug traffickers was deleted.
The target for international drug traffickers was combined with the target for international drug trafficking organizations to focus on one manageable target. In addition, DEA was deleted as a reporting agency for the performance target aimed at drug trafficking organizations identified in HIDTA threat assessments. Tables 3.4, 3.5, and 3.6 show the performance targets and related performance measures for disrupting and dismantling drug trafficking organizations, along with the current status of achieving the targets as reported by ONDCP in its 1999 PME report. As can be seen in the tables, the National Strategy performance targets and measures are quantifiable and outcome-oriented and can be readily used to assess performance following collection of proposed baseline data on lists of drug trafficking organizations. DEA, with assistance from supporting agencies such as the FBI, is to report progress by the drug law enforcement community in dismantling or disrupting a percentage of identified domestic and international drug trafficking organizations.

However, ONDCP, in reporting on the current status of the performance targets for which DEA is the reporting agency, noted that data on drug trafficking organizations needed to assess performance had not been identified, nor had annual performance targets been established. Further, according to ONDCP and DEA, neither the domestic nor the international designated target lists referred to in tables 3.4 and 3.5 have been developed. According to ONDCP officials, DEA and various supporting agencies are working toward developing lists of domestic and foreign drug trafficking organizations for use in pursuing the performance targets. ONDCP officials said that the time frames for reporting on performance targets for dismantling and disrupting drug trafficking organizations and their leaders are (1) 1999 for defining organizations and developing trafficker lists, (2) 2000 for collecting data, and (3) 2001 for reporting on data and gauging performance.

According to ONDCP's 1999 report, its PME system tracks the performance of the numerous programs that support each strategy goal and objective. The accomplishment of National Strategy goals and objectives generally requires the contributions of many agency programs. The PME system does not track an individual agency's performance, nor is it designed to do so. According to ONDCP, agencies such as DEA are required to track their own performance through their Results Act plans, and these plans should be consistent with the National Strategy and the PME system.

Over the years, DEA has used arrest and seizure data (drugs and assets), along with examples of significant enforcement accomplishments, such as descriptions of successful operations, to demonstrate its effectiveness in carrying out its enforcement programs and initiatives. However, these data are not useful indicators for reporting on results because arrest and seizure data relate to outputs (activities) and not to outcomes (results). These arrest and seizure data do not present a picture of overall performance or of DEA's level of success in achieving its goals. Further, the use of arrest data as a performance indicator can be misleading without information on the significance of the arrests and the extent to which they lead to prosecutions and convictions.
In addition, using arrest data as a performance target can lead to undesirable consequences when law enforcement agencies place undue emphasis on increasing the numbers of arrests at the expense of developing quality investigations. More recently, with passage of the Results Act, DEA has been attempting to go beyond reporting outputs to reporting outcomes. In response to the Results Act, DEA prepared annual performance plans for fiscal years 1999 and 2000 that contain information on its strategic goals and objectives and its performance indicators.

In its fiscal year 1999 performance plan, issued in January 1998, DEA described its strategic goals, strategies for achieving those goals, annual goals, and performance indicators. DEA associated these goals, strategies, and performance indicators with its various programs and initiatives. For example, the plan included the following statements:

"… disrupt/dismantle the leadership, command, control, and infrastructure of drug syndicates, gangs, and traffickers of licit and illicit drugs that threaten Americans and American interests."

"… implement drug law enforcement strategies that target and attack the leadership and infrastructure of major drug syndicates, gangs, and traffickers of licit and illicit drugs that threaten America."

"DEA will continue its investigative efforts, including the application of forfeiture laws, especially along the Southwest border. This will produce an increase in the number of arrests, removals, and seizures. The primary outcome will be a reduction in the trafficking capability of drug organizations, particularly those associated with the Mexican Federation, that use the southwest border in transshipment."

To assess the extent to which it was accomplishing its strategic and annual goals to reduce trafficking capability, DEA's plan listed performance indicators that were not results oriented. DEA planned to measure performance using data on the total number of arrests and the total number of major criminal enterprises and other drug trafficking organizations disrupted or dismantled. However, DEA did not identify performance targets for its goals, such as the proportion of identified drug trafficking organizations to be disrupted and dismantled, against which its performance could be assessed. DEA's fiscal year 1999 plan had no annual, mid-, or long-range performance targets for disrupting and dismantling drug trafficking organizations. DEA noted in its performance plan for fiscal year 1999 that data on the number of drug trafficking organizations had not been previously collected and reported and would be available by March 1, 1998. However, it did not report these data in its subsequent performance plan for fiscal year 2000. DEA also pointed out that although several of its performance indicators were in the developmental stage, their establishment would help to provide the framework for future evaluations of DEA's efforts.

DEA organized its fiscal year 2000 performance plan—issued in February 1999—differently from its 1999 plan to align it with its three major budget activities: enforcement, investigative support, and program direction. Specifically, DEA organized the plan around what it identified as its three core business systems: (1) enforcement of federal laws and investigations, (2) investigative support, and (3) program direction. Along with information on its three core business systems and 15 subsystems, the plan, as previously described, listed DEA's strategic goals and objectives.
However, unlike its 1999 performance plan, the fiscal year 2000 plan did not have clearly identifiable annual goals. The plan stated:

"Through effective enforcement effort, DEA will disrupt/dismantle the command & control, and infrastructure of drug syndicates, gangs, and traffickers of licit and illicit drugs that threaten Americans and American interests, including providing enforcement assistance to American Communities to fight drug-related crime and violence."

Related to its core business system for enforcement, DEA's fiscal year 2000 performance plan listed a strategic goal and objectives for disrupting drug trafficking organizations. DEA's description of its core business system and its strategic goal and objective are similar. However, as with its fiscal year 1999 plan, DEA's fiscal year 2000 plan does not include annual, mid-, or long-range measurable performance targets for disrupting or dismantling drug trafficking organizations.

Although DEA does not have a performance target for dismantling international drug trafficking organizations, it does have a performance indicator that may lead to a performance target that is consistent with the target in the National Strategy. DEA's fiscal year 2000 performance plan contains a performance indicator specifying that DEA plans to use data on the number of targeted organizations disrupted or dismantled as a result of DEA involvement in foreign investigations compared to the total number of targeted organizations as a basis for measuring performance. The plan notes, however, that DEA is currently not collecting data for this performance indicator but expects to do so during fiscal year 1999.

For domestic drug trafficking organizations, DEA's plan does not include a performance indicator that is quantifiable and results oriented similar to the one it specified for international drug organizations. DEA has no performance indicator specifying that it will measure performance on the basis of the number of targeted domestic organizations disrupted and dismantled compared to the total number of targeted organizations. Further, DEA's fiscal year 2000 performance plan does not indicate that DEA plans to collect data on domestic drug trafficking organizations for development of a performance target that is consistent with the target in the National Strategy. It is unclear whether DEA plans to develop a performance target for its program aimed at disrupting and dismantling domestic drug trafficking organizations that would be consistent with the performance target and the national effort called for in the National Strategy.

DEA's fiscal year 2000 performance plan indicates that DEA will be reporting on prior year arrests resulting in prosecutions and convictions as a performance indicator for measuring its enforcement efforts. Under DOJ policy, to avoid perceptions of "bounty hunting," DEA and other DOJ component organizations cannot specify performance targets for arrests. However, DOJ's policy would not preclude DEA from developing a performance target and performance indicator for domestic drug trafficking organizations consistent with those in the National Strategy. The National Strategy performance targets do not involve projecting increased numbers of arrests; rather, they call for increasing the percentage of targeted drug trafficking organizations dismantled or disrupted.
In addition to the lack of results-oriented performance indicators and performance targets for its programs aimed at domestic drug trafficking organizations, DEA's plan lacks performance targets and related performance indicators for other mission-critical programs. For example, DEA's core business system for enforcement and one of its strategic goals call for assistance to local communities to reduce drug-related crime and violence. However, DEA has not established a performance target and performance indicator that could be used to measure the results of its assistance to local communities. In this regard, DEA has a strategic objective calling for assistance to local law enforcement by deploying METs, discussed in chapter 2, into communities where drug trafficking and related crime are rampant. However, DEA has not identified a performance target and performance indicator to measure the results of its MET Program even though, as discussed in chapter 2, resources dedicated to METs and other forms of assistance to local law enforcement have continued to grow in the 1990s. Thus, it is unclear how DEA will measure the results of its strategic objective calling for MET deployments.

In the program accomplishment and highlight section of its performance plan for fiscal year 2000, DEA states that "[t]he effect of METs in reducing violent crime has been clearly established in 1998." The plan further points out that a comparison of violent crime statistics before and after MET deployments indicated reductions in violent crime in areas where MET deployments occurred. Using this type of results-oriented data, DEA should be able to specify a performance indicator that, when tied to a measurable performance target, could be used to assess the results of the MET Program in terms of actual versus expected performance.

In August 1998, DEA's Chief for Executive Policy and Strategic Planning told us that DEA had not yet identified the performance goals and indicators it would ultimately use. She told us that at the direction of the Administrator, DEA was planning to bring its field representatives together with headquarters officials to obtain their views and input on DEA's goals, strategies, and performance indicators. In April 1999, she told us that the meeting with field representatives, which was initially planned for the fall of 1998 but was delayed pending hiring of a contractor, was expected to be held by the summer of 1999. However, with the recent resignation of DEA's Administrator, these plans were placed on hold and not addressed in DEA's comments on a draft of this report. In addition, in April 1999, DEA's Chief for Executive Policy and Strategic Planning told us that DEA would have to work with DOJ in developing performance goals and indicators. In this regard, she said that DEA would be following the direction provided by DOJ in its departmentwide drug strategy. She also pointed out that ONDCP had not yet established a baseline (agreed-upon target list) for its National Strategy performance targets aimed at disrupting and dismantling drug trafficking organizations.
In commenting on a draft of this report in June 1999, DEA pointed out that it (1) has developed preliminary performance targets that were included in DEA's fiscal year 2001 budget submission to DOJ; (2) had established a working group consisting of representatives from its operations, strategic planning and executive policy, and resource management staffs to further refine its performance targets; and (3) is working with other DOJ components to develop performance targets and measurements that will be consistent with the targets in the National Strategy.

Measuring and evaluating the impact of drug law enforcement efforts is difficult for several reasons. First, antidrug efforts are often conducted by many agencies and are mutually supportive. It is difficult to isolate the contributions of a single agency or program, such as DEA's domestic enforcement program aimed at disrupting and dismantling major drug traffickers, from the activities of other law enforcement agencies. Other factors that DEA has little control over, such as drug demand reduction efforts, may also affect drug trafficking operations. Second, the clandestine nature of drug production, trafficking, and use limits the quality and quantity of data that can be collected to measure program performance. History has shown that drug trafficking organizations continually change their methods, patterns, and operations as law enforcement concentrates its resources and efforts on a specific region or method. Drug law enforcement agencies must continuously deal with unknown and imprecise data, such as the number of drug trafficking organizations and the amount of illegal drugs being trafficked. Third, some of the data that are currently collected are not very useful in assessing the performance of individual programs and agency efforts. As previously mentioned, data collected on arrests, drug seizures, and assets forfeited generally measure enforcement outputs but not outcomes. Further, data collected on drug availability and consumption are generally not designed to measure the performance of a single program or agency, and such data are influenced by other factors in addition to enforcement efforts.

DEA's strategic goals and objectives as well as its programs and initiatives are consistent with the National Drug Control Strategy. However, DEA has not developed performance targets for its programs and initiatives aimed at disrupting or dismantling drug trafficking organizations and arresting their leaders. We recognize the complexity and difficulty of measuring outcomes and impact for drug law enforcement agencies operating in a clandestine drug trafficking environment. Nevertheless, without measurable performance targets and related performance indicators for its mission-critical programs, it is difficult for program managers, policymakers, and others to quantitatively assess DEA's overall effectiveness and the extent to which DEA's programs are contributing to its strategic goals and objectives and those of the National Strategy. ONDCP has set specific measurable performance targets in the National Strategy for achieving strategic goals that it shares with DEA. DEA has worked with ONDCP and other federal drug control agencies to develop performance targets for the National Strategy and for measuring the progress of federal efforts toward those targets.
However, although DEA is the lead federal drug enforcement agency and the reporting agency for several National Strategy performance targets, it has not established similar measurable performance targets for its own operations. In this regard, DEA did not include such targets in either its fiscal year 1999 or its fiscal year 2000 annual performance plan, although, as discussed below, it stated in its comments on a draft of this report that it has developed preliminary targets for inclusion in its fiscal year 2001 performance plan. Measurable DEA performance targets, once finalized, coupled with continued refinement of the National Strategy performance targets on the basis of DEA input and leadership, along with DOJ guidance, should bring DEA and ONDCP closer together in pursuing their shared goals and objectives for disrupting and dismantling drug trafficking organizations. Such performance targets also should provide DEA with a better basis for measuring its own progress in achieving its mission and for making decisions regarding its resource needs and priorities, as discussed in the next chapter.

We recommend that the Attorney General direct the DEA Administrator to work closely with DOJ and ONDCP to develop measurable DEA performance targets for disrupting and dismantling drug trafficking organizations consistent with the performance targets in the National Drug Control Strategy.

In its written comments on a draft of this report, although not directly agreeing with our recommendation, DEA agreed with our principal finding regarding measurable performance targets. However, it disagreed with our draft conclusion relating to the finding, pointed out actions it was taking relating to our recommendation, and requested guidance on bringing closure to the recommendation. DEA agreed with our principal finding that it had not included measurable performance targets for disrupting or dismantling drug trafficking organizations in its fiscal years 1999 and 2000 performance plans. However, it disagreed with our draft conclusion that "In the absence of such targets, little can be said about DEA's effectiveness in achieving its strategic goals." DEA indicated that this statement and supporting information in this chapter gave the impression that DEA had not attempted to develop performance targets. DEA said that it has developed "preliminary performance targets" that have been included in its fiscal year 2001 budget submission to DOJ and that are to be refined for inclusion in subsequent budgets. To further refine its performance targets, DEA said that it had established a working group consisting of representatives from its operations, strategic planning and executive policy, and resource management staffs. DEA also noted that it is working with other DOJ components to develop performance targets and measurements that will be consistent with the targets in the National Drug Control Strategy. To recognize these actions, we added them to the pertinent section of this chapter as an update to information previously provided by DEA. We also modified our draft conclusion that little can be said about DEA's effectiveness without performance targets to clarify our intent that it is difficult to quantitatively assess DEA's overall effectiveness without such targets. DEA's stated actions are consistent with the intent of our recommendation.
However, because DEA performance targets are preliminary and under review within the executive branch, they are subject to change until February 2000, when DEA issues its annual budget submission and performance plan, as part of DOJ's submission, to Congress. Further, DEA indicated that it cannot finalize its performance targets and measures until a designated targeted list of international drug trafficking organizations, as called for in the National Strategy, is completed. Therefore, we are retaining our recommendation until DEA's preliminary performance targets are finalized for inclusion in its annual performance plan and can be compared for consistency with those in the National Strategy. DEA and ONDCP also provided technical comments, which we incorporated in this chapter where appropriate.

In order to carry out its mission and operations during the 1990s, including the programs and initiatives discussed in chapter 2 and the strategies discussed in chapter 3, DEA received funds to staff its operations through several sources. These included its annual salaries and expenses appropriation; DOJ's Violent Crime Reduction Program (VCRP); and other reimbursable programs, such as OCDETF. This chapter focuses on the process used to determine and allocate additional DEA positions provided through its salaries and expenses budget. Specifically, it discusses the process used in fiscal year 1998, which was, according to DEA and DOJ officials, generally typical of the approach DEA has used in other years.

The process used to determine the need for and to allocate additional DEA staff is linked to the federal budget formulation and execution process and reflects federal laws and budget guidelines promulgated by OMB. In formulating its fiscal year 1998 budget submission to DOJ, DEA considered field input, changes in drug abuse and drug trafficking patterns, and the Administrator's priorities in preparing its staffing enhancement estimates. DEA's submission to DOJ estimated the need for 989 new total positions, including 399 special agent positions. As a result of reviews by DOJ, OMB, and ONDCP and consideration of the resources provided in DEA's fiscal year 1997 appropriation, the President's fiscal year 1998 budget, which was submitted to Congress in February 1997, requested a total of 345 new positions for DEA, including 168 special agent positions. Congress provided 531 additional positions, of which 240 were special agent positions, with guidance as to how the positions were to be allocated. DEA senior management then determined the allocation of additional staff, considering congressional guidance and such other factors as prior field office requests.

The process used to determine the staffing resources necessary to carry out DEA's mission is generally typical of the federal budget processes and procedures that federal agencies are expected to follow. These processes and procedures are established in federal law and budget guidelines promulgated by OMB. Each legislative session, the President is required by law to submit a budget to Congress. The Budget and Accounting Act of 1921, as amended, provides the legal basis for the President's budget, prescribes much of its content, and defines the roles of the President and the agencies in the process. During budget formulation, the President establishes general budget and fiscal policy guidelines.
Policy guidance is given to agencies for the upcoming budget year and beyond to provide initial guidelines for preparation of agency budget requests. OMB Circular A-11 provides instructions on the preparation of agency submissions required for OMB and presidential review of budget estimates and for formulation of the president’s budget. The budget formulation process begins at the lowest organizational levels of a federal agency and moves to the higher levels. A consolidated agencywide budget is prepared for submission to OMB. This approach is typical of federal agencies, although some have elaborate planning processes that allow for objectives established at the top to guide budget preparation. OMB reviews agency requests according to a process that includes several stages—(1) staff review, (2) director’s review, (3) passback, (4) appeals, and (5) final decisions. The final budget is prepared and printed by OMB for submission to Congress no later than the first Monday in February of each year, as required by law. According to DEA and DOJ officials, DEA’s fiscal year 1998 staffing needs determination process began in the summer of 1995 and was typical of the process DEA has used in other years. Prior to the commencement of the official budget formulation process, DEA domestic and foreign field offices provided estimates of their staffing needs to DEA headquarters program staff. Program and budget staff reviewed and considered these estimates in the development of DEA’s budget submission with staffing estimates, which were sent to DOJ in June 1996. In accordance with the federal budget process, DOJ and OMB reviewed DEA’s budget submission and staffing estimates, which resulted in some changes in the estimates. ONDCP reviewed DOJ’s budget submission to OMB as part of the national drug budget certification process, which is distinct from, but occurs simultaneously with, the budget formulation process and may also affect DEA’s staffing estimates. Figure 4.1 depicts DEA’s fiscal year 1998 staffing determination process as a timeline running from the field offices’ submission of staffing estimates in the fall of 1995, through DEA, DOJ, and OMB reviews of the budget submission and staffing estimates and the passback and appeal process, to transmittal of the President’s budget to Congress in February 1997.
DEA’s fiscal year 1998 staffing process began in the summer of 1995. Each DEA domestic field division submitted a field management plan (FMP), and foreign offices followed a less structured and more informal staffing request process. In an August 1995 memorandum to its domestic field divisions, DEA headquarters provided direction and guidelines for preparation of the fiscal year 1996-1997 FMPs. DEA requested detailed, specific, and realistic enhancements for fiscal year 1998 for use in the formulation of DEA’s fiscal year 1998 budget/staffing submission. According to DEA officials, an FMP is supposed to be based on the Administrator’s vision statement, which is provided to the field divisions; the local SAC’s vision statement, which has previously been reviewed and approved by DEA headquarters; and the drug threat that the division expects to confront. The 1995 memorandum directed each field division to indicate the resources it would need. Through the FMPs, which were due in October 1995, DEA’s domestic field divisions requested a total of 591 positions, including 369 special agent positions. According to DEA officials, recommendations and requests for DEA foreign office staffing enhancements and new foreign offices for fiscal year 1998 came from a variety of sources, including DEA country attachés (CA) and the foreign country through the U.S. Ambassador. Each of the four foreign sections (Central America and the Caribbean, Europe and the Middle East, Far East, and South America) within the Office of International Operations at DEA headquarters was tasked with identifying the issues, including staffing needs, within specific countries. In March 1996, according to a DEA official, the International Operations staff, including the Chief, Deputy Chief, and section heads, met to discuss recommendations from the four sections. The official said that to assess and justify staffing requests for their respective regions, DEA foreign section staff used regional and individual DEA country plans, as well as foreign situation and quarterly trends in trafficking reports, which provided context and background. Foreign operational needs were discussed in terms of DEA’s goals and objectives and prioritized. DEA officials told us that International Operations communicated the results of this meeting (as a discussion document) to DEA’s budget section. In accordance with the federal budget formulation process, DEA budget staff prepared the agency’s spring budget submission to DOJ, including staffing estimates. After review and approval by its executive staff and the Administrator, DEA sent DOJ its budget submission, which included 6 initiatives with identified additional staffing needs of 989 total positions; 399 were special agent positions, and 590 were support positions. To guide preparation of DEA’s budget submission, DOJ budget officials said that DOJ provides instructions and usually guidance, and, according to DEA officials, the DEA Administrator also usually provides guidance. Although the documents sent to the agencies have varied from year to year, DOJ generally provides written planning guidance and instructions in April, about 17 months prior to the beginning of the budget year. However, officials said that informal guidance was usually available earlier. 
The DEA Administrator may also issue a budget call memorandum to all program managers listing his priorities. According to DEA and DOJ budget officials, for its fiscal year 1998 guidance, DOJ used an amended version of its fiscal year 1997 guidance. In addition, DEA budget officials said that the DEA Administrator sent out a budget call memorandum in February 1996 indicating his priorities. However, DEA budget officials said that they actually began to develop DEA’s fiscal year 1998 budget submission/staffing estimates in December 1995, prior to the guidance, and continued to work through May 1996. As part of this process, officials said that DEA budget staff considered the needs of field and headquarters offices, analyzed information on emerging drug trends, and held discussions with DEA program managers. Budget staff said that after canvassing the program managers, they presented the proposed budget submission and staffing request to the Administrator in March 1996. According to these staff, on the basis of the Administrator’s comments, they then prepared DEA’s final fiscal year 1998 budget submission/staffing request to DOJ, which DEA’s Executive Staff and the Administrator reviewed and approved in May. In June 1996, DEA sent its fiscal year 1998 budget request with estimates of additional staffing needs to DOJ. In its submission, DEA estimated a need for 989 additional positions, including 399 special agent positions and 590 support positions (e.g., diversion investigators, chemists, intelligence analysts, and professional and clerical staff). DEA identified, prioritized, and requested funding, including staffing enhancements, for six specific initiatives. Countering violent crime: This included staffing estimates (193 total/98 special agents) for the MET Program and for converting 8 provisional state and local task forces to program-funded status. Methamphetamine strategy: This initiative included estimated staffing enhancements (279 total/127 special agents), including positions to convert 7 provisional state and local task forces to program-funded status, to fund a comprehensive approach for attacking methamphetamine abuse. Southwest Border project: This included estimated staffing enhancements (212 total/96 special agents) to continue DOJ’s interagency strategy against drug trafficking on the Southwest Border. Domestic heroin enforcement: This initiative included estimated staffing enhancements (104 total/53 special agents) to continue implementation of DEA’s 5-year heroin strategy. International crime: This included estimated staffing enhancements (76 total/25 special agents) to (1) open DEA country offices in Tashkent, Uzbekistan; Vientiane, Laos; Abu Dhabi, United Arab Emirates; Lisbon, Portugal; and Managua, Nicaragua; (2) provide additional support to DEA offices in Mexico City, Panama City, New Delhi, Bangkok, and Hong Kong; and (3) establish an International Chemical Control Center in order to address the growing international aspects of drug production, transshipment, and trafficking. Investigative shortfalls: This initiative included estimates of resources and staffing enhancements (125 total) needed to replace lost asset forfeiture revenues, provide support staff for domestic field offices, and provide additional basic and refresher training for special agents and DEA support staff. The submission included justifications for each initiative and reflected DEA’s internal budget/staffing determination process. 
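The initiative-level estimates above can be reconciled with the submission totals by simple addition. The following Python sketch is purely illustrative: it uses only the position counts cited in this chapter and assumes, because none are specified, that the investigative shortfalls initiative included no special agent positions.

```python
# Illustrative tally of DEA's fiscal year 1998 initiative staffing estimates,
# recorded as (total positions, special agent positions) per initiative.
initiatives = {
    "Countering violent crime": (193, 98),
    "Methamphetamine strategy": (279, 127),
    "Southwest Border project": (212, 96),
    "Domestic heroin enforcement": (104, 53),
    "International crime": (76, 25),
    "Investigative shortfalls": (125, 0),  # no special agent positions specified
}

total_positions = sum(total for total, _ in initiatives.values())
special_agents = sum(agents for _, agents in initiatives.values())
support_positions = total_positions - special_agents

print(f"Total positions requested: {total_positions}")   # 989
print(f"Special agent positions:   {special_agents}")    # 399
print(f"Support positions:         {support_positions}") # 590
```

Under these assumptions, the support-position count of 590 matches the figure DEA reported in its submission.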
For example, on the basis of changing trends and input from the field, DEA’s fiscal year 1998 budget submission proposed a methamphetamine initiative, including domestic and international staff enhancements, to fund a comprehensive approach for attacking methamphetamine abuse. To justify its fiscal year 1998 estimates, DEA provided (1) DAWN data that indicated a steady increase in the number of methamphetamine-related emergency room episodes and deaths and (2) statistics indicating an increased use of and trafficking in methamphetamine and the proliferation of clandestine drug laboratories in both traditional and new locations. In accordance with the budget formulation process, DEA’s fiscal year 1998 budget submission was reviewed by DOJ’s Justice Management Division (JMD) budget staff and the Attorney General between June and August 1996. According to DOJ budget staff, as in other years, to assess DEA’s fiscal year 1998 enhancements and the corresponding justifications, the budget staff considered the (1) overall illegal drug situation at the time, including drug trends and patterns; (2) link between the specific request and ONDCP, DOJ, and DEA goals, strategies, and indicators; (3) facts and arguments used by DEA to justify the request; and (4) level of resources requested relative to the justified need, including prior year appropriations. As a result of their analysis of DEA’s fiscal year 1998 budget submission, DOJ budget staff estimated that DEA would need 771 additional positions, including 311 special agents, to support the 6 initiatives. This was 218 fewer total positions, including 88 special agent positions, than DEA estimated. Over half of the difference between DEA’s and DOJ’s estimates can be accounted for by DOJ’s not having included positions to convert certain state and local task forces to permanent funding status under the violent crime and methamphetamine initiatives. DOJ argued that (1) local entities must continue to contribute to these efforts to maintain the integrity of the intergovernmental relationship; (2) additional resources were available to these entities through other DOJ state and local grant programs; and (3) in the case of the methamphetamine initiative, further assessment was needed before conversions were made. DOJ budget staff recommended fewer positions than DEA for five of the DEA initiatives but concurred with DEA’s staffing estimates for the investigative shortfall initiative. These recommended changes in staffing estimates, including the justifications provided, are summarized below. Violent crime: In addition to not including positions to convert state and local task forces to permanent status, as previously discussed, budget staff recommended fewer additional agent positions for the MET Program. DOJ staff concluded that four new MET teams for deployment to areas with higher numbers of outstanding requests were sufficient to keep the waiting time for a MET deployment to acceptable limits. Methamphetamine: Most of the difference between DEA’s and DOJ’s staffing estimates for this initiative can be attributed to DOJ’s not including positions for state and local task force conversion. DOJ also did not recommend additional chemists, concluding that DEA had sufficient chemist resources; or an additional agent for demand reduction to increase public awareness of methamphetamine, given DEA’s other critical needs. 
Budget staff recommended 2 DEA clandestine lab regional training teams to teach 26 classes annually, rather than 4 teams to teach 40 classes annually. Southwest Border: DOJ budget staff did not recommend 5 additional chemists and 14 additional support staff, which were included in DEA’s submission. DOJ concluded that DEA had sufficient resources to meet these needs. Domestic heroin: Asserting that DEA had sufficient chemists to meet its desired staffing ratio, DOJ budget staff did not recommend the five chemists and two clerical support positions included in DEA’s estimates for this initiative. International: DOJ budget staff recommended 22 fewer total positions, including 6 fewer special agent positions, than DEA estimated for this initiative. More than half of these 22 positions (2 chemists, 4 foreign diversion investigators, and 6 support staff) were to establish an International Chemical Control Center. DOJ argued that DEA could use chemists from other places to meet these needs and use diversion investigators from key locations in other parts of the world to provide intelligence to the Center. DOJ also did not recommend opening new DEA offices in Abu Dhabi, United Arab Emirates, or Lisbon, Portugal, contending that DEA lacked “substantive rationale” for offices in these locations. DOJ also included no staffing enhancements in its estimates for Bangkok, Thailand, asserting that DEA had sufficient staffing resources to assist Thai police in collecting intelligence about the emerging methamphetamine problem, and no additional special agent for Panama, concluding that DEA had not provided “substantive reasons” for that agent. The DOJ budget staff review was followed on August 2, 1996, by the Attorney General’s hearing on DEA’s fiscal year 1998 budget submission. Three working days before the hearing, DOJ budget officials provided their analysis to DEA. According to DOJ budget officials, during the hearing DEA had the opportunity to appeal DOJ’s proposed changes in DEA’s submission and to provide additional information to justify its budget initiatives and enhancements before the Attorney General’s final decision. On August 12, 1996, the Administrator submitted an appeal to the Attorney General in which he requested reconsideration of some of the DOJ budget staff’s recommended changes. The appeal asserted DEA’s need for staff positions to convert certain state and local task forces, associated with its violent crime reduction and methamphetamine efforts, from provisional to program-funded status. It also addressed DEA’s need for resources for its clandestine laboratory cleanup efforts; items previously funded partially by asset forfeiture funds, including awards to informants and marijuana eradication efforts; and in-service training. Nevertheless, for fiscal year 1998, the estimates for additional staffing for DEA included in DOJ’s OMB submission were the same as those recommended by DOJ budget staff and previously discussed. Table 4.1 shows the differences between DEA’s estimates for additional staffing and those proposed to OMB by DOJ. DEA’s fiscal year 1998 budget submission was sent to OMB for review in September 1996 as part of DOJ’s budget request. According to OMB officials, an OMB budget examiner initially reviewed the DOJ budget submission, and the results were presented to and reviewed by the OMB policy officials. Generally, a complete set of budget proposals is presented to the president by early December for his approval. Subsequently, OMB staff prepares the agency passbacks. 
An OMB official described OMB’s approach to DOJ’s fiscal year 1998 budget submission as “flexible.” That is, as in other years, OMB made suggestions regarding specific DOJ activities, providing DOJ with an overall dollar level and specifying minimum funding floors for certain activities. OMB officials said that OMB did not make account-level recommendations, leaving those decisions to the Attorney General to ensure that the budget reflected DOJ’s priorities. By early December 1996, OMB had sent DOJ its fiscal year 1998 passback, in which it recommended an overall DOJ budget lower than DOJ’s submission. For DEA, the passback specified minimum funding for the methamphetamine strategy, the Southwest Border project, and the domestic heroin strategy, but it did not discuss specific staffing estimates or foreign enhancements. Prior to the passback, DEA had received its fiscal year 1997 appropriation, but we were unable to ascertain how it affected the passback. According to DOJ budget officials, DOJ reviewed OMB’s fiscal year 1998 DOJ passback to determine what could be funded according to the Attorney General’s priorities. They said that as a result of OMB’s specifying funding levels for DEA’s methamphetamine, Southwest Border, and heroin initiatives, no funds for the enhancements in other initiatives were available within the DEA budget submission. The DOJ budget officials said that they then sent DOJ’s interpretation of the OMB passback—which was based on the Attorney General’s priorities—to DEA. According to DOJ budget section officials, DEA developed its appeal to the OMB passback and then presented it to OMB, through DOJ, in early December 1996. DEA’s specific staffing-related appeals and outcomes were as follows: Methamphetamine initiative: DEA requested additional resources, including 131 positions. DOJ and OMB agreed to a slight increase in the funded amount to cover 74 positions. Southwest Border initiative: DEA sought 131 additional positions, including 90 special agents. DOJ and OMB agreed to increase the funded amount to cover the additional agents. OMB and DOJ officials reported that the method used to settle appeals varied from year to year. In fiscal year 1998, OMB and DOJ agreed on an overall spending level on appeal and on DOJ’s distribution of the increase, which provided DEA with funding to cover additional positions for both the methamphetamine and Southwest Border initiatives described above. Concurrent with departmental and OMB reviews of budget submissions, each agency with a drug mission is required by the drug budget certification process to submit a drug control budget to ONDCP. However, in 1996, due to the appointment of a new ONDCP Director and the reformulation and consequent late release of ONDCP’s drug strategy, the national drug budget certification process did not follow ONDCP’s established procedures and schedule. Specifically, ONDCP requested only one fiscal year 1998 budget submission in September 1996, coincident with the OMB deadline. On November 8, 1996, while OMB was reviewing DOJ’s budget submission, DOJ sent its budget request to ONDCP. On November 18, 1996, for consideration before finalizing DOJ’s fiscal year 1998 budget request, the ONDCP Director advised the Attorney General of two DEA program initiatives that did not appear to have been included in DOJ’s submission. 
The initiatives in question were (1) the continued expansion of vetted law enforcement units in key source and transit countries and (2) a request for additional resources for DEA’s Domestic Cannabis Eradication/Suppression Program. The Director’s letter did not specifically discuss staffing related to the initiatives. Final ONDCP budget certification was withheld until ONDCP reviewed DOJ’s final budget submission. According to DOJ and ONDCP officials, DEA received sufficient resources in its fiscal year 1997 appropriation to address the ONDCP Director’s concerns. Therefore, on the basis of ONDCP’s final review, the Director notified the Attorney General on February 7, 1997—1 day after the President submitted the fiscal year 1998 budget request—that the resources requested by DOJ were certified as adequate to implement the goals and objectives of the National Drug Control Strategy. The President submitted his fiscal year 1998 budget to Congress on February 6, 1997. As a result of the iterative process between DEA/DOJ and OMB over DEA staffing estimates and after consideration of the resources provided in DEA’s fiscal year 1997 appropriation, the President’s budget requested 345 new positions, including 168 special agents, for DEA domestic offices. As shown in table 4.2, the number of total positions requested was approximately one-half the number DOJ initially estimated in its OMB submission. The number of special agents requested was approximately 50 percent of the original DOJ estimates. Some of the differences between the DOJ estimates and the DEA staffing request in the President’s budget submission reflected changes recommended by DOJ or OMB, which were previously discussed. However, other revisions took into account DEA’s fiscal year 1997 appropriation. For example, according to DOJ officials, although DEA’s international crime initiative was not included in the President’s budget submission for fiscal year 1998, DEA was able to staff the Vientiane and Managua offices, included in that initiative, with fiscal year 1997 funds from the Source Country Initiative. In addition, because Congress provided almost twice the funds for the MET Program requested by DEA in fiscal year 1997, the program was fully funded (130 agents were provided) as of that year. Additional funds for the MET Program, which had been included in the fiscal year 1998 violent crime initiative, were no longer necessary. As shown in table 4.3, the conference committee recommended 531 additional positions, of which 240 were special agent positions. On the basis of the recommendations of the House and Senate Appropriations Committees, the conference committee also provided guidance as to how those positions were to be allocated, including a new Caribbean initiative. During the fiscal year 1998 appropriations process, the House Appropriations Committee recommended, and Congress approved as part of the conference committee’s report on DEA’s appropriation, a new Caribbean initiative, which was not included in the President’s budget. According to the House Appropriations Committee report, this initiative was proposed to address the increase in drug trafficking throughout the Caribbean. The initiative provided 60 additional DEA special agents for Puerto Rico, the Northern Caribbean, and south Florida. In addition, the conference committee recommended additional positions, above the President’s request, for the heroin and investigative shortfall initiatives. 
On the basis of the Senate Appropriations Committee’s recommendation, the conference committee’s report included 120 new positions, 24 of which were special agents (twice the number of total and special agent positions in the President’s budget request), to continue efforts to reduce heroin trafficking within the United States. The conference committee also identified the need for 85 additional intelligence analysts for the investigative shortfall initiative. The President signed DEA/DOJ’s fiscal year 1998 appropriation into law on November 26, 1997. After receipt of its annual appropriation, DEA is responsible for budget execution and the allocation of new staff. In addition to the guidance provided by Congress, DEA officials said they consider factors such as recently changing drug trends to determine that allocation. For fiscal year 1998, according to a DEA official involved in the allocation process that year, DEA’s Executive Policy and Strategic Planning, Operations Division, Financial Management Division, and Office of Resource Management staff prepared a draft allocation for the additional resources provided in DEA’s appropriation. The official indicated that among the factors considered in determining the allocation of additional staff were congressional direction; the number of agents added by Congress, broken out by mission and team; FMPs and any other written requests from the field divisions; DEA and DOJ strategies, initiatives, and priorities, including the Southwest Border and methamphetamine plans; actual hours worked by agents on particular types of cases; and drug trends that had emerged since the original fiscal year 1998 budget submission. The recommendations were sent to the DEA Administrator for review and final approval. DEA allocated 531 new positions, including 240 special agent positions, for the 5 initiatives included in its appropriation. As shown in table 4.4, DEA’s fiscal year 1998 staffing allocation followed Congress’ appropriations guidance. The process used for determining DEA’s staffing needs, as carried out in fiscal year 1998, was systematically linked to its budget formulation process. The DEA process was typical of and consistent with the processes and procedures that federal agencies are expected to follow under federal law and OMB budget guidelines. Moreover, the DEA process considered factors related to DEA’s ability to carry out its mission, including emerging drug trafficking trends, staffing requests from the field, the Administrator’s vision statement, and the SAC’s vision statement from each field office. Once Congress approved DEA’s fiscal year 1998 appropriation, DEA senior management systematically determined the allocation of the additional staff to headquarters and field offices, taking into consideration congressional guidance and such other factors as field office requests.
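The position counts cited in this chapter can be traced through each stage of the process with a simple tally. The following Python sketch is illustrative only; it uses the figures reported above and reproduces the differences and rough ratios discussed in the text.

```python
# Illustrative trace of DEA's fiscal year 1998 staffing figures at each stage,
# recorded as (stage, total positions, special agent positions).
stages = [
    ("DEA budget submission to DOJ", 989, 399),
    ("DOJ budget submission to OMB", 771, 311),
    ("President's budget request to Congress", 345, 168),
    ("Positions provided in DEA's appropriation", 531, 240),
]

for name, total, agents in stages:
    print(f"{name}: {total} total positions, {agents} special agents")

# DOJ's estimate was 218 fewer total positions and 88 fewer special agent
# positions than DEA's estimate.
dea_total, dea_agents = stages[0][1], stages[0][2]
doj_total, doj_agents = stages[1][1], stages[1][2]
print(f"DOJ reduction from DEA's estimate: {dea_total - doj_total} total, "
      f"{dea_agents - doj_agents} special agents")

# The President's request was roughly one-half of DOJ's initial OMB submission,
# for both total positions and special agent positions.
pres_total, pres_agents = stages[2][1], stages[2][2]
print(f"President's request as a share of DOJ's estimate: "
      f"{pres_total / doj_total:.0%} of total positions, "
      f"{pres_agents / doj_agents:.0%} of special agents")
```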
Pursuant to a congressional request, GAO reviewed the strategies and operations of the Drug Enforcement Administration (DEA) in the 1990s, focusing on: (1) what major enforcement strategies, programs, initiatives, and approaches DEA has implemented in the 1990s to carry out its mission, including its efforts to: (a) target and investigate national and international drug traffickers; and (b) help state and local law enforcement agencies combat drug offenders and drug-related violence in their communities; (2) whether DEA's goals and objectives, programs and initiatives, and performance measures are consistent with the National Drug Control Strategy; and (3) how DEA determined its fiscal year 1998 staffing needs and allocated the additional staff. GAO noted that: (1) during the 1990s, DEA has enhanced or changed important aspects of its operations; (2) DEA expanded its domestic enforcement operations to work more with state and local law enforcement agencies and help combat drug-related violent crime in local communities; (3) DEA implemented an investigative approach domestically and internationally, focusing on intercepting the communications of major drug trafficking organizations to target the leaders and dismantle their operations; (4) DEA started participating in two interagency programs to target and investigate major drug trafficking organizations in Latin America and Asia; (5) DEA changed its foreign operations by screening and training special foreign police units to combat drug trafficking in certain key foreign countries; (6) DEA has significant responsibilities for the drug supply reduction portion of the Office of National Drug Control Policy's (ONDCP) National Drug Control Strategy; (7) DEA's strategic goals and objectives, and its enhanced programs and initiatives, in the 1990s have been consistent with the National Drug Control Strategy; (8) however, DEA has not developed measurable performance targets for its programs and initiatives that are consistent with those adopted for the National Strategy; and (9) as a result, it is difficult for DEA, the Department of Justice (DOJ), Congress, and the public to assess how effective DEA has been in achieving its strategic goals and the effect its programs and initiatives in the 1990s have had on reducing the illegal drug supply.
Performance partnerships, as we reported in December 2014, are a type of hybrid approach to grant consolidation in which grant recipients can obtain flexibility to use funds awarded across multiple federal programs in exchange for greater accountability for results. Figure 1 provides an overview of the performance partnership model. Grant consolidation can create opportunities to eliminate federal programs that are overlapping or outdated, or for which the balance between costs and benefits received does not (or no longer) justify federal spending. According to prior research by the former U.S. Advisory Commission on Intergovernmental Relations, grant consolidations are generally suitable when categorical programs are too small to have much impact or to be worth the cost of administration, or when multiple programs exist in functional areas that have a large number of programs (including health, education, and social services), or where there is fragmentation (including justice, natural resources, and occupational health and safety). Grant consolidations generally take either a block grant or a hybrid approach. A block grant approach is usually broad in scope, intended to increase state and local flexibility, and generally gives recipients greater discretion to identify problems or to design programs addressing those problems using funding from the block grant. A hybrid approach, such as a performance partnership, can consolidate a number of narrower categorical programs while retaining strong standards and accountability for discrete but related federal performance goals. Since the 1990s, the federal government has taken steps to explore and establish performance partnerships. For example, the National Performance Review (NPR) identified performance partnerships as a tool for helping federal agencies reform the existing federal grant system, which it stated, among other issues, featured too many funding categories, an emphasis on remediating rather than preventing problems, and no clear focus on measurable outcomes. It noted that performance partnerships could improve federal grant making in situations in which the federal government intends to deliver services at the state or local level, agrees with state or local partners on goals and objectives, and in which progress toward those goals and objectives can be measured. In February 1995, the President’s Budget for fiscal year 1996 proposed 6 performance partnerships spanning 7 federal agencies—the Departments of Agriculture, Education, HHS, HUD, DOL, and Transportation, and EPA—that it stated were aimed at combining funding streams, eliminating overlapping authorities, and turning agencies’ focus to outcomes as the basic measure of success. In 1996, Congress provided EPA authority to create PPGs. More recently, the Performance Partnership Pilots for Disconnected Youth were authorized in January 2014. For our December 2014 report, we determined that EPA’s PPGs and the disconnected youth pilots were the only 2 existing federal performance partnerships, and for this report we confirmed that they remain the only ones authorized to date. According to EPA, the relationship between EPA and the states has long been complex, due in part to the division of roles and responsibilities under federal environmental statutes. 
Prior to EPA’s creation in 1970, states provided the majority of environmental management controls, such as establishing standards for the amount of pollutants that can be released into air or water and developing public health and natural resources regulations. Subsequently, EPA became a partner with states and localities in environmental management. Most major federal environmental statutes, including the Clean Water Act, permit EPA to allow states under certain circumstances to implement key programs and to enforce their requirements. Several efforts to explore and improve relationships between EPA and the states led to the creation of NEPPS and PPGs. In 1993, EPA and the states convened the Task Force to Enhance State Capacity to generate ideas for improving their partnership. The task force reported that new federal environmental statutes had increased the environmental management responsibilities being borne by states at a time when they were facing declining resources. In addition, the task force reported that EPA and the states faced difficulty in working together on issues of day-to-day program management, which strained their relationship. The task force made a number of recommendations, including that EPA and the states establish a new framework and policy for their relations and a joint process for strategic planning and the integration of both sides’ priorities. In May 1995, EPA and the states established NEPPS to address the task force’s recommendations. As we reported in December 2014, NEPPS is a performance-based system designed to direct scarce public resources toward improving environmental results, allow states greater flexibility to achieve those results, and enhance accountability to the public and taxpayers. A key element of NEPPS, upon its establishment, was EPA’s commitment to give states with strong environmental performance greater flexibility and autonomy in running their environmental programs. PPGs accounted for a portion of the $1.08 billion in environmental program grants EPA awarded in fiscal year 2016. As we reported in February 2008, while most youth successfully transition to adulthood, some become disconnected from school and work, and experience challenges in making this transition. Their disconnection may result from incarceration, aging out of foster care, dropping out of school, or homelessness. Some of these youth are more likely than others to remain low-income and lose jobs during economic downturns, and to engage in criminal activity, antisocial behavior, and teenage parenting. Direct services intended to assist youth in transitioning to adulthood are provided at the local level with the support of federal, state, and local governments, and private funding sources. A range of local entities, such as community-based organizations—which are generally non-profit entities that provide social services—and charter schools, in urban and rural communities nationwide, help to provide such services. Multiple federal agencies play a role in providing funding and assistance to local programs that serve disconnected youth, which can create challenges for local service providers. In February 2008, we reported that the White House Task Force for Disadvantaged Youth identified 12 federal agencies that funded over 300 programs to assist local communities in serving disadvantaged youth in fiscal year 2003. 
In conducting that work, we also interviewed the directors of 39 local programs serving disconnected youth, and those whose programs received multiple federal grants from multiple federal agencies told us they experienced difficulties in working across varying reporting requirements, funding cycles, and eligibility requirements. The directors also reported experiencing challenges working across varying program goals and sharing information about their clients who participate in programs supported by multiple federal grants. The Performance Partnership Pilots for Disconnected Youth seek to identify cost-effective strategies for providing services that can address these types of challenges and achieve better results by making better use of budgetary resources. Although implementation of the first round of pilots began in 2015, their genesis dates back to early 2011. Figure 2 identifies key events in the development and implementation of the disconnected youth pilots. According to federal officials involved in the pilots, the concept for the disconnected youth pilots came in response to a February 2011 Presidential memorandum. It directed federal agencies to work with state, local, and tribal governments to identify and develop strategies for eliminating administrative, regulatory, and legislative barriers to achieving results in federally funded programs and for increasing access to flexibilities needed to produce the same or better outcomes at lower cost. Officials from Education, HHS, and DOL told us that following the memorandum’s issuance, representatives from their agencies met with representatives from state, local, and tribal governments to discuss policy areas in which they thought additional flexibilities could improve outcomes. They identified programs for disconnected youth as an area that would benefit from such flexibilities. In 2012, the Administration and agencies took several steps aimed at better coordinating and integrating programs focused on disconnected youth. In February 2012, the President’s Budget for fiscal year 2013 requested authority for a new performance partnership pilot initiative to test approaches to improve outcomes for disconnected youth. Shortly thereafter, in March 2012, OMB, Education, HHS, HUD, DOJ, and DOL established the Interagency Forum for Disconnected Youth with the goal of improving outcomes for disconnected youth through enhanced interagency and intergovernmental collaboration. In June 2012, Education published a request for information in the Federal Register seeking ideas and information on effective approaches for improving outcomes for disconnected youth. The Interagency Forum on Disconnected Youth used the responses to the request for information to develop initial design considerations for the disconnected youth pilots, according to documentation of the design considerations. The first round of the Performance Partnership Pilots for Disconnected Youth was authorized in the Departments of Labor, Health and Human Services and Education and Related Agencies Appropriations Act for fiscal year 2014. Enacted in January 2014, it authorized federal agencies that received appropriations under the act to select and implement a round of up to 10 pilots designed to improve outcomes for disconnected youth; the pilots may run for 5 fiscal years (through fiscal year 2018, which ends September 30, 2018). 
The act defines disconnected youth as individuals between the ages of 14 and 24 who are low-income and either homeless, in foster care, involved in the juvenile justice system, unemployed, or not enrolled in or at risk of dropping out of an educational institution. The pilots are to involve 2 or more federal programs administered by 1 or more federal agencies. The act provides agencies authority to use discretionary funding made available in the act and waive statutory, regulatory, or administrative requirements related to the use of that funding. The agencies involved in the first round of pilots—Education, HHS, DOL, CNCS, and IMLS—issued a request for public comment on the pilot application process in July 2014 that sought feedback on information applicants should include in their applications, criteria agencies should use in evaluating applications, and technical assistance for entities preparing applications. Officials from Education told us that the agencies incorporated public comments from the request in a notice inviting applications, which they subsequently issued in November 2014. The federal agencies designated 9 pilot locations as finalists for the first round in September 2015 and publicly announced the locations in October 2015 (see fig. 3). Subsequent appropriations laws authorized 2 additional rounds of disconnected youth pilots and broadened the scope of the effort. Second Round. In December 2014, a second round of up to 10 locations for disconnected youth pilots was authorized, again with a 5-year timeframe for implementation (through fiscal year 2019), and agencies were authorized to use funds made available in the fiscal year 2015 appropriations to participate in previously authorized pilots. Education published a notice of proposed priorities, requirements, definitions, and selection criteria for the second and future rounds of pilots in the Federal Register in October 2015. The notice proposed additional priorities for projects serving specific high-need subpopulations of disconnected youth, changed application requirements to reduce burden on applicants, and asked for comments on how federal agencies could improve future pilot competitions. In April 2016, the agencies participating in the pilots published final priorities, requirements, definitions, and selection criteria. That same month, they published a notice inviting applications for the second round of pilots. The notice established a June 2016 deadline for application submissions, but in July 2016 the agencies published a notice reopening the application process. Agencies took this action to allow applicants additional time to prepare and submit their applications. According to Education officials, in September 2016 federal agencies designated 1 second round applicant as a pilot finalist. The agencies expect to announce the pilot publicly once the agencies and the finalist have signed a performance partnership agreement. Third Round. The third round of disconnected youth pilots, again consisting of up to 10 locations with a 5-year timeframe (through fiscal year 2020), was authorized in December 2015, and agencies were authorized to use funds made available in the fiscal year 2016 appropriations act to participate in previously authorized pilots. DOJ and HUD were authorized to participate in this round of pilots. 
In addition, this authorization established that new pilots selected for the second round using fiscal year 2015 funds and the subsequent third round must include communities that have recently experienced civil unrest. In August 2016, the agencies published a notice inviting applications for the third round of pilots. The application period closed in October 2016, and in January 2017 the agencies designated 6 applicants as third round pilot finalists. We identified 4 key characteristics that PPGs and the disconnected youth pilots share: 1. documented agreement outlining goals, roles, and responsibilities; 2. flexibility in the use of funds across multiple federal programs; 3. additional flexibilities, such as expanded program participant eligibility or streamlined reporting requirements; and 4. accountability for results. The following sections describe each of the shared key characteristics, providing illustrative examples from selected states with PPGs and selected pilot locations and any benefits or challenges associated with these key characteristics as described by participants. More detailed information about how the key characteristics are exhibited in the 2 initiatives and additional illustrative examples are contained in appendix II (PPGs) and appendix III (disconnected youth pilots). Federal agencies and non-federal grant recipients generally document in an agreement what is entailed by their partnership. The document establishes the various goals the partners seek to achieve through their partnership. It also lays out the roles and responsibilities of each partner. A PPG generally involves an EPA regional office and a state agency, such as a state environmental, health, or agricultural agency. Figure 4 provides the general structure of this partnership. EPA and state agencies define the scope of their partnership in a PPG work plan. For programs authorized under federal environmental statutes, EPA generally is responsible for establishing program policy and guidance and oversight, and states generally are responsible for carrying out day-to-day program operations. Therefore, PPG work plans, like other EPA program grant work plans, identify an agreed-upon set of planned work activities the state agency will undertake and their timeframes for completion, as well as information about the EPA strategic goals and objectives that the activities are expected to help meet. EPA guidance states that a PPG work plan should result from negotiations between EPA and state program managers and staff and reflect joint planning, priority setting, and mutual agreement between the 2 sides. For example, the work plan for state fiscal year 2015 for EPA’s and the California Department of Pesticide Regulation’s (DPR) PPG defined their partnership for a PPG that spanned state fiscal years 2013 to 2016. The document identified program areas in which DPR would undertake work. These included areas such as enforcing pesticide laws and ensuring worker safety from pesticides. DPR linked each of these program areas to the specific EPA strategic plan goals and objectives they supported. For instance, DPR’s work in the area of enforcing pesticide laws was linked to EPA’s strategic plan goal of protecting human health and the environment by enforcing laws and ensuring compliance, as well as its related strategic objective of enforcing environmental laws to achieve compliance. The work plan also identified specific work activities DPR planned to complete by the end of the state fiscal year. 
For example, in the program area of enforcing pesticide laws, DPR agreed to conduct 182 oversight inspections of the use of pesticides in agricultural operations. Officials from EPA and state agencies involved in the PPGs in our review cited the way the 2 sides work together to develop PPG work plans as a benefit, noting that the partnerships have strengthened their collaborative relationships. For example, officials from EPA Region 2 and the New York State Department of Environmental Conservation (DEC) said that EPA’s National Program Manager Guidance—biennial guidance from EPA program offices that establishes priorities and key actions to accomplish—serves as a framework for the activities that DEC will conduct through the PPG. Officials from Region 2 and DEC annually discuss how DEC priorities can be addressed within the framework and what goals and targets are appropriate for identified priorities. Officials from the New York DEC told us that the good working relationship they have with EPA Region 2 officials allows them to work together effectively to adjust or reconcile competing priorities when unexpected challenges arise. In contrast with the 1-on-1 partnerships in PPGs, first round disconnected youth pilots involve 2 or more federal partners and, in most cases, multiple grant recipients whose joint application was selected to participate in the pilot. The general structure for this partnership is shown in figure 5. The federal and non-federal organizations involved in each first round disconnected youth pilot defined the scope of their partnership in a performance partnership agreement. These agreements establish the terms and conditions under which the federal and non-federal partners will participate in the pilot and identify the specific outcomes the 2 sides will seek to achieve. The roles and responsibilities that the partners assume in developing and implementing the agreements are specific to the initiative. The federal agencies involved in developing the initiative, through a separate interagency agreement, established the following roles for federal partners in individual pilots: Lead agency: The lead agency is responsible for managing the performance partnership agreement. OMB designated Education as the lead agency for all 9 first round pilots. During the negotiation of the first round partnership agreements, Education coordinated on behalf of, and in partnership with, all federal agencies involved and worked with non-federal partners to finalize planned pilot goals and related performance measures. In addition, Education, as the lead agency, provides and oversees start-up grants to pilots. Consulting agency: The consulting agency leads pilot monitoring on behalf of the involved federal agencies. It does so by providing feedback on pilot performance reporting and facilitating communication among federal agencies and non-federal partners. Participating agency: The participating agency provides support to the lead and consulting agencies by, as appropriate, providing feedback on pilot performance reporting and assistance to the other federal agencies and non-federal partners to address any implementation issues. Non-federal partners assume roles and responsibilities established in the performance partnership agreements. The partnership agreements for each of the first round pilots designate a state, local, or tribal government entity as the pilot lead. 
The pilot lead is responsible for ensuring that the pilot is carried out in accordance with applicable federal requirements and oversees the proper use of all federal funds. The agreement also identifies any additional non-federal partners, such as another government entity or a non-profit community organization, involved in the pilot and their roles and responsibilities. For example, in the Chicago pilot, the federal partners are Education as the lead agency, HHS as the consulting agency, and DOL as a participating agency. The non-federal partners, and their roles, are: The Chicago Department of Family and Support Services, the pilot lead, works to connect Chicago residents and families to resources that build stability, support their well-being, and empower them to thrive. The Chicago Cook Workforce Partnership, a pilot partner, will consult with the Department of Family and Support Services in implementation and oversight of the pilot. The organization is a collaborative effort between Cook County, Illinois, and the City of Chicago designed to align the 2 entities’ efforts in delivering services under the Workforce Innovation and Opportunity Act (WIOA). While developing their performance partnership agreements, the federal and non-federal partners in each first round pilot worked to identify the pilot’s intended outcomes. In the partnership agreements, these intended outcomes take the form of quantitative goals and measures. Federal agencies require that at least 1 set of goals and measures address educational outcomes and a second set address employment outcomes. The 4 pilots in our review established education and employment goals and measures tied to their particular service interventions. The Oklahoma pilot, for example, is structured to help youth with foster care experience in the Oklahoma City Public Schools complete high school, attend college, and enter the workforce. Its partners established an education goal for 80 percent of youth who complete at least 6 months in the program to attain a high school diploma or its equivalent. The partners established an interim measure—that 85 percent of participants will be absent from school for 15 days or fewer during the school year—to track progress toward the goal. According to the pilot’s application, increased school attendance for participants is likely to lead to an improved high school graduation rate for them. As with PPGs, officials from federal and non-federal partners involved in disconnected youth pilots told us in our interviews that their partnerships have strengthened their collaborative relationships with each other. For example, officials from the pilot lead for the Chicago pilot told us that the non-federal partners worked closely with HHS, the pilot’s consulting agency, and DOL, a participating agency, to gain a better understanding of the types of flexibilities they could use to implement the pilot. In addition, partners told us that the partnerships have strengthened collaboration among multiple non-federal partners working together at the state, tribal, or local level to implement disconnected youth pilots. For instance, officials from the pilot lead for the Oklahoma pilot told us that they have used the pilot development process to convene a wide range of organizations involved in addressing the needs of youth in foster care in Oklahoma City, the pilot’s target population, to establish a network broader than just those officially in the pilot. 
This convening enabled the pilot lead to identify organizations with which it previously had not worked that could contribute to improved outcomes for foster youth and bring them into the new network. However, federal and non-federal partners in disconnected youth pilots also told us that at times the multiple-partner structure of the pilots, along with their new and unique nature, has caused complications and delays in the pilots’ design and implementation. Officials we spoke with from several of the federal agencies involved in developing the initiative after its 2014 authorization said that, given their lack of familiarity with this kind of legal provision, they had to spend time and effort to reach a common interpretation of the provision and how it could be implemented. This resulted in a longer than usual process for the agencies to develop and issue the notice inviting applications for the first pilot round. The first pilot round was authorized in January 2014, and Education released the notice inviting applications in November 2014. Federal and non-federal partners involved in pilots in our review told us that there were additional challenges related to negotiating and finalizing the partnership agreements, given the numerous parties involved. This led to additional time being spent on finalizing the agreements. The agreements for 6 of the 9 pilots were signed by their non-federal partners in December 2015 and January 2016, with the other 3 signed between February and April 2016. Non-federal partners in the selected pilots also identified challenges in developing and coming to agreement on the goals and measures. Officials from the Eastern Kentucky pilot told us that it took time to work with federal agencies to agree on the goals and measures that were included in the final partnership agreement. The non-federal partners wanted to set goals for pilot participants to improve their academic performance to the average level of students in the Kentucky Highlands Promise Zone. Federal partners, however, requested that those goals be set at the average level of students in all of Kentucky—a higher average than that of students in the Promise Zone alone. The partners discussed the issue and agreed to establish goals at the higher statewide level, with recognition that these were stretch goals. Despite these challenges, several non-federal partners also told us that they see benefit in being able to establish goals and measures tailored to interventions, and not having to use standard federal performance measures, which may not always be useful in determining outcomes among their target populations. For example, officials from the Ysleta del Sur Pueblo pilot told us that they established performance measures for their pilot that will allow them to better determine the educational and employment outcomes of the tribal youth they plan to serve. 
PPGs permit state agencies to request that funding they receive from 2 or more EPA program grants be combined into a single award. This is intended to enable state agencies to, among other things, consider trade-offs across the breadth of their environmental program funding and exercise flexibility to direct resources to their most pressing priorities. Specifically, once a state agency has requested and received selected EPA program grants in a PPG, it can choose to use the funds to support any activity that is eligible under at least 1 of the grants included in the PPG. PPGs also streamline administrative requirements so that state agencies can realize cost savings through reduced administrative burden in areas such as grant applications, cost sharing, and financial reporting. The PPGs we examined in selected states varied in how state agencies chose to receive combined EPA program grant funding and exercise additional flexibilities. We selected states in which environmental agencies received either a small or a large number of grants combined in their PPGs. On the small end, we selected New York, whose DEC received funding from 3 grants, and California, whose DPR received funding from 4 grants in 1 environmental area (water and pesticides, respectively) in their PPGs in fiscal year 2016. On the large end, we selected Alabama, whose Department of Environmental Management (DEM) received funding from 9 grants, and Utah, whose Department of Environmental Quality (DEQ) received funding from 10 grants across multiple environmental program areas in their PPGs in fiscal year 2016. Figure 6 shows the grants in DEQ’s PPG in fiscal year 2016. Details on the PPGs in Alabama, California, and New York are included in appendix II. According to EPA guidance on PPGs, states can take different approaches to exercising their flexibility to direct resources to their most pressing priorities, so long as the funds support an activity that is eligible under at least 1 of the grants included in the PPG. A state can propose using this funding to pool resources from multiple programs consolidated into the PPG to implement projects or initiatives that cross traditional program barriers. For example, a state can propose to conduct inspections to assess compliance across air, water, and hazardous waste management requirements if it has included grant programs in those areas in its PPG. A state can also, based on its environmental priorities, propose increasing resources and effort in 1 program area while decreasing resources and effort in a second program area. For example, if a state has identified that its needs in addressing water pollution are greater than its needs in addressing air pollution, and it has included relevant water and air grant programs in its PPG, it can propose to strategically increase resources and effort for water pollution activities while decreasing them for air pollution activities. States that intend to exercise programmatic flexibilities must explain the reasons for and expected benefits of the flexibilities in their PPG application. State agencies involved in PPGs also see great benefit in their ability to make use of flexibilities that reduce administrative burden related to grant applications, cost sharing, and financial reporting, according to the state agency officials whom we interviewed. Grant applications. A state agency can submit a single application covering all of the grants it is seeking to consolidate in its PPG rather than a separate application for each. 
Officials from the Alabama DEM told us that the streamlined PPG grant application requirement has allowed the DEM to submit a single PPG application rather than 9 individual grant applications, which it had to do prior to adopting the PPG. This has reduced the amount of administrative work that DEM staff must complete, thereby allowing them to focus on other activities. Cost-sharing. Certain EPA program grants require state agencies to provide a portion of program costs in order to receive the grant. Some grants require states to provide a certain percentage of total expenditures under the grant, known as a match requirement, while others require states to spend non-federal funds for work conducted under the grant in an amount at least equal to the amount spent in a previous year, known as a maintenance of effort requirement. When a state combines grants in a PPG, it does not have to meet the individual cost-share requirements of the grants included in the PPG; instead, the state's cost-share for a PPG is not less than the sum of the minimum amounts required under each of the underlying grants included in it. According to EPA guidance on PPGs, the ability to meet cost-sharing requirements in the aggregate can be valuable when a state has more than adequate resources to meet the match required of 1 program included in the PPG but not enough for a second included program. The state can use excess match resources to cover the program that cannot meet its match requirement. Officials from the Utah DEQ said that the ability to meet match requirements in the aggregate is one of the most useful aspects of a PPG. They explained that multiple grants they include in their PPG have match requirements. Because of the DEQ's line-item budget structure, the officials stated that it would be challenging for them to meet the match requirements of specific program grants if they were not included in a PPG. However, because the PPG allows match requirements to be met at an aggregate level, DEQ's expenditures in particular program areas can be added together to meet the overall match requirement. Financial reporting. A state agency can report on expenditures within the PPG in the aggregate, covering all grants consolidated in the PPG, rather than for each grant individually. Officials from each of the states in our review reported that they benefit from streamlined financial reporting. For example, officials from the Alabama DEM said that they provide 1 annual financial report for their PPG to EPA rather than the 9 that they were required to provide for individual grants before they adopted the PPG. They stated that this change has significantly reduced DEM's administrative burden. Officials from EPA regional offices and state agencies we interviewed told us that existing organizational silos within EPA and state agencies can limit a state's willingness and ability either to include EPA program grants across multiple environmental areas in a PPG or, in cases in which states have done so, to take full advantage of available funding flexibilities. For example, although officials from the New York DEC told us that their agency has been able to use Water Pollution Control and Water Non-Point Source Management funds to create a more integrated, comprehensive clean water program by including them in a PPG, they said that their agency's structure makes it difficult for the agency to include programs from other environmental areas in a PPG and thereby take advantage of additional programmatic flexibility.
They explained that their agency organizes its operating divisions by environmental areas, such as water and air. Managing a cross-area PPG would require coordination across divisions, which in turn would require the divisions to change the way they operate. The costs associated with making the changes necessary to administer such a PPG, the officials stated, would likely negate the benefits of the potential additional programmatic flexibility. Furthermore, officials from the Alabama DEM, which receives a PPG combining program grants across environmental areas, told us that once the agency receives a PPG from EPA it generally distributes the funding associated with each of the underlying grants to the program offices responsible for implementing them. In essence, the agency reverses the combining of the funding at the state level and uses it in much the same way it would if it received the funding from the underlying grants outside of a PPG. The officials explained that their agency uses PPG funding for the individual program-specific activities for which the funds were originally approved because officials within the relevant program offices at both EPA Region 4 and DEM have wanted to maintain control over the program funds they are responsible for managing and overseeing. Similar to PPGs, the disconnected youth pilots enable non-federal partners to combine funds from federal agencies' programs and obtain additional flexibilities, but we found that for the disconnected youth pilots we reviewed, these flexibilities were generally used to tailor service interventions to the specific needs of their target populations rather than to reduce administrative burden. Specifically, the authorization for the first round pilots allows for the combining of discretionary funding that Education, HHS, DOL, CNCS, and IMLS received through the fiscal year 2014 appropriations act, and the waiver of statutory, regulatory, or administrative requirements affecting target populations, as proposed by the grantees, to carry out the pilots. This authorization included 2 safeguards on the use of these flexibilities, both of which require written determinations by the head of an agency. First, an agency can participate in a pilot and combine funds only after its head provides a written determination that the agency's participation will not result in denying or restricting the eligibility of individuals for any of the services that are funded by the agency's programs or funds being used in the pilot, and that vulnerable populations who receive such services will not be otherwise adversely affected by the agency's participation. In the notice inviting applications for first round pilots, applicants were advised that where a program's funds are not suitable for combining (referred to as "blending") in a pilot given these constraints, the applicant may nevertheless consider how to coordinate (referred to as "braiding") such funding in a pilot to promote more effective and efficient outcomes even though the funds would maintain a separate identity and remain subject to the program requirements for which the funds were appropriated.
Second, an agency also can waive program requirements associated with funds being used in a pilot, but only after its agency head issues a written determination that the granting of such waivers (1) is consistent with the statutory purposes of the underlying federal program and other provisions of the pilot authority, including that individuals will not be denied or restricted eligibility for services, (2) is necessary to achieve the outcomes of the pilot and is no broader in scope than is necessary to do so, and (3) will result in either realizing efficiencies (by simplifying reporting or reducing administrative barriers) or increasing the ability of individuals to obtain access to services. In addition, for the first pilot round federal agencies awarded separate start-up grants to provide funding of up to $700,000 to each pilot to finance evaluations, capacity building, technical assistance, and other related activities to support the pilot. According to officials from OMB, funds available for these purposes from CNCS, DOL, and Education were used for these start-up grants. Education officials told us that start-up grants also were in part intended to provide an incentive to non-federal partners to participate in and implement the disconnected youth pilots. Non-federal partners stated that the start-up grants were a key incentive to become involved with the pilots, as they represent a significant amount of new funding for them, which could help them work across traditional program lines, among other things. The first round pilots vary in the extent to which they use combined federal funds and waivers from selected program requirements in their pilots, as noted below and further illustrated in appendix III. Of the 9 pilots, 2 (including Ysleta del Sur Pueblo) are combining all of the federal funds from all of the federal programs they are including in their pilots; 2 (including Eastern Kentucky) are combining federal funds from some of the federal programs they are including while coordinating the use of federal funds from other federal programs; and 5 (including Chicago and Oklahoma) are not using any combined funding but are instead coordinating the use of federal funds from multiple federal programs. Non-federal partners from the 2 pilots selected in our review that are combining all or some federal funds—Eastern Kentucky and Ysleta del Sur Pueblo—told us that they consider the ability to combine funds to be a benefit, as it allows them to implement more effective programs and services for disconnected youth. In addition, the use of combined funds reduces non-federal partners' financial reporting burden. To ensure accountability for the proper use of combined funds and start-up grants in pilots, non-federal partners provide financial reports on their use of these funds to Education as the lead agency. In contrast, when a pilot has coordinated funds, the relevant non-federal partner(s) will report on the use of those funds separately, as prescribed by the originating agency per its normal guidelines for the program. Ysleta del Sur Pueblo officials told us that they see this reporting process as a key benefit of the tribe's pilot. Officials said the tribe was motivated to apply to become a pilot site to gain administrative benefits such as reducing the amount of reporting they normally would have provided to CNCS and IMLS—the 2 agencies with program funds being used in the pilot.
Since the tribe is combining all funding involved in the pilot, it instead reports to Education on its use of those funds. In addition, 8 of the 9 first round disconnected youth pilots, including the 4 pilots selected for this review, requested and were granted waivers of selected requirements for at least 1 of the federal programs included in their pilots. These waivers provide pilots with additional flexibilities to tailor allowable activities, participant eligibility, and reporting requirements to better meet the needs of disconnected youth, according to the notice inviting applications for the first pilot round. Furthermore, non-federal partners in the 4 pilots in our review told us that, among other things, the waivers enable them to change eligibility requirements or the allowable use of selected federal funds, which will allow them to implement innovative approaches tailored to disconnected youth. For these reasons, all of the federal and non-federal partners with whom we spoke told us that the flexibilities possible through the pilots, namely the ability to combine funds and obtain waivers of selected federal program requirements, are among the pilots' biggest benefits. The Eastern Kentucky pilot illustrates variation in the use of combined and coordinated funding and waivers. As illustrated in figure 7, to improve the academic performance of disconnected youth, the pilot is using combined funding across DOL's WIOA Title I Youth program and Education's Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) and Promise Neighborhoods programs and coordinating the use of funding from Education's Full Service Community Schools program. The Eastern Kentucky pilot was also granted 5 waivers that allow the non-federal partners to change eligibility requirements and the allowable use of certain federal funds. Officials from Partners for Education at Berea College (PFE), a pilot partner, told us that the various federal programs serving disconnected youth in Eastern Kentucky were focused on different aspects of the disconnected youth population. The non-federal partners had a difficult time providing comprehensive services to disconnected youth in the region since they had to serve distinct segments of the population with individual federal grants. Officials said that the waivers should help the non-federal partners expand the reach of their services. One of these waivers is related to a requirement in Education's GEAR UP program, which is designed to help low-income students prepare for and succeed in post-secondary education. Traditionally, GEAR UP grant recipients can use the funds to provide mentoring, outreach, and other services to students for 6 years, usually between seventh grade and the completion of high school. They can also provide those services for a seventh year as long as during that year the student is enrolled in post-secondary education. The officials from PFE told us that the waiver allows them to provide services to youth who, during their seventh year, are not enrolled in post-secondary education, to help those youth connect to post-secondary education. Officials from the pilot told us that using some combined funds allows them to hire staff who are not tied to a specific program and therefore can work across the programs involved in the pilot. In addition, the officials from PFE told us that the start-up grant is allowing the non-federal partners to establish a data collection system to share information about the participants across partners.
This would allow them to use those data to identify effective strategies and support the evaluation of the pilot. The waivers, the pilot officials added, should help the non-federal partners expand the reach of their services. Federal and non-federal partners involved in disconnected youth pilots told us that in some cases the 2 sides faced challenges in coming to agreement on the use of funds in pilots. For example, the non-federal partners in the Chicago pilot initially proposed combining Head Start funds from HHS with WIOA Title I Youth funds from DOL. According to documentation of HHS’s review of this request, HHS denied the request because officials were concerned that combining Head Start funds could adversely affect vulnerable populations—one of the restrictions for combining funds in the pilot authorization—by potentially diverting funds away from services for children in Head Start toward activities primarily targeted at teenage and young adult mothers, who are primarily served by the WIOA program. HHS agreed that the pilot could coordinate Head Start funds, as is illustrated in figure 8, since this would ensure that the funds retain their cost allocation requirements and therefore allow HHS officials to ensure accountability for the funds’ use to support services for children in Head Start. Performance partnerships use performance reporting to ensure accountability for results. Non-federal partners periodically report to federal partners on their progress towards the goals established in the partnership document. As was described in the prior section, non-federal partners also ensure accountability for the use of funds through financial reporting processes. To monitor progress toward the work activities and goals and measures established during performance planning and included in the work plans discussed above, federal and non-federal partners in PPGs engage in performance reporting. Related to PPGs, we reported in July 2016 that, according to EPA policies and officials, after EPA approves a work plan for an EPA grant, grantees generally submit information on their progress and results to EPA in 2 ways: (1) performance reports, which are generally written and describe the grantees’ progress toward the planned grant results in their work plans and (2) program-specific data, which is generally numeric and which grantees electronically submit on certain program measures that EPA tracks in various program databases. Performance reports. These reports describe the grantees’ progress toward the planned grant results in their work plans, such as using grant funds to provide technical assistance to local officials. EPA grantees are to submit these reports at least annually. EPA policies include general guidelines about what performance reports should include, such as a comparison between planned and actual grant results, but allow the frequency, content, and format of performance reports to vary by program and grant. According to EPA officials, EPA project officers monitor these reports to review grantee progress toward agreed-upon program results. PPGs enable state agencies to submit a single performance report for all the programs included in their PPGs, according to EPA and state agency officials with whom we spoke. We reviewed the most recently available end-of-year performance reports that the Utah DEQ submitted to EPA for federal fiscal year 2014 and the California DPR submitted to EPA for state fiscal year 2014-2015. 
For its report, DPR provided information about how it addressed each planned activity in its work plan. For example, the report stated that DPR conducted 253 oversight inspections of the use of pesticides in agricultural operations, exceeding its target of 182 oversight inspections. According to information in the report, after receiving it from the department, EPA reviewed the material and provided comments as needed. Program-specific data. Grantees electronically submit data on certain program measures, such as the number of hazardous waste violations issued or the acres of brownfield properties made ready for reuse, which EPA tracks in various program databases. According to EPA policy and program officials, program officials monitor these data to track and report program accomplishments, at the regional and agency levels, and, as applicable, to assess the agency's progress in meeting its performance targets in support of agency strategic goals. According to EPA officials, generally grantees or EPA program officials—depending on the database—are to enter grant results, such as the number of enforcement actions, into EPA's program-specific data systems at agreed-upon intervals, such as quarterly. These requirements may be part of a grant's terms and conditions. For disconnected youth pilots, non-federal partners are to submit to Education, in its role as the lead agency, quarterly reports on progress made toward the goals and measures established in the pilots' performance partnership agreements. This performance reporting covers the entire pilot and the programs included in it, regardless of whether the funds involved are combined or coordinated, according to Education officials. Education will then share the reports with the relevant consulting and participating agencies for their review. If the quarterly report shows that the pilot is facing challenges in making progress toward its goals, federal officials we met with told us that the agencies can work with the non-federal partners to address the challenges, which could include amending waivers, providing technical assistance, or requiring the pilot to develop a corrective action plan. Federal and non-federal partners in the disconnected youth pilots are also taking steps to conduct evaluations of pilot outcomes. DOL has responsibility for leading a national evaluation of the pilots and contracted with Mathematica Policy Research (Mathematica) to conduct it. Officials from DOL involved in the national evaluation told us that they have sought input from, and involved individuals from, the other federal agencies in the ongoing design of the national evaluation, and that those agencies will also have a chance to review the final evaluation plan. Officials from DOL and Mathematica told us that the national evaluation will focus on 4 major areas: technical assistance, implementation, outcomes, and impacts. According to officials from DOL and Mathematica, they have made, and will continue to make, adjustments to the design of the national evaluation to better align with the data collection efforts for the individual pilot evaluations and avoid duplicating efforts. In addition, the federal agencies established a competitive preference priority for first round pilot applicants whereby the agencies awarded extra points to applicants who proposed conducting a site-specific evaluation using a randomized controlled trial or quasi-experimental approach for at least one of the pilot's components.
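As a rough illustration of the planned-versus-actual comparisons that appear in PPG end-of-year reports and pilot quarterly reports, the Python sketch below checks a set of measures against their targets and flags any shortfall for follow-up. The first row echoes the DPR inspection example above; the other measure names and numbers are hypothetical, and the sketch is not based on any agency reporting system.

```python
# Hypothetical performance measures: (measure, target, actual).
# The first row mirrors the DPR inspection example discussed above; the rest are invented.
measures = [
    ("Oversight inspections completed", 182, 253),
    ("Participants enrolled in services", 100, 85),
    ("Participants completing high school", 40, 38),
]

def review(measures):
    """Compare actual results to targets and flag measures that fall short."""
    for name, target, actual in measures:
        met = actual >= target
        status = "met" if met else f"short by {target - actual}"
        print(f"{name}: target {target}, actual {actual} -> {status}")
        if not met:
            # In practice, partners might respond with technical assistance,
            # amended waivers, or a corrective action plan, as described above.
            print("  follow-up needed")

if __name__ == "__main__":
    review(measures)
```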
Each of the pilots for the first round is conducting a site-specific program evaluation, according to DOL officials. The Ysleta del Sur Pueblo pilot, for instance, is using an impact evaluation to test the effectiveness of its Tigua Leadership Curriculum on 1 of the 2 cohorts of youth it plans to serve in the pilot. The evaluation includes treatment and comparison groups, though the pilot will provide the Tigua Leadership Curriculum to both groups. It will provide the services to the comparison group after the final data collection. Ysleta del Sur Pueblo will collect data on both groups and will evaluate those data to determine whether the new services had a positive effect on improving youth attitudes toward staying in school, completing high school, and understanding the connection between education and career development opportunities. This evaluation will be conducted in addition to the pilot's collection and reporting of performance against specific performance measures for participants' educational and employment outcomes. Federal and non-federal officials with whom we spoke identified key benefits of the national-level and site-specific evaluations. Education officials said that both the site-specific and the national-level evaluations will help federal agencies to determine if there is a need for broader legislative authority to allow more grant recipients in different locations to propose waivers similar to those received by the first round pilots. Officials from the Chicago pilot lead told us that the evaluations will help both the federal and non-federal partners learn more about what works within their program, which could help them improve it in the future. However, officials from the Oklahoma pilot lead told us that while evaluations are important for tracking whether pilots are meeting their intended results, they were unsure whether the site-specific and national-level evaluations would be able to determine what, if any, impact the ability to consolidate funds and use waivers would have on improved outcomes for disconnected youth. They said that the pilot's 3-year timeframe may be too short to allow federal or non-federal partners to clearly determine whether the pilot has improved outcomes for disconnected youth or created meaningful systems change. For example, they said that the partners will likely be unable to determine whether the pilot has improved youth involvement in post-secondary education or employment outcomes before the pilot's timeframe and the funding to support the evaluations expire. The federal agencies involved in the disconnected youth pilots have taken a number of actions consistent with leading practices for interagency collaboration identified in our prior reports. In September 2012, we reported that federal agencies have used a variety of mechanisms to implement interagency collaborative efforts, which can be used to address a range of purposes including policy development; program implementation; oversight and monitoring; information sharing and communication; and building organizational capacity, such as staffing and training. We noted that although collaborative mechanisms differ in complexity and scope, they all benefit from certain leading practices, which raise issues to consider when implementing these mechanisms. Table 1 provides examples of how actions taken by federal agencies in designing and implementing the disconnected youth pilots generally were consistent with the 7 leading practices identified in our September 2012 report.
Although federal agencies generally have taken actions consistent with leading practices for interagency collaboration, we identified additional actions they could take in relation to several leading practices to better support the success of the individual disconnected youth pilots as well as the overall initiative. During the course of our audit, the agencies already took steps in response to our findings to address issues we identified in the areas of written guidance and agreements and participants. However, their planning for and management of financial and staff resources are not yet in line with leading practices. Written Guidance and Agreements. We previously found that written agreements for interagency groups are most effective when updated and monitored regularly to reflect the roles and responsibilities of current participants. Such agreements can help strengthen agency commitment to working collaboratively and provide a clear delineation of activities to be undertaken by individual agencies. Additionally, updated written agreements can serve as a source of current information in the case of staff transitions. In assessing the federal agencies' efforts related to written guidance and agreements, we found that the 2015 interagency agreement covered roles and responsibilities for the federal agencies for the first 2 pilot rounds, but as of July 2016, it had not been revised to reflect the third pilot round or the authorization for HUD to participate in the initiative. When we discussed these findings with Education officials in July 2016, they offered several reasons why this had not happened. First, they did not expect established roles and responsibilities to change in future pilot rounds. Second, they stated that federal agencies had been focused on implementing the first round pilots and preparing for the second and third round pilots. Finally, they told us that the process for updating the agreement was cumbersome, as it required the approval of the heads of all agencies involved in the initiative. Subsequently, in December 2016, Education officials told us that the agencies, in response to the issues we identified, were taking steps to modify the interagency agreement to cover the third pilot round and include HUD's roles and responsibilities. Moreover, they agreed to streamline the process for making future changes to the agreement. In January 2017, Education shared a draft update to the interagency agreement reflecting these changes, which are pending final approval by the relevant agencies. Participants. We previously found that it is important to ensure that all relevant participants have been included in a collaborative effort. For all agencies authorized to participate in a particular initiative, such as the disconnected youth pilots, their collective involvement helps ensure that someone can commit resources and make decisions on behalf of their agency, and contribute to the outcomes of the collaborative effort through their individual knowledge, skills, and abilities. We identified a few instances in which officials from federal agencies with program funds being used in individual pilots were not notified of their funds' planned use. For example, the non-federal partners in both the Oklahoma and Seattle pilots plan to use AmeriCorps funding.
However, in a June 2016 meeting with CNCS officials about their involvement in the pilots, the agency’s key point of contact for the disconnected youth pilots told us that she was not aware of those pilots’ planned use of the funding. When we raised this issue in a subsequent meeting with Education officials in July 2016, they told us that in these instances they did not notify CNCS because their process only involved notifying an agency in cases in which the non-federal partners proposed combining an agency’s funds or requested a waiver of requirements related to that agency’s programs in a pilot. The non-federal partners in the Seattle and Oklahoma pilots did not propose combining CNCS funds or request waivers of AmeriCorps program requirements. Therefore, according to Education officials, the established process did not require them to notify CNCS about the use of AmeriCorps funds in those pilots because CNCS did not need to approve anything. Because the funds are being coordinated, the recipients are still required to adhere to the requirements of the program for which the funds were appropriated, including reporting on the funds in accordance with the program’s underlying requirements. As such, CNCS officials told us that the use of AmeriCorps program funds in those 2 pilots would be covered by the program’s usual grant oversight processes. In response to our observations and to address any potential future issues about involving relevant agency officials in individual pilots, Education officials told us in December 2016 that they have revised their processes. Moving forward, they will notify relevant agency officials of instances in which their programs and funds are proposed for inclusion in a pilot, regardless of whether funds are combined or coordinated or waivers are sought. Resources. We previously found that collaborating agencies should identify the various resources, including financial and staff resources, needed to initiate and sustain their collaborative effort. Collaboration can take time and resources in order to accomplish activities such as building trust among the participants, setting up the ground rules for the process, attending meetings, conducting project work, and monitoring and evaluating the results of work performed. Moreover, relying on agencies to participate can present challenges for collaborative mechanisms. Our past work has also found that, in cases where staff participation was insufficient, collaboration often failed to meet key objectives and achieve intended outcomes. Consequently, it is important for groups to ensure that they identify and leverage sufficient funding and staffing to accomplish the objectives. Agencies do not have a full understanding of their future resource needs—in terms of individual agency funds and staffing contributions—to maintain the pilot initiative through September 2020, when the third round is currently scheduled to end. Federal and non-federal partners in the disconnected youth pilots told us that these resources are important for its success. Agency officials told us that, because much of their early focus was on designing and implementing the initiative in short time frames, decisions related to resource contributions generally were made as needed to ensure near-term progress. Now that agencies have designated finalists for the second and third rounds, they can better identify and plan for the resources they will need to contribute to support the overall initiative through its lifetime (September 2020). 
However, the agencies—including OMB, which has responsibility for coordinating agencies' overall efforts to implement the disconnected youth pilots—have not yet fully identified how those resources will be provided. Key aspects of the pilot initiative rely on funding and staff contributed by individual agencies. As such, it is important for each agency to understand what resources it is expected to provide so that it can plan accordingly. By fully identifying and planning for the specific financial and staff resource contributions described below, the agencies will have greater assurance that those contributions will sustain success in their collaborative efforts and the overall pilot initiative. Financial Resources. As was previously mentioned, the funding for federal grants included in individual pilots is provided through appropriations. According to agency officials involved in the pilots, they identified additional activities that they considered key to the success of the initiative. As highlighted in Table 1, these include the start-up grants, general technical assistance, and the national and site-specific evaluations. Education and DOL leveraged various mechanisms, including contracts and interagency transfers, and other agencies contributed funds, to the extent possible, to support these activities. According to agency officials, their agencies consider potential contributions on a year-to-year basis, depending on the available resources in eligible programs. They told us that their agencies have needed to make trade-off decisions among competing priorities to contribute funds to these activities each year. As a result, funding for some of these activities has decreased over time. For example, although pilots could receive up to $700,000 in start-up grants for the first round, they may receive up to $350,000 for the second round and up to $250,000 for the third round due to decreased funding contributions for such purposes from the federal agencies. As was noted earlier, the offer of start-up grants was a key incentive for non-federal partners from the 4 selected pilots as they considered participating in the initiative. Agencies already have made their contributions for the start-up grants for the 3 rounds currently authorized, so this may not be an issue moving forward unless further rounds are authorized. However, the contract for general technical assistance has the potential, through annual extensions, which require additional funding, to run through 2020, when the third round pilots are due to end. Moreover, although the contract for the national evaluation and related evaluation technical assistance runs through 2020, work related to completing the evaluation—which is necessitated by the legislation authorizing the pilots—will need to continue after this date if the national evaluation is expanded to include the pilots from the second and third rounds. To do so, DOL officials told us that they may be able to extend the period of performance of the existing contract to continue the work or otherwise award a new contract following federal acquisition procedures. Regardless of what approach is taken, continued performance would require additional funding. Staff Resources. Incomplete information about staff investments in the disconnected youth pilots to date and uncertainty about the composition of future pilot rounds have limited agencies' abilities to identify long-term staffing needs.
Most agencies are not tracking their current staff support to the first round pilots, which could help them understand potential staff investments for future rounds. When we asked agency officials for estimates of staff time spent supporting the pilot initiative, they could not provide overall estimates for their agencies. Similarly, when we asked the key points of contact at each agency—those generally most involved in the disconnected youth pilots—about their own time spent on the initiative, they could only provide rough estimates. Officials explained that several factors made it difficult to track staff investment. The number of staff involved and their related time demands varied depending on the stage of the process and the activities that were being undertaken. Moreover, at different points in the past, the agencies were designing and implementing multiple rounds concurrently. In addition, in interviews with officials from each of the agencies involved in the pilots during summer 2016, officials told us that their agencies face difficulties anticipating future staffing needs since the exact number and structure of the second and third round pilots were unknown at the time. Officials said their agencies could not sufficiently plan for staff needs beyond what had been established in the first round of pilots without knowing the number of pilots that would be selected for each round, and which federal agencies and programs would be involved. However, the selection processes for both the second and third rounds are now complete. As was mentioned previously, the agencies designated 1 applicant as a second round pilot finalist in September 2016, and 6 applicants as third round pilot finalists in January 2017. Therefore, agencies should have a better sense of their involvement in all 3 rounds. Moreover, because all 3 rounds will soon be in the implementation phase, it should be easier for agencies to track current staff investments to better project needed staffing contributions moving forward. Federal agency efforts for the disconnected youth pilots were also generally consistent with practices for effective pilot design. A well- developed and documented pilot program can help ensure that agency assessments produce information needed to make effective program and policy decisions. Such a process enhances the quality, credibility, and usefulness of evaluations in addition to helping to ensure that time and resources are used effectively. In April 2016, we identified 5 leading practices that, taken together, form a framework for effective pilot design. By following these leading practices, agencies can promote a consistent and effective pilot design process. Examples of actions federal agencies took in line with these practices are illustrated in table 2. Although federal agencies generally took actions consistent with leading practices for effective pilot design, we identified an additional action they could take in relation to assessing scalability. We previously found that, as part of their design, agencies should have criteria or standards for identifying lessons about the pilot to inform decisions about scalability and whether, how, and when to integrate pilot activities into overall efforts. To assess scalability, criteria should relate to the similarity or comparability of the pilot to the range of circumstances and population expected in full implementation. 
The criteria or standards can be based on lessons from past experiences or other related efforts known to influence implementation and performance as well as on literature reviews and stakeholder input, among other sources. The criteria and standards should be observable and measurable events, actions, or characteristics that provide evidence that the pilot objectives have been met. Choosing well-regarded criteria against which to make comparisons can lead to strong, defensible conclusions. Although the federal agencies have identified a variety of data to collect through performance reporting and the national and pilot-level program evaluations, they did not identify criteria or standards for assessing scalability of the flexibilities being tested by the pilots as part of the pilot or evaluation design processes. The agencies conducted outreach to stakeholders to learn about the needs of organizations assisting disconnected youth, established key goals and objectives for the organizations implementing the pilots, and included them in partnership agreements, which also detail data collection methodology, reporting requirements, and interim measures. They also developed evaluation plan reporting templates for the pilots. However, according to DOL officials, federal agencies decided not to require any new data collection by the pilots for several reasons, including the potential costs to pilots. Therefore, the pilots are only reporting performance data that they would routinely collect from their program participants, with the possibility of reporting any new data they may be collecting for their local evaluations. DOL officials also told us that they and their contractor for the national evaluation are not yet thinking about the potential scalability of the various flexibilities being tested by the pilots because it is too early in the implementation period—in the first year of the 5-year pilots. They said that the national evaluation is intended to review the flexibilities that each pilot uses and determine what recommendations can be made to decision makers, such as Congress, for broader application. Officials then plan to combine the information on flexibilities with final outcome data and evidence from the site-specific impact evaluations to assess and synthesize, as appropriate, the individual pilots. DOL officials also said that they plan to examine the partnership structures for the pilots to determine the effectiveness of the structures. While the plans DOL officials have laid out could, and likely will, provide useful information and insights into the success of the individual pilots and overall initiative, the agencies may not collect the data needed to inform conclusions about scaling the flexibilities tested by the pilots without first determining the criteria or standards for such an assessment. Going forward, such data would better position Congress to decide whether and to what extent the flexibilities tested by the pilots should be integrated into broader efforts. Performance partnerships are part of a broader federal effort to align federal grantmaking priorities with state and local government needs in addressing key national objectives.
One of the 2 existing partnership initiatives—the multi-agency disconnected youth pilots—allows partners to collaborate across organizational lines primarily to leverage programmatic flexibilities that enable them to combine funding from across several federal grant programs (and agencies) for interventions aimed at improving outcomes among disconnected youth. In establishing the disconnected youth pilots, federal agencies generally took actions consistent with leading practices for interagency collaboration and pilot design. For example, agencies established and documented key roles and responsibilities, including those related to leadership, in an interagency agreement. In addition, they established goals and objectives for the individual pilots and overall initiative, and have plans to monitor performance and evaluate results. However, agencies—including OMB, which has responsibility for coordinating agencies’ overall efforts to implement the disconnected youth pilots—have not fully identified the key financial and staff resources each agency will need to contribute over the lifetime of the initiative. Doing so would help ensure they are able to provide the support needed for successful implementation of the pilots and to sustain their collaborative efforts. In addition, the agencies have not identified criteria or standards to assess the scalability of the flexibilities being tested by the pilots. Without them, agencies may not collect the data they—and ultimately Congress—will need to determine whether and how to implement successful approaches more broadly. To help ensure that the pilot programs for disconnected youth can be effectively implemented over the lifetime of the initiative, the Director of OMB should coordinate with relevant federal agencies to identify and estimate expected annual financial and staff resource contributions from each agency, including during the implementation and evaluation phases of the pilots. To ensure that federal agencies involved in the disconnected youth pilots are able to evaluate pilot outcomes and ultimately communicate to Congress whether and to what extent the flexibilities tested by the pilots should be integrated into broader efforts, the Director of OMB should coordinate with relevant federal agencies to identify criteria or standards for assessing scalability, and collect data needed to address those criteria or standards. We provided a draft of this report to the Director of the Office of Management and Budget, Secretary of the Department of Education, Secretary of the Department of Health and Human Services, Acting Secretary of the Department of Housing and Urban Development, Attorney General of the Department of Justice, Acting Secretary of the Department of Labor, Chief Operating Officer of the Corporation for National and Community Service, Administrator of the Environmental Protection Agency, and Acting Director of the Institute of Museum and Library Services for review and comment. In comments provided by email, OMB’s Liaison to GAO stated that OMB neither agreed nor disagreed with the recommendations in this report. 
OMB staff also provided oral comments in which they asked us to clarify that (1) OMB's role is to coordinate agencies' overall efforts to implement the disconnected youth pilots, (2) the resource issues identified in our report involve agencies better identifying and planning for their individual contributions to the pilot initiative, and (3) our discussion of scalability is focused on the flexibilities being tested by the pilots. We revised the report accordingly to provide these clarifications. DOL, Education, EPA, HHS, and IMLS provided technical comments, which we incorporated as appropriate. CNCS, DOJ, and HUD informed us that they had no comments. We are sending copies of this report to the appropriate congressional committees, the heads of each of the federal agencies included in this review, and other interested parties. In addition, the report is available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or by email at bawdena@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The GPRA Modernization Act of 2010 (GPRAMA) put into place a framework intended to increase the use of performance information and other evidence in federal decision making. GPRAMA also requires us to periodically report on how its implementation is affecting performance management at federal agencies, including whether agencies are using performance management to improve the efficiency and effectiveness of their programs. According to the Office of Management and Budget (OMB), because performance partnerships require federal agencies and their grant recipients to manage toward agreed-upon outcomes, they can help the 2 sides to collect information and evidence about what works and therefore how to employ federal resources more efficiently. We therefore conducted this review under our GPRAMA reporting requirement. There are currently 2 sets of federal performance partnerships that have been authorized. The Environmental Protection Agency's (EPA) Performance Partnership Grants (PPGs) under its National Environmental Performance Partnership System have been in place for 20 years. The multi-agency Performance Partnership Pilots for Disconnected Youth (disconnected youth pilots) were authorized in January 2014. Over the past 3 years, 8 federal agencies—OMB; the Departments of Education (Education), Health and Human Services (HHS), Housing and Urban Development (HUD), Justice (DOJ), and Labor (DOL); the Corporation for National and Community Service (CNCS); and the Institute of Museum and Library Services (IMLS)—have worked to implement the pilots. This report identifies the key characteristics of those 2 existing performance partnership initiatives. It also provides a more in-depth review of the design, implementation, and evaluation of 1 of the 2—the Performance Partnership Pilots for Disconnected Youth. To address these objectives, we identified current federal performance partnerships by reviewing relevant literature, including our past work on grants management and material produced by public sector and nonprofit organizations; searching public laws for references to "performance partnerships"; and interviewing former federal officials knowledgeable about the federal government's efforts to develop and implement performance partnerships.
Based on this work, we determined that EPA’s PPGs and the disconnected youth pilots were the only 2 federal performance partnerships currently authorized. To identify key characteristics of performance partnership initiatives and how these key characteristics are exhibited, we collected, reviewed, and analyzed documents about the overall performance partnership initiatives, such as authorizing legislation, regulations, and notices inviting applications, as well as from selected individual performance partnerships within them, including applications, performance partnership agreements, and grant work plans. To further illustrate how these key characteristics are exhibited by the 2 performance partnerships, we selected a non-generalizable sample of 4 states with EPA PPGs and 4 disconnected youth pilots for in-depth review. To select a sample of states with PPGs, we used data that EPA provided to us on PPG use in fiscal year 2014. We selected states based on the number of grants they included in their PPG, the number of years they had a PPG, and the EPA region in which they are located. We chose 1 state with a high total number of years in a PPG and a high total number of grants in a PPG; a second with a high total number of years in a PPG and a low total number of grants in a PPG; a third with a low total number of years in a PPG and a high total number of grants in a PPG; and a fourth with a low total number of years in a PPG with a low total number of grants in a PPG. We defined high and low as being within the top or bottom quartile of states in each category. We ensured that each of our selections was from a different EPA region since variations in management practices and general environmental needs across EPA regions may impact how states use PPGs. The states we selected based on these criteria were Utah, New York, Alabama, and California. To select a sample of disconnected youth pilots, we considered pilots with various locations (urban, rural, tribal), consulting agencies, and federal programs used. First, we grouped pilots by their urban, rural, and tribal designations to ensure that we could examine pilots in different environments. We selected the only 2 pilots designated as rural and tribal—Eastern Kentucky and Ysleta del Sur Pueblo. Next, we grouped pilots by the consulting agencies responsible for overseeing them to ensure that we could examine whether pilot oversight differs among federal agencies. Four agencies serve as consulting agencies— Education, DOL, HHS, and CNCS. Education and CNCS serve as the consulting agencies for Eastern Kentucky and Ysleta del Sur Pueblo, respectively, so we then selected Chicago from the group of urban pilots left for selection since it is the only pilot for which HHS serves as the consulting agency. Finally, we looked to ensure that each federal agency’s programs were being used in at least 1 of our selected pilots. Since all of the agencies had at least 1 program being used in our first 3 selections, we selected Oklahoma as our final pilot, as DOL is its consulting federal agency and it is using programs from 4 of the 5 agencies. To obtain perspectives on the key characteristics of these performance partnership initiatives, including reporting benefits and challenges they may present, we conducted semi-structured interviews with officials involved in them. For PPGs, we met with relevant officials at EPA headquarters and regional offices, as well as officials from state environmental agencies in the 4 selected states. 
These included officials from EPA's Office of Congressional and Intergovernmental Affairs; EPA Regions 2, 4, 8, and 9; and the Alabama Department of Environmental Management, the California Department of Pesticide Regulation, the New York State Department of Environmental Conservation, and the Utah Department of Environmental Quality. For the disconnected youth pilots, we conducted semi-structured interviews with officials from each of the federal agencies currently involved in them—OMB, Education, HHS, HUD, DOJ, DOL, CNCS, and IMLS. We also conducted semi-structured interviews with representatives from a federal and non-federal partner for each of the individual partnerships. For the disconnected youth pilots, these included Education and Partners for Education at Berea College (Eastern Kentucky); CNCS and the Ysleta del Sur Pueblo tribal government (Ysleta del Sur Pueblo); HHS and the Chicago Department of Family Support Services (Chicago); and DOL and the Oklahoma Department of Human Services (Oklahoma). To assess federal agencies' efforts to design, implement, and evaluate the disconnected youth pilots, we obtained and reviewed key documents, including requests for information and public comment, preliminary design papers, interagency agreements, and evaluation plans. In addition, we interviewed officials from OMB, Education, HHS, HUD, DOJ, DOL, CNCS, and IMLS about their collaboration and design of the pilot initiative. We then assessed agencies' efforts in these areas to determine the extent to which they reflect leading practices for interagency collaboration and effective pilot design. These leading practices were developed in our prior work. We conducted this performance audit from July 2015 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Environmental Protection Agency's (EPA) performance partnership grants (PPGs) permit state agencies to request that funding they receive from 2 or more EPA program grants be combined into a single award. This is intended to enable state agencies to, among other things, consider trade-offs across the breadth of their environmental program funding and exercise flexibility to direct resources to their most pressing priorities. Specifically, once a state agency has requested and received selected EPA program grants in a PPG, it can choose to use the funds to support any activity that is eligible under at least 1 of the grants included in the PPG. PPGs also streamline administrative requirements so that state agencies can realize cost savings through reduced administrative burden in areas such as grant applications, cost sharing, and financial reporting. The number of grants that states request to include in PPGs varies. For example, in fiscal year 2016, the 39 states that had PPGs managed by their environmental or health agency included between 2 and 16 grants in their PPGs, according to EPA data. Approximately half of these agencies included 6 or fewer grants in their PPGs, with the other half including 6 or more.
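The aggregate cost-share flexibility described earlier in this report can be illustrated with a minimal Python sketch. The grant names, award amounts, match rates, and state contributions below are hypothetical, and the minimum match is treated as a simple percentage of each federal award for illustration; actual match and maintenance of effort calculations vary by program.

```python
# Hypothetical grants in a PPG: (grant, federal_award, minimum_match_rate, state_contribution).
grants = [
    ("Air program grant", 500_000, 0.40, 260_000),   # exceeds its own minimum (200,000)
    ("Water program grant", 300_000, 0.25, 50_000),  # falls short of its own minimum (75,000)
]

def ppg_cost_share(grants):
    """Return the aggregate minimum match and the aggregate state contribution."""
    required = sum(award * rate for _, award, rate, _ in grants)
    contributed = sum(contribution for _, _, _, contribution in grants)
    return required, contributed

if __name__ == "__main__":
    required, contributed = ppg_cost_share(grants)
    print(f"Aggregate minimum match required: ${required:,.0f}")
    print(f"Aggregate state contribution:     ${contributed:,.0f}")
    # Under a PPG the comparison is made in the aggregate, so excess match in
    # one program can cover a shortfall in another.
    print("Cost share met in aggregate" if contributed >= required else "Cost share not met")
```

In this illustration, the water grant's contribution would fall short of that grant's own minimum, but the PPG as a whole meets the combined requirement because of the excess match in the air program.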
To illustrate variation in how state agencies chose to receive combined EPA program grant funding in PPGs, we selected 4 states to review in greater depth, choosing states that combined either a small or a large number of grants in their PPGs. On the small end, we selected New York, whose Department of Environmental Conservation received funding from 3 grants (illustrated in figure 9), and California, whose Department of Pesticide Regulation received funding from 4 grants in 1 environmental area (water and pesticides, respectively, illustrated in figure 10) in their PPGs in fiscal year 2016. On the large end, we selected Alabama, whose Department of Environmental Management received funding from 9 grants (illustrated in figure 11 below), and Utah, whose Department of Environmental Quality received funding from 10 grants across multiple environmental program areas in their PPGs in fiscal year 2016 (previously illustrated in figure 6). The multi-agency disconnected youth pilots enable non-federal partners to combine funds from federal agencies' programs and obtain additional flexibilities to tailor service interventions to the specific needs of their target populations. Three rounds of pilots have been authorized, and the first round of 9 approved pilots is underway. Specifically, the authorization for the first round pilots allows for the combining of discretionary funding received by the Departments of Education (Education), Health and Human Services (HHS), and Labor (DOL), as well as the Corporation for National and Community Service (CNCS) and the Institute of Museum and Library Services (IMLS) through the fiscal year 2014 appropriations act. In the notice inviting applications for first round pilots, applicants were advised that where a program's funds are not suitable for combining (referred to as "blending") in a pilot given the constraints under the pilot authorization, the applicant may nevertheless consider how to coordinate (referred to as "braiding") such funding in a pilot to promote more effective and efficient outcomes even though the funds would maintain a separate identity and remain subject to the program requirements for which the funds were appropriated. In addition, for the first pilot round federal agencies awarded separate start-up grants to provide funding of up to $700,000 to each pilot to finance evaluations, capacity building, technical assistance, and other related activities to support the pilot. According to officials from OMB, funds available for these purposes from CNCS, DOL, and Education were used for these start-up grants. The first round pilots vary in the extent to which they are using combined federal funds in their pilots. Our selection of case study pilots also reflects this variation, as noted below. Two of the 9 pilots are combining all federal funds from all of the federal programs they are including in their pilots. For example, the Ysleta del Sur Pueblo pilot is consolidating funding from CNCS's AmeriCorps program and IMLS's Native American Library Services Enhancement Grants program, as illustrated in figure 12.
According to Ysleta del Sur Pueblo's pilot application, while the tribe has had success in using the CNCS and IMLS programs individually to meet short-term outcomes for tribal youth, such as cultural and traditional engagement and enhanced knowledge of the services and programs the tribe offers, it has been unable to use the programs to provide a continuum of services to help tribal youth meet longer-term goals, such as staying in school, enrolling in post-secondary education, and obtaining gainful employment. The application states that the tribe's ability to consolidate the 2 programs' funding will allow its Empowerment and Economic Development Departments to better collaborate and more effectively provide youth services in a single, wraparound program. In addition, officials from Ysleta del Sur Pueblo told us that the start-up grant the tribe received will further enhance its ability to implement its pilot. The tribe's application stated that, among other things, the funds will help the tribe to hire staff and support the evaluation. Five pilots are not using any combined funding but are instead coordinating the use of federal funds from multiple federal programs. These include the Oklahoma and Chicago pilots we selected to review in depth. For example, the Oklahoma pilot is coordinating funds from 3 federal programs: DOL's WIOA Title I Youth program, HHS's Now Is the Time: Healthy Transitions program, and CNCS's AmeriCorps program. Figure 13 illustrates the structure of the Oklahoma pilot. Officials from Oklahoma DHS told us that the non-federal partners involved in the pilot expressed a preference to manage and report on the use of their federal funds independently. They told us that the organizations determined that they would be able to operate the pilot as intended by coordinating their use of funds. They had no prior experience combining funds across organizations, and Oklahoma DHS, as the pilot lead, did not want to track other organizations' funds. Oklahoma DHS officials also told us that start-up grants have been beneficial in allowing the non-federal partners to hire a staff member to oversee pilot operations, support collaboration among partners, and prepare for and conduct an evaluation of the pilot. The structure of the Chicago pilot, illustrated in figure 8, along with a discussion of the challenges federal and non-federal officials faced in coming to agreement on the use of funds for that pilot, was described earlier in this report. The remaining 2 pilots are combining federal funds from some of the federal programs included in their pilots and coordinating the use of funds from other federal programs. One of them is the Eastern Kentucky pilot, and details about its use of federal funds were discussed earlier in the report and illustrated in figure 7. Officials from a pilot partner told us that combining some funds allows them to hire staff who are not tied to a specific program and therefore can work across the programs involved in the pilot. The officials added that the start-up grant the pilot received will allow the non-federal partners to establish a data collection system to share information about participants across partners, which would allow them to use those data to identify effective strategies and support the evaluation of the pilot. In addition to the contact named above, Benjamin T. Licht (Assistant Director) and Daniel Webb (Analyst-in-Charge) supervised this review and the development of the resulting report.
Mitchell Cole, Karin Fangman, Mike Grogan, John Hussey, Sherrice Kerns, Ifunanya Nwokedi, Michelle Sager, Cindy Saunders, and Stephanie Shipman made significant contributions to this report. Donna Miller developed the graphics for this report. Theodore Alexander, Crystal Bernard, Kathleen Drennan, Shirley Hwang, Adam Miles, Keith O’Brien, Laurel Plume, Erik Shive, and Wes Sholtes verified the information in this report.
The GPRA Modernization Act of 2010 established a framework intended to increase federal agencies' use of performance information and evidence in decision making. In performance partnerships, agencies and grant recipients manage toward outcomes, which can help measure program performance and collect evidence about what works to achieve desired outcomes. OMB has encouraged the use of such partnerships by agencies that make federal grants. GAO is required by the act to report on how its implementation is affecting federal agency performance management. This report identifies the key characteristics of 2 existing performance partnerships. It also provides an in-depth review of the design, implementation, and evaluation of 1 of the 2 initiatives—the disconnected youth pilots. To address these objectives, GAO reviewed relevant laws, regulations, and documents and selected 8 illustrative examples from the 2 partnership initiatives (4 each), based on various criteria, such as the type and number of grants included and location. GAO also interviewed federal and non-federal officials involved in these partnerships. Congress has authorized 2 federal performance partnership initiatives. The first, the Environmental Protection Agency's (EPA) Performance Partnership Grant (PPG) program, has been in place for 20 years and allows state agencies to consolidate funds from up to 19 environmental program grants into a single PPG. The other, Performance Partnership Pilots for Disconnected Youth (disconnected youth pilots), is a more recent initiative authorized in 2014 that allows funding from multiple programs across multiple agencies to be combined into pilot programs serving disconnected youth. GAO identified 4 key characteristics shared by the 2 federal performance partnership initiatives. Specifically: 1. Documented agreement. Federal and non-federal partners identify goals, roles, and responsibilities. EPA and state agencies accomplish this through a PPG work plan. For each disconnected youth pilot, multiple federal agencies and non-federal partners, such as local government agencies and community-based organizations, use a performance partnership agreement. 2. Flexibility in using funding. PPGs combine funding from 2 or more EPA program grants. The disconnected youth pilots can combine funding from multiple programs across the agencies involved in the initiative. 3. Additional flexibilities. PPGs reduce administrative burden for state agencies, for example, by requiring only a single application for all grants in them. Disconnected youth pilots also provide non-federal partners flexibility to serve disconnected youth, including the ability to better tailor service interventions to their target populations. 4. Accountability for results. In both initiatives, non-federal partners report to federal partners on progress towards mutually established goals. Partners in the disconnected youth pilots are also assessing results through national and pilot-specific program evaluations. GAO's in-depth review of the disconnected youth pilots found that agencies had taken actions consistent with leading practices for collaboration and pilot design, such as establishing a leadership model for collaboration. Although the Office of Management and Budget (OMB) is responsible for coordinating agencies' overall efforts to implement the pilots, GAO identified additional actions that OMB should take in coordination with the agencies to help ensure future success. Resources.
Agencies have not fully identified the funding and staff resources each will need to contribute to sustain their efforts over the lifetime of the pilots. This is because agencies primarily have been focused on meeting near-term needs to support design and implementation. By fully identifying specific future financial and staff resource needs, agencies can better plan for their individual contributions to ensure they are sufficient to support the pilots. Scalability. Agencies have not developed criteria to inform determinations about whether, how, and when to implement the flexibilities tested by the pilots in a broader context (this is known as scalability). Although the agencies identified a variety of data to collect, they have not identified criteria for assessing scalability. Officials involved in the pilots told GAO it was too early in pilot implementation to determine such criteria. However, by not identifying these criteria during the design of the pilots, the agencies risk not collecting needed data during pilot implementation. GAO recommends that OMB coordinate with federal agencies implementing the disconnected youth pilots to identify (1) agency resource contributions needed for the lifetime of the pilots and (2) criteria and related data for assessing scalability. OMB neither agreed nor disagreed with these recommendations.
Generally, a public company's board of directors is responsible for managing the business and affairs of the corporation, including representing a company's shareholders and protecting their interests. Corporate boards range in size; according to a 2013 survey of public companies, the average board size was about nine directors, with larger companies often having more. Corporate boards are responsible for overseeing management performance on behalf of shareholders and selecting and overseeing the company's CEO, among other duties, and directors are compensated for their work. The board of directors generally establishes committees to enhance the effectiveness of its oversight and focus on matters of particular concern. See figure 1 for common corporate board committees and their key duties. Research and other literature provide a number of reasons why it is important for corporate boards to be diverse. For instance, research has shown that the broader range of perspectives represented in diverse groups requires individuals to work harder to come to a consensus, which can lead to better decisions. Some research has found that gender diverse boards may have a positive impact on a company's financial performance, but other research has not. These mixed results depend, in part, on differences in how financial performance was defined and what methodologies were used. Various reports on board diversity also highlight that diverse boards make good business sense because they can better reflect the employee and customer base, and they can tap into the skills of a wider talent pool. Publicly traded companies are required by the SEC to disclose to their shareholders certain corporate governance information for shareholder meetings if action is to be taken with respect to the election of directors. Companies disclose this information in proxy statements that are filed with the SEC. The SEC's mission includes protecting investors, and disclosure is meant to provide investors with important information about companies' financial condition and business practices for making informed investment and voting decisions. Investors owning shares in a company generally have the ability to participate in corporate governance by voting on who should be a member of the board of directors. In December 2009, the SEC published a rule that requires companies to disclose certain information on board diversity in proxy statements filed with the Commission if action is to be taken with respect to the election of directors, including whether, and if so how, boards consider diversity in the director nominating process. Also, if boards have a policy for considering diversity when identifying director nominees, they must disclose how this policy is implemented and how the board assesses the effectiveness of its policy. According to various publications on corporate governance or gender diversity, several countries are implementing measures to address gender diversity in the boardroom, such as the following: Quotas. Some countries, such as Germany and Norway, have government quotas to increase the percentage of women on boards. For example, Germany requires that 30 percent of board seats at certain public companies be allocated for women, and Norway requires that 40 percent be allocated for women. Disclosure policies. Other countries, such as Australia and Canada, have adopted "comply or explain" disclosure arrangements.
Under such arrangements, if companies choose not to implement or comply with certain recommendations or government-suggested approaches related to board diversity—such as establishing a diversity policy—they must disclose why. Voluntary approaches. The United Kingdom has aimed to increase the representation of female directors through a voluntary, target-based approach rather than through the use of government-mandated interventions. As part of this effort, the government worked with leading companies, investors, and search firms to encourage the adoption of a set of recommendations to increase representation of women on boards. These recommendations included, for example, that certain companies achieve a minimum of 25 percent women on boards by 2015 and publicly disclose the proportion of women on the company's board, management, and workforce. In addition, executive search firms were encouraged to draw up a voluntary code to address gender diversity and best practices covering relevant search criteria for board directors. Selected search firms in the United Kingdom have entered into a voluntary Code of Conduct to address gender diversity on boards in their search processes, including trying to ensure that at least 30 percent of proposed candidates are women. Based on our analysis, we found that women's representation on boards of companies in the S&P 1500 has increased steadily over the past 17 years, from about 8 percent in 1997 to about 16 percent in 2014. As figure 2 illustrates, part of what is driving this increase is the rise in women's representation among new board directors—directors who joined the board each year. While the number of female board directors among S&P 1500 companies has been increasing, particularly in recent years, we estimated that it will likely take a considerable amount of time to achieve greater gender balance. When we projected the representation of women on boards into the future assuming that women join boards in equal proportion to men—a proportion more than twice what it currently is—we estimated it could take about 10 years from 2014 for women to comprise 30 percent of board directors and more than 40 years for the representation of women on boards to match that of men (see fig. 3). Appendix I contains more information about this projection. Even if every future board vacancy were filled by a woman, we estimated that it would take until 2024 for women to approach parity with men in the boardroom. Using 2014 data, we also found that women's representation on boards differed by company size and industry (see fig. 4) and that there were differences in certain characteristics between male and female directors, such as age and tenure (see fig. 5). Based on our interviews with stakeholders, analysis of ISS board director data, and our review of relevant literature, we identified various factors that may hinder increases in women's representation on corporate boards: boards not prioritizing diversity in recruitment efforts; lower representation of women in the traditional pipeline for board positions; and low turnover of board seats. Several stakeholders we interviewed suggested that boards not prioritizing diversity in identifying and selecting directors is a factor affecting gender diversity on corporate boards. Specifically, 9 of the 19 stakeholders we interviewed cited board directors' tendencies to rely on their personal networks to identify new board candidates as a factor that contributes to women's lower representation.
For example, three of the nine stakeholders specifically noted that men tend to network with other men, and given that the majority of board directors are men, this may prevent women from obtaining vacant board seats. Furthermore, 8 of the 19 stakeholders suggested unconscious bias may be a factor affecting the selection of women onto boards. Several stakeholders we interviewed discussed board directors' desire to maintain a certain level of comfort in the boardroom. For example, one stakeholder observed that boards may have a tendency to seek other directors who look and sound like they do. Another noted that boards want to ensure new members "fit in," which may lead them to recruit people they know and can limit gender diversity on boards. We found some indication that boards' appointment of women slows down when they already have one or two women on the board. In 2014, 29 percent of companies in the S&P 500 that had no women on the board added a woman; 15 percent of companies that had one woman on the board added a woman; and 6 percent of companies that had two women on the board added a woman. Small and medium-sized companies generally followed the same pattern. Further, three stakeholders we interviewed specifically suggested that boards may add a "token" woman to appear as though they are focused on diversity without making diversity a priority. Eleven of the 19 stakeholders we interviewed highlighted the low representation of women in the traditional pipeline for board seats—with either CEO or board experience—as another factor affecting the representation of women on boards. According to recent reports, current and former CEOs composed nearly half of new appointments to boards of Fortune 500 companies in 2014, and 4 percent of CEOs in the S&P 1500 in 2014 were women. One CEO we interviewed said that as long as boards limit their searches to the pool of female executives in the traditional pipeline, they are going to have a hard time finding female candidates. Another factor that may help explain why progress for women has been slow and greater gender balance could take time is that boards have only a small number of vacant seats each year. Based on our analysis, we found that board turnover has remained relatively consistent since 1998, with 4 percent of seats in the S&P 1500 filled, on average, by new board directors each year. In 2014, we found that there were 614 new board directors out of 14,064 seats among all companies in the S&P 1500. Seven of the 19 stakeholders we interviewed similarly cited low turnover, in large part due to the long tenure of most board directors, as a barrier to increasing women's representation on boards. Based on relevant literature and discussions with researchers, organizations, and institutions knowledgeable about corporate governance and board diversity, we identified a number of potential strategies for increasing gender diversity on corporate boards (see table 1). While the stakeholders we interviewed generally agreed on the importance of diverse boards, many noted that there is no one-size-fits-all solution to addressing diversity on boards and highlighted advantages and disadvantages of various strategies for increasing gender diversity on corporate boards. Potential strategies for encouraging or incentivizing boards to prioritize and address gender diversity as part of their agenda could include: Requiring a diverse slate of candidates to include at least one woman.
Eleven stakeholders we interviewed supported boards requiring a gender diverse slate of candidates. Two specifically suggested that boards should aim for slates that are half women and half men. Two of the 11 advocated that boards include more than one woman on a slate of candidates, expressing concern that a board policy requiring that only one woman be included on a slate could lead to tokenism. This was also a concern for three of the five stakeholders who did not support this strategy. Setting voluntary targets. Ten stakeholders we interviewed supported boards setting voluntary diversity targets, with two stakeholders citing the importance of having targets or internal goals for monitoring progress. Four stakeholders opposed voluntary targets. For example, one stakeholder thought that boards should consider a diverse slate of candidates but expressed concern over how voluntary diversity targets would work in the context of considering board candidates' skills. Potential strategies for recruiting more female candidates onto boards could include: Expanding board searches. Of the 17 stakeholders who expressed an opinion, all supported expanding board searches beyond the traditional pool of CEO candidates to increase representation of women on boards. Several stakeholders suggested, for example, that boards recruit high performing women in other senior executive level positions, or look for qualified female candidates in academia or the nonprofit and government sectors. According to aggregate Employer Information Report (EEO-1) data, roughly 29 percent of all senior-level managers in 2013 were women, suggesting that if boards were to expand their director searches beyond CEOs, more women might be included in the candidate pool. Our analysis of EEO-1 data also found that at the largest companies—those with more than 100,000 employees—women comprised 38 percent of all senior-level managers in 2013, up from 26 percent in 2008. In addition, a few stakeholders said boards need to be more open to appointing women who have not served on boards before. One board director said individuals are more likely to be asked to serve on additional boards once they have prior board experience and have demonstrated they are trustworthy. Potential strategies that boards could implement to address the small number of new directors that are appointed to boards each year could include: Expanding board size. Nine stakeholders we interviewed expressly supported expanding board size either permanently or temporarily to include more women, with five specifically supporting this strategy only as a temporary measure. For example, one stakeholder's board temporarily expanded in size from 8 directors to 11 in anticipation of retirements, but the stakeholder was not in favor of permanently expanding the board size. Some stakeholders noted that expanding board size might make sense if the board is not too large but expressed concern about challenges associated with managing large boards. Three stakeholders were not in favor of expanding board size permanently or temporarily to increase the representation of women on boards. Adopting term limits or age limits. Five stakeholders we interviewed supported boards adopting either term or age limits to address low turnover and increase the representation of women. However, most stakeholders were not in favor of these strategies and several pointed out trade-offs to term and age limits.
For example, a CEO we interviewed said he would be open to limitations on tenure for board directors, especially as the board appoints younger candidates. However, he said directors with longer tenure possess invaluable knowledge about a company that newer board directors cannot be expected to possess. Many of the stakeholders not in favor of these strategies noted that term and age limits seem arbitrary and could result in the loss of high-performing directors. Conducting board evaluations. Twelve stakeholders we interviewed generally agreed it is good practice to conduct full-board or individual director evaluations, or to use a skills matrix to identify gaps. However, a few thought evaluation processes could be more robust or said that board dynamics and culture can make it difficult to use evaluations as a tool to increase turnover by removing underperforming directors from boards. The National Association of Corporate Directors encourages boards to use evaluations not only as a tool for assessing board director performance, but also as a means to assess boardroom composition and gaps in skill sets. Several stakeholders we interviewed discussed how it is important for boards to identify skills gaps and strategically address them when a vacancy occurs, and one stakeholder said doing so may help the board to think more proactively about identifying diverse candidates. In addition, almost all of the stakeholders we interviewed (18 of 19) indicated that either CEOs or investors and shareholders play an important role in promoting gender diversity on corporate boards. For example, one stakeholder said CEOs may encourage boards to prioritize diversity efforts by "setting the tone at the top" of companies and acknowledging the benefits of diversity. In addition, several stakeholders said that CEOs may serve as mentors for women and sponsor, or vouch for, qualified women they know for board seats. One stakeholder we interviewed developed a program to help women in senior management positions become board-ready and has also recommended qualified women when asked to serve on the boards of other companies. Nearly all of the stakeholders we interviewed (18 of 19) said that investors play an important role in promoting gender diversity on corporate boards. For example, almost all of the board directors and CEOs we interviewed said that investors or shareholders may exert pressure on the companies they invest in to prioritize diversity when recruiting new directors. According to one board director we interviewed, boards listen to investors more than any other actor, and they take heed when investors bring attention to an issue. While most stakeholders we interviewed emphasized their preference for voluntary efforts by business to increase gender diversity on corporate boards over government mandates such as quotas, several large public pension fund investors and many stakeholders we interviewed (15 of 19) supported improving federal disclosure requirements on board diversity. Stakeholders were generally supportive of the government undertaking efforts to raise awareness about gender diversity on boards or to collect and disseminate information on board diversity. Most stakeholders we interviewed (16 of 19), however, did not support government quotas as a strategy to increase board gender diversity in the United States.
Several suggested that quotas may have unintended consequences—boards may strive to meet the quota, but not to exceed it; boards may appoint directors who are not the best fit for the board just to meet the quota; and there may be the perception that women did not earn their board seat because of their skills, but instead were appointed for purposes of meeting a requirement. However, a few stakeholders and other organizations and researchers we interviewed stated that quotas are an effective means of achieving increased representation or that the prospect of quotas may spur companies to take voluntary actions to address gender diversity on boards. While the SEC seeks to ensure that companies provide investors with the material information they need to make informed investment and voting decisions, we found that the information companies disclose on board diversity is not always useful to investors who value this information. According to SEC's 2014-2018 Strategic Plan, one of the Commission's objectives is to structure disclosure requirements to ensure that investors have access to useful, high-quality disclosure materials that facilitate informed investment decision-making. The SEC notes in its strategic plan that it is helpful for information to be provided in a concise, easy-to-use format tailored to investors' needs. In addition, the SEC acknowledges that the needs of investors may vary and that investors' needs are affected by their backgrounds and goals. Several large public pension fund investors and many of the stakeholders we interviewed (12 of 19) called into question the usefulness of information companies provide in response to SEC's current disclosure requirements. Specifically, in a recent petition to the SEC (investor petition) to improve board nominee disclosure, a group of nine public fund fiduciaries supervising the investment of over $1 trillion in assets stated that some companies have used such broad definitions of diversity that the concept conveys little meaning to investors. In its requirements for company disclosure on board diversity, SEC leaves it up to companies to define diversity in ways they consider appropriate. As a result, there is variation in how much information companies provide in response to the requirements as well as the type of information they provide. A recent analysis of S&P 100 firms' proxy statements from 2010 through 2013 found that most of the companies chose to define diversity to include characteristics like relevant knowledge, skills, and experience. Approximately half of the companies reported defining diversity to include demographic factors such as gender, race, or ethnicity. Figure 6 illustrates the range of information companies provide on board diversity. For example, Company A and Company D provide information on demographic diversity and specifically disclosed the number of women on the board; Company C combined information on gender diversity with other demographic information; and Company B did not provide any numerical information on demographic characteristics, including gender diversity. Furthermore, SEC's requirement for companies to disclose information related to a board policy for considering diversity in the nomination process, if they have such a policy, may not yield useful information. For example, the recent analysis of S&P 100 firms' proxy statements previously mentioned found that 8 of the 100 companies reviewed disclosed the existence of a diversity policy in 2010 through 2013.
In addition, according to the analysis, a substantial number of companies disclosed the absence of a policy or were silent on the topic. According to SEC's requirements, if a board does have a policy, then it must provide additional information on how the policy is implemented and assessed, leading some investors and others we interviewed to question whether this requirement creates a disincentive for companies to disclose a policy. The investor petition to the SEC supported improving existing disclosure requirements and requested that the SEC require new disclosures on board diversity, specifically to indicate directors' gender, racial, and ethnic diversity in a chart or matrix in addition to their skills and experiences. Those who submitted the investor petition believe there are benefits to diverse boards, such as better managing risk and including different viewpoints, and that having more specific information on individual director diversity attributes is necessary for investors to fully exercise their voting rights. They said that as large investors, they have an interest in electing a slate of board directors who are well-positioned to help carry out a company's business strategy and meet their long-term investment needs, and that for at least some investors, demographic diversity is an important factor to consider when electing board directors. Most of the 19 stakeholders we interviewed (15 of 19) also supported improving SEC rules to require more specific information from public companies on board diversity. In addition to increasing transparency, some organizations and researchers we interviewed highlighted that disclosing information on board diversity may cause companies to think about diversity more and thus may be a useful strategy for increasing pressure on companies to diversify their boards. Twelve stakeholders we interviewed explicitly supported SEC requiring companies to specifically disclose the number of women on the board; five others were not opposed to disclosing this information; and two questioned whether this specificity was necessary as companies already include the names of board directors in their proxy statements or may include photos of directors. While the investor petition acknowledged that some companies provide aggregate board diversity information on gender and race, the petitioners said that diversity information at the board level is not available for all companies. They also stated that it can be difficult to determine gender diversity through proxy statements and is time-consuming to collect this information on their own. Without specific information on board diversity that is concise and easy to use, investors may not be fully informed in making decisions. SEC officials told us they intend to consider the investor petition requesting changes to board diversity disclosure as part of the SEC's Disclosure Effectiveness Initiative—an ongoing review of all SEC disclosure requirements to improve them for the benefit of companies and investors. SEC's review of its disclosure requirements provides an opportunity for the agency to solicit broader input on making specific changes to the disclosure requirements on board diversity. We provided a draft copy of this report to the Securities and Exchange Commission and the Equal Employment Opportunity Commission for review and comment. SEC staff provided technical comments that we incorporated, as appropriate. EEOC did not have comments.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chair of the Securities and Exchange Commission, the Chair of the Equal Employment Opportunity Commission, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To identify trends in women's representation on corporate boards and characteristics of male and female board directors, we analyzed a dataset from Institutional Shareholder Services, Inc. (ISS) that contained information about individual board directors at each company in the S&P Composite 1500 from 1997 through 2014, the years for which they collected these data. The ISS data include publicly available information on directors compiled from company proxy statements and other U.S. Securities and Exchange Commission (SEC) filings. The data include information such as gender, age, committee memberships, race and ethnicity, and other characteristics. To determine the reliability of the ISS data, we compared it to other analyses of women's board representation to see if our results were comparable, interviewed knowledgeable ISS employees and other researchers who have used ISS data, and conducted electronic testing of the data. In cases where we did find discrepancies in the data, we discussed the issue with ISS employees and either resolved the issue or determined the specific data element was not sufficiently reliable for our analysis and excluded it from our review. Based on our assessment of the reliability of the ISS data generally and of data elements that were critical to our analyses, we determined that they were sufficiently reliable for our analyses. We used ISS data to provide descriptive statistics on characteristics of male and female board directors, including comparing the age and tenure of female board directors to those of male directors, and we also presented information on the representation of women by company size and industry. The ISS data divided companies into the S&P 500 (large cap companies), S&P 400 (mid cap companies), and S&P 600 (small cap companies), which enabled us to conduct analyses by company size. The companies that comprise these indices, including the composite S&P 1500, may change each year depending on the value of the company at the time the index is established. Thus, our analysis is a point-in-time estimate based on the composition of the indices in a given year. The ISS data did not include industry or sector for the companies in the dataset. We used data from the Bloomberg Industry Classification System to identify the industries for the companies in the ISS dataset by matching stock market ticker symbols. We were able to make these matches for 96 percent of the director observations in the ISS data. When we could not make a match, it was typically because we could not locate the ticker in the Bloomberg data. This could be the case, for example, if a company had merged with another company or been dissolved.
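The ticker-based matching step described above can be illustrated with a brief sketch. This is not GAO's actual code; the file names and column names (iss_directors.csv, bloomberg_industries.csv, ticker, industry) are hypothetical stand-ins for the ISS and Bloomberg fields, and the merge simply attaches an industry label to each director observation by ticker symbol.

```python
import pandas as pd

# Hypothetical inputs: one row per director-year observation from ISS, and a
# Bloomberg-derived lookup table mapping ticker symbols to industries.
iss = pd.read_csv("iss_directors.csv")                # includes a "ticker" column
bloomberg = pd.read_csv("bloomberg_industries.csv")   # columns: "ticker", "industry"

# Attach an industry to each director observation by matching ticker symbols.
merged = iss.merge(bloomberg[["ticker", "industry"]], on="ticker", how="left")

# Share of director observations successfully matched to an industry
# (the report describes a match rate of about 96 percent).
match_rate = merged["industry"].notna().mean()
print(f"Matched {match_rate:.0%} of director observations to an industry")
```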
In addition to presenting past trends and descriptive statistics on board membership, we used the ISS data to determine the likelihood of a board adding a woman based on the number of women already on the board. Specifically, we computed how the percentage of boards that have added a woman changes with the number of women already on the board. To do this, we determined the proportion of companies with 0, 1, or 2 women on the board that added a woman in that year. While we did not control for other factors, such as industry, we did conduct this analysis separately for large, medium, and small firms. We also developed two hypothetical projections to illustrate future gender representation on corporate boards. Neither of these projections is meant to be predictive of what will happen over the coming decades. In one scenario, we assumed an equal proportion of men and women join boards each year starting in 2015. In the second scenario, we assumed only women join boards as new board directors beginning in 2015. For both projections, we made the following assumptions based on ISS data on directors in the S&P 1500 from 1997 through 2014: The total number of board directors in the S&P 1500 will stay constant at 14,000 each year, based on the average of the total number of board directors in the S&P 1500 in 2013 and 2014. The total number of new board directors joining companies in the S&P 1500 will stay constant at 600 new directors each year, which is the average number of new board directors joining companies in the S&P 1500 across the years of our analysis. We used 600 as an indicator of the number of board directors leaving their board positions each year. In 2014, women on boards tended to be younger than men and to have shorter tenures. Therefore, to reflect an assumption that women leave boards at a slightly lower rate than men, we estimated that the proportion of women among the 600 departing board directors in each year would equal the proportion of women who were on boards 10 years prior (when women were less represented). (An illustrative sketch of this projection approach appears below.) In addition to the affiliations above, the CEOs and board directors we interviewed collectively have experience serving at companies in a wide range of industries, including the following: Agilent Technologies, Inc.; Avaya; Avon Products, Inc.; eHealth, Inc.; Engility Holdings, Inc.; Exelixis, Inc.; Integrated Device Technology (IDT); ION Media; Juno Therapeutics, Inc.; Kohl's Corporation; Kraft Foods, Inc.; Time Inc.; TJX Companies, Inc.; UNUM Corporation; Walmart; Westinghouse; Xerox Corporation; and Yahoo! Inc. In addition to the contact named above, Clarita Mrena (Assistant Director), Kate Blumenreich (Analyst-in-Charge), Ben Bolitzer, and Meredith Moore made significant contributions to all phases of the work. Also contributing to this report were James Bennett, David Chrisinger, Kathy Leslie, James Rebbe, Walter Vance, and Laura Yahn.
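The projection approach described in the appendix above can be illustrated with a short simulation. The sketch below is not GAO's model: it applies the stated assumptions (14,000 total seats, 600 new and departing directors per year, and departing directors including women at the rate observed 10 years earlier), and it substitutes a simple linear interpolation between the roughly 8 percent (1997) and 16 percent (2014) shares reported earlier for the actual ISS historical series.

```python
# A minimal sketch of the two hypothetical projections, under the assumptions
# stated above; not GAO's actual model or data.

TOTAL_SEATS = 14_000   # total S&P 1500 board seats, held constant
NEW_PER_YEAR = 600     # new directors joining (and directors leaving) each year

# Assumed historical share of women on boards, linearly interpolated between
# roughly 8 percent in 1997 and 16 percent in 2014 (illustrative only).
history = {year: 0.08 + (0.16 - 0.08) * (year - 1997) / (2014 - 1997)
           for year in range(1997, 2015)}

def project(share_of_new_who_are_women, start=2014, end=2060):
    """Project women's share of board seats under one scenario."""
    shares = dict(history)
    women = shares[start] * TOTAL_SEATS
    for year in range(start + 1, end + 1):
        # Departing directors include women at the rate observed 10 years earlier.
        women -= NEW_PER_YEAR * shares[year - 10]
        # New directors include women at the scenario's assumed rate.
        women += NEW_PER_YEAR * share_of_new_who_are_women
        shares[year] = women / TOTAL_SEATS
    return shares

def first_year_reaching(shares, threshold):
    """First projected year in which women's share meets the threshold."""
    return min(year for year, share in shares.items()
               if year > 2014 and share >= threshold)

equal_joining = project(0.5)   # women and men join boards in equal proportion
all_women = project(1.0)       # every new director is a woman

print("30 percent reached (equal joining):", first_year_reaching(equal_joining, 0.30))
print("Parity approached (all new directors women):", first_year_reaching(all_women, 0.50))
```

Under these illustrative inputs, the equal-joining scenario crosses 30 percent women in the early 2020s and the all-women scenario approaches parity around 2024, broadly consistent with the estimates reported above; results would shift somewhat if the actual historical series were used.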
Women make up almost half of the nation's workforce, yet research shows that they continue to hold a lower percentage of corporate board seats compared to men. Research highlights advantages to gender diverse boards, and some countries have taken steps to increase board gender diversity. The SEC requires companies to disclose certain information on board diversity. GAO was asked to review the representation of women on U.S. corporate boards. This report examines (1) the representation of women on boards of U.S. publicly-traded companies and factors that may affect it and (2) selected stakeholders' views on strategies for increasing representation of women on corporate boards. GAO analyzed a dataset of board directors at companies in the S&P 1500 from 1997 through 2014 and conducted interviews with a nongeneralizable sample of 19 stakeholders, including CEOs, board directors, and investors. GAO selected stakeholders to reflect a range of experiences, among various factors. GAO also reviewed existing literature and relevant federal laws and regulations. GAO is not making recommendations in this report. SEC provided technical comments that were incorporated, as appropriate. The Equal Employment Opportunity Commission had no comments. Representation of women on the boards of U.S. publicly-traded companies has been increasing, but greater gender balance could take many years. In 2014, women comprised about 16 percent of board seats in the S&P 1500, up from 8 percent in 1997. This increase was partly driven by a rise in women's representation among new board directors. However, even if equal proportions of women and men joined boards each year beginning in 2015, GAO estimated that it could take more than four decades for women's representation on boards to be on par with that of men. Based on an analysis of interviews with stakeholders, board director data, and relevant literature, GAO identified various factors that may hinder women's increased representation among board directors. These include boards not prioritizing recruiting diverse candidates; few women in the traditional pipeline to board service—with Chief Executive Officer (CEO) or board experience; and low turnover of board seats. Stakeholders GAO interviewed generally preferred voluntary strategies for increasing gender diversity on corporate boards, yet several large investors and most stakeholders interviewed (15 of 19) supported improving Securities and Exchange Commission (SEC) disclosure requirements on board diversity. SEC currently requires companies to disclose information on board diversity to help investors make investment and voting decisions. As stated in its strategic plan, one of SEC's objectives is to ensure that investors have access to high-quality disclosure materials to inform investment decisions. A group of large public pension fund investors and many stakeholders GAO interviewed questioned the usefulness of information companies provide in response to SEC's board diversity disclosure requirements. Consequently, these investors petitioned SEC to require specific disclosure on board directors' gender, race, and ethnicity. Without this information, some investors may not be fully informed in making decisions. SEC officials said they plan to consider the petition as part of an ongoing effort to review all disclosure requirements.
In determining whether to provide testing accommodations, testing companies are required to adhere to Section 309 of the ADA and, in some circumstances, Section 504 of the Rehabilitation Act of 1973, as amended (the Rehabilitation Act), as well as regulations implementing those laws. Section 309 of the ADA provides that "[a]ny person that offers examinations or courses related to applications, licensing, certification, or credentialing for secondary or post-secondary education, professional, or trade purposes" must offer them "in a place and manner accessible to persons with disabilities or offer alternative accessible arrangements…" Section 504 prohibits discrimination against individuals with disabilities by entities receiving federal financial assistance. Persons requesting accommodations are entitled to them only if they have a disability as defined by those statutes. Both the ADA and the Rehabilitation Act define individuals with disabilities as those who have a physical or mental impairment that substantially limits one or more major life activities, have a record of such impairment, or are regarded as having such an impairment. Justice is charged with enforcing testing company compliance with Section 309 of the ADA, and the Departments of Education and HHS are responsible for enforcing compliance with Section 504 of the Rehabilitation Act for any testing companies that receive federal financial assistance from them. In 2008, concerned that judicial interpretations had limited the scope of protection it had intended under the ADA, Congress enacted the ADA Amendments Act of 2008 (ADAAA), rejecting several Supreme Court interpretations that had narrowed the definition of an individual with disabilities. The ADAAA set out guidelines for determining who qualifies as an individual with disabilities and provided a nonexhaustive list of "major life activities," which includes learning, reading, concentrating, and thinking. In the ADAAA, Congress also stated that it found the U.S. Equal Employment Opportunity Commission (EEOC) regulation regarding the definition of an individual with a disability inconsistent with congressional intent and directed the EEOC to revise that regulation. On March 25, 2011, the EEOC issued final regulations implementing Title I of the ADAAA. Those regulations, which went into effect on May 24, 2011, provide that the term "substantially limits" should be construed broadly in favor of expansive coverage to the maximum extent permitted by the ADA and is not meant to be a demanding standard; that when determining if an individual is substantially limited in performing a major life activity, the determination should not require extensive analysis and the individual's ability should be compared with that of "most people in the general population"; and that the comparison to most people will not usually require scientific, medical, or statistical analysis.
The regulations provide that, in applying these principles, it may be useful to consider, as compared with most people in the general population, the condition under which the individual performs the major life activity; the manner in which the individual performs the major life activity; and/or the duration of time it takes the individual to perform the major life activity. In 1991, Justice issued regulations implementing Section 309 which, among other things, provide that any private entity offering an examination must assure that "[t]he examination is selected and administered so as to best ensure that, when the examination is administered to an individual with a disability that impairs sensory, manual, or speaking skills, the examination results accurately reflect the individual's aptitude, achievement level or whatever other factor the examination purports to measure, rather than reflecting the individual's impaired sensory, manual or speaking skills…." Under the regulations, such entities are also required to provide individuals with disabilities appropriate auxiliary aids unless the entity can demonstrate that a particular auxiliary aid would fundamentally alter what the examination is intended to measure or would result in an undue burden. On September 15, 2010, Justice issued a final rule adding three new provisions to its regulations, stating that, through its enforcement efforts, it had addressed concerns that requests by testing entities for documentation regarding the existence of an individual's disability and need for accommodations were often inappropriate and burdensome. The first new provision requires that documentation requested by a testing entity must be reasonable and limited to the need for the accommodation. The second new provision states that a testing entity should give considerable weight to documentation of past accommodations received in similar testing situations, as well as those provided under an Individualized Education Program (IEP) developed under the Individuals with Disabilities Education Act (IDEA) or a plan providing services pursuant to Section 504 of the Rehabilitation Act (a Section 504 plan). The third new provision provides that a testing entity must respond to requests for accommodation in a timely manner. Since the ADAAA and EEOC regulations have broadened the definition of an individual with disabilities, it is possible that the focus for determining eligibility for testing accommodations will shift from determining whether a person requesting testing accommodations is an individual with a disability for purposes of the ADA to what accommodations must be provided to meet the requirements of Section 309 and its implementing regulations. Several recent cases that address the type of accommodations that must be provided under Section 309 will likely affect the latter determination. In Enyart v. National Conference of Bar Examiners (NCBE), the U.S. Court of Appeals for the Ninth Circuit rejected the argument that Section 309 requires only "reasonable accommodations" and adopted the higher "best ensure" standard for determining accessibility that Justice included in its regulations. The court found that the requirement in Section 309, that testing entities offer examinations in a manner accessible to individuals with disabilities, was ambiguous.
As a result, it deferred to the requirement in Justice's regulations providing that testing entities must offer examinations "so as to best ensure" that the exam results accurately reflect the test taker's aptitude rather than his or her disabilities. Applying that standard, the court found that NCBE was required to provide Enyart, a blind law school graduate, with the accommodations she had requested rather than the ones offered by NCBE, based on evidence that her requested accommodations were necessary to make the test accessible to her given her specific impairment and the specific nature of the exam. Extra time represented approximately three-quarters of all accommodations requested and granted in the most recent testing year, with 50 percent extra time representing the majority of this category (see fig. 1). According to researchers, one explanation for the high incidence of this accommodation is that students with the most commonly reported disabilities—learning disabilities, such as dyslexia; attention deficit disorder (ADD); or attention deficit/hyperactivity disorder (ADHD)—may need extra time to compensate for slower processing or reading speeds. In addition, extra time may be needed to support other accommodations, such as having a person read the test to a test taker or write down the responses. The remaining quarter of accommodations that students requested and testing companies granted in the most recent testing year include changes in the testing environment, extra breaks, alternate test formats, and auditory or visual assistance. Changes to the testing environment might involve preferential seating or testing in a separate room to minimize distractions. The accommodation of extra breaks could be an extension of the scheduled break time between test sections or breaks when needed, depending on students' individual circumstances. For example, students might need more than the allotted break time if they have a medical condition that requires them to test their blood sugar or use the restroom. Requests for auditory or visual assistance might entail having a "reader" to read the test aloud, whereas alternate test formats include large type, Braille, or audio versions. Additionally, students requested some other types of accommodations, including being allowed to have snacks as needed or using various types of assistive technology to take the test, such as computer software to magnify text or convert it into spoken language. For example, one blind individual we interviewed described using Braille to take tests and screen reading software to complete assignments when she was an undergraduate student. When it came time to request accommodations for a graduate school admissions test, she requested use of screen reading software because it helps her read long passages more quickly than with Braille alone. However, she also requested use of Braille because it allows her to more closely study a passage she did not initially comprehend. Students and disability experts we spoke with also told us that students may need multiple accommodations to help them overcome their disabilities, and that their requests reflect the accommodations that have previously worked for them. For example, in addition to using screen reading software and Braille, the blind student mentioned above was also allowed extra time, use of a computer, breaks in between test sections, a scribe, and a few other accommodations.
An estimated 179,000 individuals with disabilities—approximately 2 percent of about 7.7 million test takers—took an exam with an accommodation in the most recent testing year, according to data provided to us. Approximately half of all accommodations requested and granted were for applicants with learning disabilities, and one-quarter were for those with ADD or ADHD. The remainder of accommodations requested and granted were for applicants with physical or sensory disabilities, such as an orthopedic or vision impairment; psychiatric disabilities, such as depression; and other disabilities, such as diabetes and autism spectrum disorders (see fig. 2). High schools help students apply for accommodations on undergraduate admissions tests in several ways. According to disability experts and a few high schools we interviewed, school counselors alert students to the need to apply for accommodations and advise them about what to request. Additionally, school officials play an important role in helping students with the application. For certain types of requests, school officials can submit the application on the student's behalf, requiring minimal student involvement. One testing company reported that 98.5 percent of new accommodation requests for a postsecondary admissions test were submitted this way in the most recent testing year. Alternatively, when students submit the application themselves, school officials can provide copies of the disability documentation on file with the school. In addition to helping students with the application process itself, high school officials can also facilitate communications between the student and testing company after the application has been submitted. For example, one high school administrator we interviewed reported contacting a testing company about an accommodation application that had been submitted past the deadline for a specific test date. In this case, the student's recent health diagnosis and treatment necessitated accommodations, and the administrator helped explain why it was important for the student to take the test when originally scheduled.
Postsecondary Schools' Services for Students with Disabilities
Postsecondary schools provide an array of services to help ensure that students have equal access to education. School officials we interviewed work closely with students who self-identify as having a disability and request services to provide accommodations, coordinate with faculty and campus services, meet periodically with students to monitor their progress, and adjust accommodations as necessary. Schools are required to identify an individual who coordinates the school's compliance with the Rehabilitation Act and the ADA. Some schools also have a centralized disability services office to coordinate these services. The transition from high school to postsecondary school can present challenges for all students, and especially for students with disabilities because they must assume more responsibility for their education by identifying themselves as having a disability, providing documentation of their disability, and requesting accommodations and services. For example, students must decide whether or not to use accommodations in their postsecondary courses and, if needed, obtain any new documentation required to support a request for accommodations. Consequently, postsecondary schools play an important role in advising students with disabilities to help them achieve success both in school and when applying for testing accommodations.
Generally, when postsecondary students apply for testing accommodations, school officials provide a letter documenting the accommodations students have used in school. In addition to providing these letters, postsecondary officials we interviewed described several ways they advise students who apply for testing accommodations, including the following:
Counseling students about what accommodations best meet their needs—Postsecondary school officials play an important role in helping students adapt to the new academic environment and in determining the best accommodations to use in school and for standardized tests to achieve success at this level. For example, at one postsecondary school, a committee consisting of two learning specialists, a psychologist, two administrative staff, and the director of the disability services office meets to review each student's request for accommodations and discuss the appropriate services to provide for his or her courses. With technological advances, an official at another school has advised some students to reconsider requesting the accommodation of extra time as they may be better served by other accommodations, such as screen readers, to address their disability. According to the official, using certain technologies has decreased the need for extra time for some students as they have been able to complete more of their work on time.
Explaining application requirements—Postsecondary school officials advise students about the need to apply for testing accommodations and help them understand application requirements, which can be extensive. For example, several postsecondary officials we interviewed said they alerted students to the need to apply for testing accommodations and to allow sufficient time for the application process. One official reported sending reminders to students about the need to apply for accommodations if they are considering graduate school, and another official reported advising students to begin the process 4 to 6 months in advance, in case the testing company requests additional information. Another school official described helping a student interpret the testing company's instructions for the accommodation application, including what documentation is required. One school official said that she helps students understand more subtle aspects of preparing a successful application by, for example, recommending the use of consistent terminology to describe the disability throughout the application to make it easier for reviewers to understand. Several postsecondary officials we interviewed reported advising students about the likelihood of a testing company granting accommodations based on a review of their existing documentation. For example, a psychoeducational evaluation that was current when a student enrolled in postsecondary study might need to be updated by the time the student applies for testing accommodations. At one school, an official estimated that about 30 percent of the students served by the school's disability service office would need to update their documentation if they decide to apply for testing accommodations.
Providing resources to obtain evaluations—A few postsecondary officials we interviewed reported referring students to a variety of resources when they need an updated or new evaluation, sometimes at substantial savings to the student.
Two schools we interviewed make campus resources available to students, such as grants or scholarships that help students who demonstrate financial need offset the cost of evaluations. Schools also reported helping students by providing a mechanism for them to obtain the necessary evaluations on campus. For example, students can obtain an evaluation from the campus health and counseling center at one school for about $700, while the psychology clinic and the Department of Neuropsychology at another school provide these evaluations on a sliding fee basis. Additionally, officials said that they provide students with a list of area professionals who conduct evaluations, although such outside sources could cost several thousand dollars and may not be covered by health insurance.
In reviewing requests for accommodations, testing companies included in our study reported considering a number of factors to determine whether applicants have a disability that entitles them to accommodations under the ADA. As part of their review process, the testing companies included in our study typically look for a current disability diagnosis made by a qualified professional. However, seven testing companies included in our study either stated in their guidance for requesting accommodations or told us that the presence of a disability diagnosis does not guarantee an accommodation will be granted because they also need to consider the impact of the disability. Testing companies included in our study reported reviewing applications to understand how an applicant's current functional limitations pose a barrier to taking the exam under standard conditions. As an example, one testing company official stated that someone with limited mobility might meet the ADA definition of a disability but not need an accommodation if the testing center is wheelchair accessible. To understand an applicant's current functional limitations, testing companies may request documentation that provides evidence of how an applicant's disability currently manifests itself, such as the results of diagnostic tests. For example, several testing companies included in our study request that applications for accommodations include the results of a psychoeducational test to support a learning disability diagnosis. As another example, applicants who have a hearing impairment would be asked to provide the results of a hearing test to document their current condition. Officials from most testing companies included in our review said that, for some types of disabilities, it is important to have documentation that is current to help them understand the functional limitations of an applicant's disability. For example, one testing company official told us that disabilities of an unchanging nature, such as blindness or deafness, could be documented with evaluations from many years ago, whereas psychiatric conditions, learning disabilities, and ADHD would need more current evaluations. For applicants who may not have a formal disability diagnosis or recent medical evaluations, some testing company officials told us that they will look at whatever information applicants can provide to show how they are limited. For example, testing company officials said they will consider report cards or letters from teachers to obtain information about an applicant's condition. Another factor that several testing companies consider is how an applicant's functional ability compares to that of most people.
For example, officials from one testing company told us that before granting an accommodation on the basis of a reading-related disability, they would review the applicant's reading scores to make sure they were lower than those of the average person. Several testing company officials also told us that while reviewing information within an application for accommodations, they may reach a different conclusion about an applicant's limitations and necessary accommodations than what the applicant requested. For example, one testing company initially denied an applicant's request, in part, because the testing company's comparison of the applicant's diagnostic test scores with those of the average person his age led it to different conclusions about the applicant's ability to function than those of the medical evaluator who performed the tests.
As described previously, Justice recently added new requirements to its Section 309 regulations to further define the parameters of appropriate documentation requests made by testing companies in reviewing requests for accommodations. One of those amendments provides that a testing entity should give considerable weight to documentation of past accommodations received in similar testing situations, as well as those provided under an IEP or Section 504 plan. In discussing the regulations, most testing company officials we spoke with told us that they consider an applicant's history of accommodations; however, they also told us they may require more information to make a decision. For example, officials from one testing company said they may want information, such as documentation from a medical professional and a personal statement from the applicant, to explain the need for the accommodation if it had not been used previously or in recent years. In guidance on its revised regulations, Justice states that when applicants demonstrate a consistent history of a diagnosis of a disability, testing companies generally should accept without further inquiry documentation provided by a qualified professional who has made an individualized assessment of the applicant and generally should grant the requested accommodation. Testing company officials also told us they sometimes ask for more information than that provided by a licensed professional in order to understand an applicant's disability and limitations. For example, for certain disabilities, such as learning disabilities or ADHD, officials from two testing companies told us they may request evidence dating back to childhood since these disabilities are considered developmental. While Justice states in its guidance that the amendments to the regulation were necessary because its position on the bounds of appropriate documentation had not been implemented consistently and fully by testing entities, officials from almost all of the testing companies included in our study stated that they did not need to change any of their practices for granting accommodations to be in compliance.
Testing companies included in our study also consider what accommodations are appropriate for their tests. In doing so, some testing company officials told us that they may grant an accommodation that is different from what an applicant requested. Based on their assessment of how an applicant is limited with respect to the exam, testing company officials told us they make a determination as to which accommodations they believe will address the applicant's limitations.
For example, one testing company official told us that three applicants with ADHD all might apply for extra time to complete the exam, but the testing company may decide different accommodations are warranted given each applicant's limitations: extra time for an applicant unable to maintain focus; extra breaks for an applicant who has difficulty sitting still for an extended time period; and preferential seating for an applicant who is easily distracted. Even though one testing company official told us that evidence of a prior history of accommodations can be helpful in understanding how accommodations have been used in the past, having a history of prior accommodations in school does not guarantee that those accommodations will be appropriate for the test. For example, according to one testing company, some students with hearing impairments who need accommodations such as a note taker in school may not need accommodations on a written standardized test. In reviewing requests for accommodations, several testing company officials told us they try to work with applicants when they do not grant the specific accommodations requested. For example, one testing company official told us that if an applicant has a qualifying disability and she could not grant the requested accommodation because it would alter the test, she will try to work with the applicant to determine an appropriate accommodation. In addition, all of the testing companies included in our study have a process by which applicants can appeal the decision if they disagree with the outcome. Based on their reviews, testing companies reported granting between 72 and 100 percent of accommodations that were requested in the most recent testing year for 6 of the 10 tests for which we received data. However, these testing companies counted an accommodation as granted even if it was different from what was requested. For example, testing companies told us that they would have counted an accommodation request for extra time as granted, even if the applicant requested more than what was granted.
Some disability experts and applicants told us that one of the challenges in applying for accommodations was understanding how testing companies made their decisions, especially with respect to how much weight certain aspects of the application appeared to carry. Most of the applicants we spoke with told us that they requested accommodations that they were accustomed to using and were often frustrated that testing companies did not readily provide those accommodations. These applicants had gone through a process for requesting classroom accommodations and had documentation supporting those accommodations, and two applicants told us that they did not believe testing companies deferred to those documents in the way they would expect. Some disability experts expressed concern that testing companies rely heavily on scores that are perceived to be more objective measures, such as psychometric assessments, and two of these experts said they believe that, in addition to scores, testing companies should also consider the clinical or behavioral observations conducted by qualified professionals or school counselors. While testing companies provide guidance outlining their documentation requirements, some applicants and disability experts we spoke with told us that knowing what documentation to provide to a testing company can be a challenge in applying for accommodations.
Two applicants told us it was unclear what and how much information to submit to support their requests. According to one of the applicants, the testing company asked for additional information to substantiate his request for additional time and a separate room to accommodate a learning disability, but was not specific about which documents it wanted or why. Four applicants told us they hired an attorney to help them determine what to submit in response to testing companies' requests for additional information or to appeal a denial. According to one of the applicants, the attorney helped him find the right balance of documentation to submit to successfully obtain accommodations, something he was not able to do when he first applied without legal assistance. School officials we spoke with said documenting the need for an accommodation can be particularly challenging for gifted students—those who demonstrate high levels of aptitude or competence—because they may not have a history of academic difficulty or accommodations. As a result, it can be more difficult to know what documentation to provide to support their requests.
Disability experts and applicants also told us that, in some instances, they found testing companies' documentation requirements on providing a history of the disability to be unreasonable. Two applicants told us that they found it unreasonable to be asked to provide a lengthy history of their disability. For example, one student we spoke with who was diagnosed with a learning disability in college provided the testing company with the results of cognitive testing and documentation of the accommodations he received in college, but the testing company also requested records of his academic performance going back to elementary school. He did not understand how such information was relevant to document his current functioning and found the request to be unreasonable since he was 30 years removed from elementary school. Some applicants also found it frustrating to have to update medical assessments for conditions that had not changed. For example, one applicant was asked to obtain a new evaluation of her disability even though school evaluations conducted every 3 years consistently showed that she has dyslexia. Applicants and disability experts we spoke with told us that obtaining these assessments can be cost prohibitive, and applicants reported costs for updating these assessments ranging from $500 to $9,000.
For blind applicants, access to familiar assistive technology, such as screen-reading or screen-magnification software, was particularly challenging, according to applicants and disability experts. Two blind applicants told us they faced difficulty being allowed to use the specific technology they requested for the test. One of the applicants told us the testing company required him to use its screen-reading software rather than the one he used regularly, resulting in greater anxiety on the day of the test since he had to learn how to use a new tool. This applicant also told us he faced comparable challenges in working with readers provided by different testing companies rather than readers of his own choosing, since he was not comfortable with the readers' styles. While most of the applicants we spoke with eventually received one or more of their requested accommodations, several of them reported having to postpone their test date as a result of the amount of time the accommodations approval process took.
Some applicants told us that they also experienced delays in achieving their educational or professional goals. Additionally, some applicants who were denied their accommodations told us that when they elected to take the test without accommodations, they felt that their exam results did not fully demonstrate their capabilities. For example, one applicant told us that, over a two-year period, he took a licensing exam several times without the accommodations he had requested while appealing the testing company's decision, but each time his scores were not high enough for licensure, nor did they reflect his academic performance. As a result, the applicant was two years behind his peers. Another applicant told us that she did not receive the requested accommodations for one of the licensing exams she applied for and decided not to take the exam for the time being because it was not necessary for her to practice in the state where she was living. However, she anticipates needing to take the test as she furthers her career because she will need the license to practice in surrounding states.
Testing companies we interviewed reported challenges with ensuring fairness to all test takers when reviewing applications for accommodations. Officials from three testing companies expressed concern that some applicants may try to seek an unfair advantage by requesting accommodations they do not need. For example, officials from two of the companies said some applicants may see an advantage to getting an accommodation, such as extra time, and will request it without having a legitimate need. Officials from the other testing company told us that they do not want to provide accommodations to applicants who do not need them because doing so could compromise the predictive value of their tests and unfairly disadvantage other test takers. Officials from several testing companies told us that ensuring the reliability of their test scores was especially important since so many colleges, universities, and licensing bodies rely on them to make admissions and licensing decisions. Testing company officials told us that reviewing requests that contain limited information can be challenging because they do not have sufficient information to make an informed decision. One testing company official told us she received an accommodation request accompanied by a note on a doctor's prescription pad that indicated the applicant had ADHD without any other information to document the applicant's limitations on the test, thereby making it difficult to grant an accommodation. Officials from three testing companies also told us that an applicant's professional evaluator may not have provided enough information to explain why the applicant needs an accommodation. They reported receiving evaluations without a formal disability diagnosis or evaluations with a diagnosis but no information as to how the diagnosis was reached, leaving them with additional questions about the applicant's condition. In addition, some testing company officials said it can be difficult to explain to applicants that having a diagnosis does not mean they have a qualifying disability that entitles them to testing accommodations under the ADA. One testing company official said she spends a great deal of time explaining to applicants that she needs information on their functional limitations in addition to a disability diagnosis. Testing company officials also told us that evaluating requests for certain types of disabilities or accommodations can be difficult.
Some testing company officials told us that evaluating requests from gifted applicants or those with learning disabilities is among the most challenging. Such applicants may not have a documented history of their disability or of receiving accommodations, making it more difficult to determine their current needs. One testing company official told us that greater scrutiny is applied to requests from applicants without a history of accommodations because reviewers question why the applicant was not previously diagnosed and is suddenly requesting accommodations for the test. Officials from two testing companies stated that determining whether to provide for the use of assistive technologies or certain formats of the test can be difficult. One testing company official stated that allowing test takers to use their own software or laptop might result in information, such as test questions, being left on a test taker's computer, which could compromise future administrations of the test since some questions may be reused. The official from the other company stated that providing the exam in a nonstandard format may change the exam itself and make scores more difficult to compare. Officials from two testing companies and an attorney representing some of the testing companies included in our study also told us they have concerns about testing companies being required to provide accommodations that best ensure that applicants' test results reflect the applicants' aptitudes rather than their disabilities, since they believe the ADA only requires testing companies to provide reasonable accommodations. In a brief supporting NCBE's request that the Supreme Court review the Court of Appeals decision in the Enyart case, several testing companies and professional licensing boards stated that a "best ensure" standard would fundamentally alter how standardized tests are administered since testing companies would have to provide whatever accommodation the test taker believes will best ensure his or her success on the test. They stated this would skew nationwide standardized test results, call into question the fairness and validity of the tests, and impose new costs on testing organizations.
Federal enforcement of laws and regulations governing testing accommodations primarily occurs in response to citizen complaints that are submitted to federal agencies. While Justice has overall responsibility for enforcement of Title III of the ADA, which includes Section 309, specifically related to examinations offered by private testing companies, other federal agencies such as Education and HHS have enforcement responsibilities under the Rehabilitation Act for testing companies that receive federal financial assistance from them. Justice can pursue any complaints it receives alleging discrimination under the ADA, regardless of the funding status of the respondent, but Education and HHS can only pursue complaints filed against entities receiving financial assistance from them at the time the alleged discrimination occurred. Education and HHS provided financial assistance to 4 of the 10 testing companies included in our study in at least 1 of the 4 fiscal years included in our analysis, fiscal years 2007 to 2010. When Justice receives a complaint that alleges discrimination involving testing accommodations, it may investigate the complaint, refer it to another federal agency that has jurisdiction, or close it with no further action.
After Justice reviews the complaint at intake, it advises complainants that it might not make a determination about whether a violation has occurred in each instance. Justice officials explained that the department does not have the resources to make a determination regarding each complaint given the large volume and broad range of ADA complaints the agency receives. Specifically, Justice's Disability Rights Section of the Civil Rights Division reported receiving 13,140 complaints, opening 3,141 matters for investigation, and opening 41 cases for litigation related to the ADA in fiscal years 2007 to 2010. Due to the limitations of Justice's data systems, it is not possible to systematically analyze Justice's complaint data to determine the total related to testing accommodations. However, using a key word search, Justice identified 59 closed complaints related to testing accommodations involving 8 of the 10 testing companies included in our study for fiscal years 2007 to 2010. Based on our review of available complaint information, we found that Justice closed 29 complaints without action, that 2 were withdrawn by the complainants, and that 1 was referred to a U.S. Attorney. However, we were unable to determine the final disposition of 27 complaints given information gaps in Justice's data systems and paper files. In addition to identifying closed complaints, Justice identified five closed matters related to testing accommodations for three of the testing companies included in our study for fiscal years 2007 to 2010. One of these resulted in a settlement with the testing company that would allow the complainant to take the exam with accommodations, two were closed based on insufficient evidence provided by the complainant, and the outcome of the remaining two could not be determined based on limited information in Justice's files.
Education and HHS officials told us they review each incoming complaint to determine whether it should be investigated further. For Education and HHS to conduct further investigations, the complaint must involve an issue over which the agencies have jurisdiction and be filed in a timely manner. Eligible complaints are then investigated to determine whether a testing company violated the Rehabilitation Act. Like Justice, Education did not track complaints specifically involving testing accommodations. However, Education was able to identify a subset of complaints related to testing accommodations for the testing companies included in our sample by comparing our list of testing companies against all of its complaints. For fiscal years 2007 to 2010, Education identified 41 complaints related to testing accommodations involving six of the testing companies included in our study. Based on a review of closure letters sent to complainants, we found that Education did not consider testing company compliance for most complaints. Specifically, Education determined that it did not have the authority to investigate 14 complaints involving testing companies that were not receiving federal financial assistance at the time of the alleged violation. Education closed 14 other complaints without making a determination about compliance because the complaint was not filed on time, was withdrawn, or involved an allegation pending with the testing company or the courts. Based on its investigation of the remaining 13 complaints, Education did not identify any instances in which testing companies were not in compliance with the Rehabilitation Act.
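The key word search Justice used to identify testing-accommodation complaints is not described in detail in this report. The sketch below is a minimal, hypothetical illustration of how closed complaint records exported from a tracking system could be screened for such issues; the field names, file name, and search terms are assumptions, not Justice's actual system schema or terms.

```python
import csv

# Hypothetical search terms; the key words Justice actually used are not documented here.
KEYWORDS = ["testing accommodation", "extended time", "extra time", "test accommodation"]

def flag_testing_accommodation_complaints(path):
    """Return IDs of complaint records whose free-text summary mentions any key word.

    Assumes a CSV export with 'complaint_id' and 'summary' columns; the real
    Correspondence Tracking System schema is not public.
    """
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            summary = row.get("summary", "").lower()
            if any(term in summary for term in KEYWORDS):
                flagged.append(row["complaint_id"])
    return flagged

if __name__ == "__main__":
    # Hypothetical file name for a fiscal year 2007-2010 export of closed complaints.
    print(flag_testing_accommodation_complaints("closed_complaints_fy2007_2010.csv"))
```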
HHS identified one complaint against a testing company included in our study, but it was withdrawn by the complainant prior to a determination being made.
Justice's regulations implementing Section 309 of the ADA provide the criteria for its enforcement efforts, and the department has recently taken steps to clarify ADA requirements pertaining to testing accommodations by adding new provisions to its regulations. In June 2008, prior to passage of the ADA Amendments Act, Justice issued a notice of proposed rulemaking, and it issued final regulations in September 2010 following a public hearing and comment period. In issuing those regulations, Justice stated that it relied on its history of enforcement efforts, research, and body of knowledge regarding testing accommodations. Justice officials told us they added new provisions to the regulations based on reports—detailed in complaints and anecdotal information from lawyers and others in the disability rights community—that raised questions about what documentation is reasonable and appropriate for testing companies to request. The final regulations, which took effect in March 2011, added provisions clarifying that testing companies' requests for documentation should be reasonable and limited to the need for the accommodation, that testing companies should give considerable weight to documentation showing prior accommodations, and that they should respond in a timely manner to accommodations requests. Justice provided further clarification of these provisions in the guidance that accompanied the final rule. Since the final regulations took effect, Justice has also filed statements of interest in two recent court cases to clarify and enforce its regulations. In both of these cases, test takers with visual disabilities filed lawsuits seeking to use computer assistive technology to take a standardized test, rather than other accommodations that the testing company thought were reasonable, including Braille, large print, and audio formats. In these statements of interest, Justice discussed the background of the ADA and its regulations and stated that the accommodations offered to those test takers should be analyzed under the "best ensure" standard. Justice also pointed out that Congress intended for the interpretation of the ADA to evolve over time as new technology was developed that could enhance options for students with disabilities. In addition, Justice stated that it had made clear in regulatory guidance that appropriate auxiliary aids should keep pace with emerging technology.
While these actions may help clarify what is required under the ADA, we found that Justice is not making full use of available data and other information to target its enforcement activity. For example, incoming complaints are the primary mechanism Justice relies on to focus its enforcement efforts, and it makes decisions on which complaints to pursue primarily on a case-by-case basis. However, Justice does not utilize information gathered on all its complaints to develop a systematic approach to enforcement that would extend beyond one case. Officials told us that the facts and circumstances of every complaint are unique, but that in determining whether to pursue a particular complaint, they consider a number of factors, including available resources and the merits of the complaint. Officials also said they may group complaints, for example, waiting until they receive a number of complaints related to the same testing company before deciding whether to pursue them.
They also told us they may pursue a complaint if it highlights an aspect of the ADA that has not yet been addressed. For example, Justice officials told us the department investigated one recent complaint because it demonstrated how someone who was diagnosed with a disability later in life and did not have a long history of receiving classroom accommodations was eligible for testing accommodations under the ADA. While these may be the appropriate factors for Justice to consider in determining whether to pursue each individual complaint, we found that the agency has not given sufficient consideration to whether its enforcement activities related to all complaints, when taken in the aggregate, make the most strategic use of its limited resources.
In addition, although Justice collects some data on the ADA complaints it receives, it does not systematically utilize these data to inform its overall enforcement activities in this area. Information on incoming complaints is entered into Justice's Correspondence Tracking System, and data on complaints that it pursues, also known as matters, are entered into its Interactive Case Management system. Justice officials told us that they do not systematically review information from these data systems given system limitations. For example, Justice is able to generate reports on complaints and matters associated with a specific statute (e.g., Title II or III of the ADA), but because no additional data on the type of complaint are entered into these systems, it is not possible to generate a list of complaints and matters related to specific issues, such as testing accommodations. Additionally, because the two systems do not interface, Justice is unable to determine the disposition of all complaints. Of the five closed matters we reviewed, we were able to track only one back to the original complaint in the Correspondence Tracking System.
In the absence of data that can be systematically analyzed, Justice relies on its institutional knowledge of complaints and matters to inform its enforcement efforts. For example, Justice officials told us they know which testing companies are more frequently cited in complaints. While institutional knowledge can be a useful tool to inform decisions, it may leave the agency at risk of losing critical knowledge. For example, with the recent retirement of two key officials from the Civil Rights Division's Disability Rights Section, Justice has lost a major component of its institutional knowledge related to testing accommodations. We provided Justice with the names of testing companies included in our review to identify complaints and matters in its systems related to these companies. While Justice officials said they have conducted similar searches in reference to a specific complaint, they have not conducted systematic searches of their data systems to inform the agency's overall enforcement efforts. In the absence of systematic reviews of information on complaints within its data systems, Justice may be missing out on opportunities to be strategic in identifying enforcement actions that would extend beyond one complaint or that would address widespread issues related to how testing accommodation decisions are made by testing companies. In addition to not making full use of its complaint data, Justice has not effectively coordinated with other agencies to inform its enforcement efforts.
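To illustrate the kind of cross-system matching that the discussion above indicates is not currently possible, the following minimal sketch assumes hypothetical CSV exports from the two systems that share a complaint identifier; where no shared identifier is recorded, a complaint's final disposition cannot be traced from the data alone.

```python
import csv

def load_rows(path):
    """Load a CSV export into a list of dicts (the schemas used here are hypothetical)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def match_dispositions(complaints_csv, matters_csv):
    """Link complaint records to matter records on an assumed shared complaint_id.

    Returns (matched, unmatched) lists of complaint IDs; unmatched complaints are
    those whose disposition cannot be traced across the two systems.
    """
    matter_ids = {m.get("complaint_id") for m in load_rows(matters_csv)}
    matched, unmatched = [], []
    for c in load_rows(complaints_csv):
        (matched if c["complaint_id"] in matter_ids else unmatched).append(c["complaint_id"])
    return matched, unmatched
```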
While Justice has broad responsibility for enforcing compliance with the ADA, Justice officials told us that they were not aware that Education and HHS were receiving and pursuing testing accommodations complaints for testing companies that were recipients of federal funding. Justice officials stated that they have not had regular meetings or exchanges related to testing accommodations with officials from Education or HHS. Officials from HHS also told us that relevant federal agencies provide expertise to one another when necessary, but that no formal or regular coordination meetings related to testing accommodations have been held with Justice or Education. By not coordinating with other federal agencies, Justice is limiting its ability to assess the full range of potential compliance issues related to testing accommodations.
The ADA also authorizes Justice, as part of its enforcement authority, to conduct periodic compliance reviews. Justice reviews testing company compliance with the ADA in the course of investigating complaints, and officials said they could conduct a compliance review if they received a series of complaints against a particular company. However, Justice officials told us they have not initiated any compliance reviews that include a thorough examination of a testing company's policies, practices, and records related to testing accommodations. Justice officials said it would be difficult to undertake a thorough compliance review because testing companies are not required to cooperate with such a review, and the agency lacks the authority to subpoena testing companies. However, in the absence of attempting to conduct such a compliance review, Justice is not in a position to fully assess whether this enforcement mechanism could prove beneficial.
In its 2007-2012 Strategic Plan, Justice states that "outreach and technical assistance will continue to play a vital role to ensure compliance with the civil rights statutes." However, Justice's efforts to provide technical assistance related to testing accommodations have been limited. Justice officials told us they provide technical assistance by responding to calls that come into the ADA hotline or directly to the Disability Rights Section. For example, a disability advocate may reach out to an attorney to discuss a particular student's situation. Justice officials told us they have discussed testing accommodations at meetings and conferences when invited to attend, although they have not made any presentations in recent years. Justice provides some guidance regarding testing accommodations in its ADA Title III Technical Assistance Guide. However, since the guide was last updated in 1993, it does not reflect recent ADA amendments, regulatory changes, or changes in accommodations available to test takers based on advances in technology. Justice officials also told us that they have not recently conducted outreach with testing companies. They reported that their resources have been focused on issuing regulations related to both testing accommodations and other topic areas. Testing company officials we interviewed reported that they had limited or no interaction with Justice, and one official said she would welcome more interaction with Justice to ensure the company was interpreting the laws correctly.
An attorney who works with multiple testing companies included in our study told us that, because Justice only reviews complaints, which represent a small fraction of all testing accommodations requested, it may not have an accurate view of how often testing companies grant accommodations. Similarly, Justice has not leveraged its complaint and case data to target outreach and technical assistance based on the types of complaints most frequently filed. For example, Justice has not analyzed its complaint files to determine whether multiple complaints had similar themes so that it could target its outreach to testing companies to clarify how to apply the regulations in such cases. Without targeted outreach, Justice misses opportunities to limit or prevent testing company noncompliance with the ADA.
Given the critical role that standardized tests play in making decisions on higher education admissions, licensure, and job placement, federal laws require that individuals with disabilities be able to access these tests in a manner that allows them to accurately demonstrate their skill level. While testing companies reported providing thousands of test takers with accommodations in the most recent testing year, test takers and disability advocates continue to raise questions about whether testing companies are complying with the law in making their determinations. Justice, as the primary enforcement agency under the ADA, has taken steps to clarify how testing companies should make their determinations, but its enforcement lacks the strategic and coordinated approach necessary to ensure compliance. Without a systematic approach to reviewing complaints that it receives, Justice cannot ensure that all complaints are consistently considered and that it is effectively targeting its limited resources to the highest-priority enforcement activities. Continuing to target enforcement on a case-by-case basis does not allow Justice to consider what enforcement activities could extend beyond one case. Additionally, in the absence of coordination with other federal agencies, Justice is missing opportunities to strengthen enforcement by assessing the full range of potential compliance issues related to testing accommodations. Justice's largely reactive approach to enforcement in this area may also limit its ability to address problems before trends of noncompliance are well established. After revising its testing accommodations regulations, Justice did not conduct outreach to testing companies or update its technical assistance materials to ensure the requirements were being applied consistently. Since we found that testing companies believe their practices are already in compliance with the new regulatory requirements, it is unclear whether these changes will better protect the rights of students with disabilities. In order to ensure that individuals with disabilities have equal opportunity to pursue their education and career goals, it is imperative for Justice to establish a credible enforcement presence to detect, correct, and prevent violations.
We recommend to the Attorney General that Justice take steps to develop a strategic approach to target its enforcement efforts related to testing accommodations.
For example, the strategic approach could include (1) analyzing its complaint and case data to prioritize enforcement and technical assistance, (2) working with the Secretaries of Education and HHS to develop a formal coordination strategy, and (3) updating technical assistance materials to reflect current requirements. We provided a draft of this report to Justice, Education, and HHS for review and comment. In written comments, Justice agreed with our recommendation, stating that its efforts to ensure the rights of individuals with disabilities are best served through a strategic use of its authority to enforce the ADA’s testing provisions. Justice highlighted some actions the agency will pursue to enhance enforcement in this area. With regard to analyzing its data, Justice stated that it utilizes complaint and case data through all stages of its work and makes decisions about which complaints to pursue based on ongoing and prior work. Also, Justice stated that it is looking for ways to improve its recordkeeping with respect to completed investigations and cases. While improving its recordkeeping is a positive action, we believe it is important for Justice to systematically review its data to strategically enforce the law. As we stated in our report, Justice has not utilized its data to develop a systematic approach to enforcement that would extend beyond one case, nor has it given sufficient consideration to whether its enforcement activities, when taken in the aggregate, make the most strategic use of its limited resources. Justice agreed to pursue discussions with both Education and HHS on the investigation and resolution of complaints about testing accommodations, and agreed to develop additional technical assistance materials on testing accommodations in the near future. Justice’s written comments appear in appendix II. In written comments, Education committed to working with Justice to coordinate efforts to ensure equity in testing for all students, including students with disabilities, consistent with the laws they enforce. Education’s written comments appear in appendix III. Justice and Education also provided technical comments, which were incorporated into the report as appropriate. HHS had no comments on the draft report. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees, the Attorney General, the Secretary of Education, the Secretary of Health and Human Services, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
The objectives of this report were to determine (1) what types of accommodations individuals with disabilities apply for and receive, and how schools assist them; (2) what factors testing companies consider when making decisions about requests for testing accommodations; (3) what challenges students and testing companies experience in receiving and granting testing accommodations; and (4) how federal agencies enforce compliance with relevant federal disability laws and regulations.
For our study, we focused our review on a nongeneralizable sample of 11 tests administered by 10 testing companies. We chose tests that are commonly used to gain admission into undergraduate, graduate, and professional programs and to obtain professional certification or licensure. We included the SAT and ACT in our study as these are the 2 most commonly used standardized tests for admission into undergraduate programs. To determine which graduate-level and certification or licensure tests to include in our study, we reviewed data from the Integrated Postsecondary Education Data System (IPEDS) to establish the fields of study with the largest populations of students graduating with a master's or first professional degree. We also reviewed IPEDS data to determine the top three fields of study in which students with disabilities are enrolled. Based on these data, we identified 5 graduate and professional admissions tests and 4 corresponding professional certification tests that could be required of students graduating with degrees in these fields. The fields of study included business, education, law, medicine, and pharmacy. To inform our findings, we interviewed officials from seven of the testing companies included in our study, and two companies submitted written responses to questions we provided. One testing company declined to participate in our study. (See table 1 for a list of the testing companies and tests included in our study.) The views of the testing company officials we spoke with or received responses from cannot be generalized to all testing companies that provide accommodations to applicants with disabilities.
To determine the types of accommodations requested by individuals with disabilities and granted by testing companies, we reviewed data provided by testing companies on accommodations requested and granted, interviewed testing company officials, interviewed disability experts, and reviewed literature to understand the types of accommodations applicants with disabilities might require. We provided testing companies with a standardized data collection instrument that covered a range of topics, including the types of disabilities students have and the types of accommodations they requested and were granted in the most recent testing year. We asked for data on the number of accommodations requested and granted by type of accommodation and type of disability. In some cases, testing companies did not collect data in the manner we requested and instead provided alternate data to help inform our study. Because of the variance in how testing companies collect data on disability type, we aggregated data into broad disability categories. We identified the following limitations with the data provided by the testing companies, in addition to those noted throughout the report. We excluded data testing companies provided on applicants with multiple disabilities because these data were reported differently across testing companies.
For example, one testing company provided a disability category called multiple disabilities, while another told us that, in cases where an applicant has more than one disability, it captures in its data the disability most relevant to the accommodation. In general, testing companies' data reflect those requests that were complete, not those for which a decision was pending in the testing year for which data were provided. In our data request, we asked questions about the reliability of the data, such as whether there are audits of the data or routine quality control procedures in place. Based on their responses to these questions, we believe the data provided by the testing companies were sufficiently reliable for the purposes of this report.
To understand how schools assist individuals in applying for accommodations, we interviewed officials from a nongeneralizable sample of 8 high schools and 13 postsecondary schools, as well as eight individuals with disabilities who had applied for testing accommodations. (See table 2 for a complete list of schools.) To select schools, we reviewed data from Education's Common Core of Data and IPEDS databases and chose a nongeneralizable sample based on characteristics such as sector (public and private, including nonprofit and for-profit postsecondary), geographic diversity (including urban, suburban, and rural settings for high schools), total enrollment, and size of the population of students with disabilities. We also reviewed publicly available lists of colleges and universities to identify postsecondary schools that offered academic programs in the fields corresponding to the tests we chose. We identified individuals with disabilities to interview based on referrals from experts and school officials and selected them based on their representation of a range of disabilities and tests for which they sought accommodations.
To determine the factors testing companies consider when making their decisions, we reviewed policies and procedures for requesting accommodations found on testing companies' Web sites and reviewed relevant federal laws and regulations pertaining to testing companies. However, we did not evaluate whether these policies and procedures, as written or described to us in interviews—either on their face or as applied in the context of responding to individual requests for accommodations—were in compliance with relevant laws or regulations. Accordingly, statements in this report that describe the policies and procedures used by testing companies to review and respond to requests for accommodations should not be read as indicating that testing companies are either in or out of compliance with applicable federal laws. We also conducted interviews with seven testing companies and reviewed written responses to our questions from two companies that declined our request for an interview. One company declined to participate in our study.
To identify the challenges that applicants and testing companies may experience in receiving and granting accommodations, we interviewed eight individuals with disabilities to learn about their experiences in obtaining accommodations, interviewed testing company officials and reviewed written responses from testing companies about the challenges they face in granting accommodations, interviewed disability advocacy groups and researchers with expertise in various types of disabilities, and reviewed literature.
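Because testing companies labeled disability types differently, their data had to be aggregated into broad categories before comparison, as described above. The sketch below is a minimal illustration of that kind of aggregation; the company-specific labels and the mapping shown are hypothetical, since the actual labels and mappings are not specified in this report.

```python
from collections import Counter

# Hypothetical mapping from company-specific labels to broad categories;
# the labels testing companies actually used varied and are not listed in this report.
CATEGORY_MAP = {
    "dyslexia": "learning disability",
    "specific learning disability": "learning disability",
    "ADD": "ADD/ADHD",
    "ADHD": "ADD/ADHD",
    "low vision": "physical or sensory",
    "orthopedic impairment": "physical or sensory",
    "depression": "psychiatric",
    "autism spectrum disorder": "other",
    "diabetes": "other",
}

def aggregate(requests):
    """Tally accommodation requests by broad disability category.

    `requests` is a list of dicts with a 'disability' label; records labeled
    'multiple disabilities' are excluded because companies reported them inconsistently.
    """
    counts = Counter()
    for request in requests:
        label = request["disability"]
        if label == "multiple disabilities":
            continue
        counts[CATEGORY_MAP.get(label, "other")] += 1
    return counts
```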
The testing companies that participated in our study reviewed draft statements in this report, and their comments were incorporated as appropriate.
To determine how federal agencies enforce compliance with relevant federal laws and regulations, we reviewed pertinent laws and regulations to identify the responsibilities of federal agencies and interviewed officials from Justice, Education, and HHS to learn about the actions these agencies take to enforce compliance. In addition, we obtained data from Justice, Education, and HHS on the number of closed complaints they received between fiscal years 2007 and 2010 related to testing accommodations for the 10 testing companies included in our study. We also reviewed selected court cases regarding testing accommodations. Since Justice receives the majority of complaints, we reviewed all of Justice's available paper files associated with complaints and matters pertaining to the testing companies in our study. We reviewed the paper files to better understand what action Justice takes in responding to complaints and enforcing testing company compliance. We also reviewed all of Education's closure letters and HHS' complaint and closure letters pertaining to testing companies in our study from fiscal years 2007 to 2010 to better understand what actions these agencies take. We reviewed existing information about the data and interviewed knowledgeable agency officials at Justice, Education, and HHS. We identified some limitations with the data, as described in our report. Justice reported receiving 13,140 ADA-related complaints between fiscal years 2007 and 2010. Justice used key word searches of the data to identify 59 closed complaints related to testing accommodations involving 8 of the 10 testing companies included in our study. Justice also identified five closed matters. We were unable to determine the final disposition of 27 complaints due to gaps in Justice's data systems and paper files. By comparing our list of testing companies against its complaints, Education was able to identify 41 complaints. HHS was able to identify only 1 complaint, which was later withdrawn. Due to limitations with the data, we cannot generalize the results of our file review.
We conducted this performance audit from October 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the individual above, Debra Prescott (Assistant Director), Anjali Tekchandani (Analyst-in-Charge), Jennifer Cook, Nisha Hazra, and Justine Lazaro made significant contributions to this report. Jean McSween provided methodological support; Jessica Botsford provided legal support; Susan Bernstein assisted in report development; and Mimi Nguyen assisted with graphics.
Standardized tests are often required to gain admission into postsecondary schools or to obtain professional certifications. Federal disability laws, such as the Americans with Disabilities Act (ADA), require entities that administer these tests to provide accommodations, such as extended time or changes in test format, to students with disabilities. GAO examined (1) the types of accommodations individuals apply for and receive and how schools assist them, (2) factors testing companies consider when making decisions about requests for accommodations, (3) challenges individuals and testing companies experience in receiving and granting accommodations, and (4) how federal agencies enforce compliance with relevant disability laws and regulations. To conduct this work, GAO interviewed disability experts; individuals with disabilities; officials from high schools, postsecondary schools, and testing companies; and officials from the Departments of Justice (Justice), Education, and Health and Human Services (HHS). GAO also reviewed testing company policies and data, federal complaint and case data for selected testing companies, and relevant laws and regulations.
Among accommodations requested and granted in the most recent testing year, approximately three-quarters were for extra time, and about half were for applicants with learning disabilities. High school and postsecondary school officials GAO interviewed reported advising students about which accommodations to request and providing documentation to testing companies, such as a student's accommodations history. Testing companies included in GAO's study reported that they grant accommodations based on their assessment of an applicant's eligibility under the ADA and whether accommodation requests are appropriate for their tests. Testing companies look for evidence of the functional limitations that prevent the applicant from taking the exam under standard conditions. They also consider what accommodations are appropriate for their tests and may grant accommodations that differ from those requested. For example, one testing company official told GAO that applicants with attention deficit/hyperactivity disorder all might request extra time, but may be granted different accommodations given their limitations: extra time for an applicant unable to maintain focus; extra breaks for an applicant unable to sit still for an extended time period; or a separate room for an easily distracted applicant.
Documenting need and determining appropriate accommodations can present challenges to students and testing companies. Some applicants GAO interviewed found testing companies' documentation requirements difficult to understand and unreasonable. Most applicants GAO spoke with said they sought accommodations that they were accustomed to using, and some found it frustrating that the testing company would not provide the same accommodations for the test. Testing companies reported challenges with ensuring fairness to all test takers and maintaining the reliability of their tests when making accommodations decisions. Testing company officials said that reviewing requests that contain limited information can make it difficult to make an informed decision. Some testing company officials also expressed concern with being required to provide accommodations that best ensure an applicant's test results reflect the applicant's aptitude rather than providing what they consider to be reasonable accommodations.
Federal enforcement of laws and regulations governing testing accommodations is largely complaint-driven and involves multiple agencies. While Justice has overall responsibility for enforcing compliance under the ADA, Education and HHS have enforcement responsibilities under the Rehabilitation Act for testing companies that receive federal financial assistance from them. Education and HHS officials said that they investigate each eligible complaint. Justice officials said they review each complaint at intake, but they do not make a determination on every complaint because of the large volume of complaints the agency receives. Justice has clarified ADA requirements for testing accommodations primarily by revising its regulations, but it lacks a strategic approach to targeting enforcement. Specifically, Justice has not fully utilized complaint data, either its own or that of other agencies, to inform its efforts. Justice officials said that they reviewed complaints on a case-by-case basis but did not conduct systematic searches of their data to inform the agency's overall approach to enforcement. Additionally, Justice has not initiated compliance reviews of testing companies, and its technical assistance on this subject has been limited.
GAO recommends that the Department of Justice take steps to develop a strategic approach to enforcement, such as by analyzing its data and updating its technical assistance manual. Justice agreed with GAO's recommendation.
To assess IRS’s 2005 filing season performance in the four key filing season activities—processing, telephone assistance, face-to-face assistance, and Web site—compared to goals, past performance, as well as initiatives intended to improve performance, we reviewed and analyzed IRS reports, testimonies, budget submissions, and other documents and data, including workload data and data from IRS’s current suite of balanced performance measures and annual goals; reviewed legislation, policies, and procedures; reviewed related TIGTA reports and interviewed TIGTA officials about IRS’s performance and initiatives; followed up on our recommendations made in prior filing season and tested for statistical differences between yearly changes for various observed operations at IRS’s Atlanta paper processing center, and Atlanta and Pittsburgh call centers, all of which are managed by IRS’s Wage and Investment operating division (W&I); 3 of IRS’s approximately 400 walk-in locations; and 3 of over 14,000 volunteer sites. analyzed information posted to IRS’s Web site based on our knowledge of the type of information taxpayers look for, and assessed the ease of finding information, as well as the accuracy and currency of the data on the site; reviewed information from companies that evaluate Internet reviewed staffing data for paper and electronic processing, telephone assistance, and walk-in assistance; interviewed IRS officials about current operations, performance relative to 2005 performance goals, and prior filing season performance, trends, and significant factors and initiatives that affected or were intended to improve performance; and interviewed representatives of large private and nonprofit organizations that prepare tax returns and trade organizations that represent both individual practitioners and tax preparation companies. This report discusses numerous filing season performance measures and data that cover the quality, accessibility, and timeliness of IRS’s services, which we have used to evaluate IRS’s performance in key areas for years. Although some measures could be further refined, the majority of IRS’s filing season measures have the attributes of successful measures, including objectivity and reliability. We reviewed IRS documentation, interviewed IRS officials about computer systems and data limitations, and compared those results to GAO standards of data reliability. As a result, we determined that the IRS data we are reporting are sufficiently reliable for assessing IRS’s filing season performance. Data limitations are discussed where appropriate. We conducted our work at IRS headquarters in Washington, D.C.; the Small Business/Self-Employed Division headquarters in New Carrollton, Maryland; the W&I Division headquarters, the Joint Operations Center (which manages telephone service), and a telephone call site in Atlanta, Georgia; a telephone call site in Pittsburgh, Pennsylvania; and walk-in and volunteer locations in Georgia and Maryland. We selected these offices for a variety of reasons, including the location of key IRS managers, such as those responsible for telephone, walk-in, and volunteer services. Hurricanes Katrina and Rita struck just as we were completing our 2005 filing season review. Because Katrina and Rita occurred when we were finishing our work, we did not assess the effectiveness of IRS’s actions. We performed our work from January through October 2005 in accordance with generally accepted government auditing standards. 
IRS received over $10 billion in fiscal year 2005 to fund over 96,000 full-time equivalents (FTE). Of the total, processing and taxpayer services account for 41 percent, almost 40,000 FTEs, as shown in figure 1. Of the roughly 40,000 FTEs, almost 16,000, just less than 40 percent, were budgeted just for processing, most of which occurs during the filing season. IRS provides a variety of taxpayer services. Tens of millions of taxpayers receive telephone assistance. Taxpayers call IRS to inquire about their refunds, the tax laws, or their accounts. The calls are answered by CSRs or automated services. For face-to-face assistance, IRS has approximately 400 walk-in sites where taxpayers ask basic tax law questions, get account information, receive assistance with their accounts, and have returns prepared (if annual gross income is $36,000 or less). Also, low-income and elderly taxpayers get returns prepared at over 14,000 volunteer sites run by community-based coalitions that partner with IRS. IRS’s Stakeholder Partnership, Education, and Communication (SPEC) organization fosters relationships between IRS and the nonprofit community to provide an alternative means for taxpayers to receive volunteer return preparation assistance. According to IRS, SPEC officials identify and select partners, such as the American Association of Retired Persons, that meet taxpayer needs, such as tax assistance for the elderly, and help train, provide resource materials, and oversee operations at these partners’ facilities. In some cases, IRS awards grants, trains and certifies volunteers, and provides reference materials, computer software, and computers to these volunteers. IRS now provides many Internet services that did not exist a few years ago. For example, the “Where’s My Refund” feature has the benefit of reducing phone calls and enables taxpayers to use the IRS Web site to find out if IRS received their tax returns and whether their refunds were processed. IRS’s filing season activities and associated workload volumes are depicted in figure 2. IRS’s performance measures show that IRS has improved its performance processing individual income tax returns and nearly met or exceeded most of its 2005 goals. The continued growth in the number of tax returns filed electronically resulted in more than half of all individual income tax returns being filed electronically for the first time. Despite the continued growth, IRS is not on track to meet its 80 percent long-term electronic filing goal. Electronic filing mandates imposed by several states on tax practitioners who meet certain criteria have increased electronic filing of federal individual income tax returns. However, stakeholders have noted information is lacking on the costs and burdens of mandating electronic filing. As of September 16, 2005, IRS processed about 130 million individual tax returns, including 68 million returns electronically, with no significant disruptions and issued 99 million refunds in a timely manner. According to IRS data, IRS equaled or exceeded its 2004 performance and nearly met or exceeded its 2005 goals for the following seven measures (see app. 1 for further details). Deposit error rate: the percentage of payments applied in error. Deposit timeliness, paper: the amount of interest forgone by not depositing payments the business day after receipt. Letter error rate: the percentage of letters issued to taxpayers with errors. Notice error rate: the percentage of incorrect notices issued to taxpayers. 
Refund error rate, individual: the percentage of refunds with IRS-caused errors in the entity information (e.g., incorrect name or Social Security number). Refund timeliness, paper: the percentage of refunds issued within 40 days for individual tax returns filed on paper. Productivity: the weighted volume of work processed per staff year. For one measure, IRS's performance declined and the 2005 goal was not met. Refund interest paid rate: the interest paid per $1 million of refunds issued late. One measure was new for 2005, and IRS met the goal. Individual Master File efficiency: the number of tax returns processed per staff year. Although IRS's performance measures indicate smooth processing and improved performance, we have previously recommended that IRS adopt others. Specifically, we recommended that IRS adopt a refund timeliness performance measure for individual tax returns filed electronically to promote growth in electronic filing. This measure could help IRS better monitor and evaluate electronic filing performance and determine the impact of initiatives intended to increase electronic filing. However, IRS does not plan to implement such a measure, stating that it would not enhance performance and, in fact, might be counterproductive if disappointed taxpayers who had to wait longer than expected to receive their refunds were to call or seek face-to-face assistance. Although not publicly reported, IRS data show that refunds associated with returns filed electronically are received in about half the time of those filed on paper. IRS publications also inform taxpayers that they can receive their tax refund in 10 days if they file electronically and use direct deposit. The number and costs of refund anticipation loans (RAL) are evidence that taxpayers might benefit from having more information about the time it takes to get refunds. RALs are very short-term loans issued while taxpayers wait for their refunds. In a previous testimony, we found examples of interest rates on RALs of well over 100 percent. The measure could be designed to minimize the problem of disappointed taxpayers calling IRS by, for example, reporting the number of days within which 90 percent of refunds are issued. For the first time, IRS used the Customer Account Data Engine (CADE) to process the simplest taxpayer returns, that is, 1040EZs. CADE is important because it is the foundation of IRS's modernization effort and will ultimately replace the Individual Master File, which currently houses taxpayer data for individual filers, with new technology, applications, and relational databases. As of August 2005, CADE processed over 1.4 million returns with no significant problems, handled $424 million in refunds, and shortened the average turnaround for refunds from 7 days to 3.5 days. A recent TIGTA report noted that information from tax returns was accurate and posted on time to CADE accounts. IRS released the next update to CADE in mid-September 2005; another release is scheduled for January 2006 and is on schedule, according to an IRS division chief. IRS officials attribute this year's smooth processing to adequate planning and relatively few tax law changes. Tax practitioners, who last year prepared approximately 60 percent of all individual income tax returns, agreed that the processing of individual tax returns has gone smoothly during the 2005 filing season.
Representatives from the National Association of Enrolled Agents, National Society of Certified Public Accountants, and other tax-related organizations had positive comments about IRS's processing of individual tax returns. Similarly, TIGTA officials told us that IRS generally processed individual tax returns smoothly in 2005. Electronic filing remains important to IRS because electronic returns cost less to process than paper returns. While obtaining accurate cost estimates may be problematic given inadequacies in IRS's financial accounting system, IRS estimates it saves $2.15 on every individual tax return that is processed electronically. According to IRS data, electronic filing has allowed IRS to use about 300 fewer staff years to process paper returns in 2005 than in 2004, which is reflected in budget savings for processing. This is in addition to about 1,000 staff years saved between 2002 and 2003. IRS anticipates additional staff-year savings when paper processing is eliminated in the Submission Processing Center in Memphis, Tennessee, after the 2005 filing season. This is the first year that more than half of the 130 million returns filed were filed electronically. The number of individual tax returns filed electronically increased by about 11 percent, to an estimated 67.9 million electronic individual tax returns as of September 16, 2005. IRS is forecasting about a 9 percent increase in the number of individual income tax returns filed electronically in 2006. Over the years, IRS has taken numerous actions to encourage electronic filing by taxpayers and tax practitioners, including making electronic filing free to most taxpayers via the Free File Alliance program on the IRS Web site; making the process totally paperless if a taxpayer uses a personal identification number to sign the tax return; making over 99 percent of all individual tax forms suitable for electronic filing; allowing electronic payment of balance due payments; and surveying taxpayers and tax practitioners in response to a recommendation in our 2001 filing season report to determine why 40 million tax returns were prepared on a computer but filed on paper. For the 2005 filing season, IRS took the following actions to encourage taxpayers and tax practitioners to file electronically: it contacted about 4,600 tax practitioners who prepared tax returns on computers but then filed paper tax returns and encouraged them to file electronically (IRS estimates that these types of practitioners file over 15 million paper tax returns annually); it accepted e-filed returns from married taxpayers filing separately who reside in community property states; and it made four more forms available for electronic filing. Despite these actions, IRS is not on track to achieve its long-term goal of having 80 percent of all individual income tax returns filed electronically by 2007. IRS officials do not want to abandon the goal because it serves as a symbol of IRS's determination to increase electronic filing. As we have previously reported, IRS's progress toward the goal has required enhancement of its technology, development of software to support electronic filing, education of taxpayers and practitioners, and other steps that could not be completed in a short time frame. To achieve its long-term goal, however, IRS would have to average about a 26 percent growth rate over the next 2 years.
Assuming a continuation of the current growth rates of 11.08 percent for individual tax returns filed electronically and 1.18 percent for the total number of individual tax returns filed, IRS would receive an estimated 63 percent of all individual income tax returns filed electronically in 2007. This would leave IRS about 23 million short of the approximately 107 million individual income tax returns that would need to be filed electronically to meet the goal. We estimate that if IRS could close this gap, it could save about $49 million in processing costs. IRS, the Electronic Tax Administration Advisory Committee (ETAAC), and GAO do not expect IRS to maintain this year’s rate of growth. IRS is predicting declining growth rates in 2006 and 2007, and in 2003, ETAAC concurred with IRS’s prediction. IRS officials stated that, to achieve its electronic filing goal, tax practitioners and taxpayers who prepare about 40 million tax returns on computers but file paper returns would have to convert to filing electronically; however, IRS’s efforts have not resulted in converting a large portion of these filers from paper to electronic filing. Electronic filing mandates imposed by several states on tax practitioners who meet certain criteria, such as filing 100 state tax returns or more, have increased electronic filing of federal individual income tax returns. According to IRS, the growth rate in 2004 of federal tax returns filed electronically was greater than expected, because five states, including California, mandated electronic filing of state tax returns prepared by qualified tax practitioners who filed a certain number of state returns. In 2005, three more states mandated electronic filing of state tax returns prepared by qualified tax practitioners. These state mandates have contributed to an increase in electronic filing of not only state tax returns, but of federal individual tax returns as well. According to IRS officials, these mandates led to significantly more electronic filing of federal tax returns in these states because tax practitioners converted their entire practices to electronic filing. In total, the eight states with electronic filing mandates added an estimated 5.6 million additional electronically-filed federal income tax returns over the 2 years. For 2006, several additional states, including New York, are mandating electronic filing for state returns for some tax practitioners. In its 2004 report to Congress, ETAAC stated that federal electronic filing growth may now be entirely dependent on what states are doing, rather than actions taken by IRS. IRS cannot require states to mandate electronic filing. However, IRS continually informs states of the benefits of electronic filing in hopes that more states will institute mandates. The growing use of mandates by the states could lead to more discussion of mandates at the federal level. In the past, ETAAC has recommended that Congress should support mandated electronic filing by tax practitioners because in ETAAC’s view, electronic filing mandates are key to IRS achieving its 80 percent goal. IRS knows more about the benefits of mandated electronic filing than it knows about the costs. The benefits are reduced processing costs to IRS, and faster issuance of refunds to taxpayers. As already discussed, IRS has an estimate of how much it saves on each electronic return. However, in 2005, ETAAC noted that decision makers lack information on the costs and burdens of electronic filing. 
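The gap and savings figures above follow from simple compound-growth arithmetic. The sketch below, included only as an illustration, reproduces them using the growth rates and the $2.15-per-return savings estimate cited in this report; the variable names and the 130.3 million total-return figure (the report says "about 130 million") are our assumptions.

```python
# Minimal sketch of the electronic filing projection described above.
# Figures come from the report: 67.9 million e-filed returns and roughly
# 130 million total individual returns in 2005, 11.08 percent e-file growth,
# 1.18 percent total-return growth, and an estimated $2.15 saved per
# electronically processed return. Exact values are assumptions for illustration.

EFILE_2005 = 67.9e6        # individual returns filed electronically, 2005
TOTAL_2005 = 130.3e6       # total individual returns filed, 2005 (approximate)
EFILE_GROWTH = 0.1108      # assumed annual growth in e-filed returns
TOTAL_GROWTH = 0.0118      # assumed annual growth in total returns
SAVINGS_PER_RETURN = 2.15  # IRS estimate of processing savings per e-filed return


def project(years: int) -> tuple[float, float]:
    """Project e-filed and total returns the given number of years past 2005."""
    efile = EFILE_2005 * (1 + EFILE_GROWTH) ** years
    total = TOTAL_2005 * (1 + TOTAL_GROWTH) ** years
    return efile, total


efile_2007, total_2007 = project(2)
share = efile_2007 / total_2007            # about 0.63, versus the 80 percent goal
gap = 0.80 * total_2007 - efile_2007       # about 23 million returns short
savings = gap * SAVINGS_PER_RETURN         # about $49 million in processing costs

print(f"Projected 2007 e-file share: {share:.0%}")
print(f"Returns short of the 80 percent goal: {gap / 1e6:.0f} million")
print(f"Potential processing savings: ${savings / 1e6:.0f} million")
```

Running the sketch yields roughly a 63 percent share, a 23 million-return shortfall, and about $49 million in forgone savings, matching the estimates discussed above.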
The costs and burdens of electronic filing are borne largely by tax practitioners and taxpayers. In the past, tax practitioners have complained about the costs and burdens associated with converting their businesses to electronic filing, although benefits have also been reported once the businesses converted. Knowing more about the nature and magnitude of these costs could provide fact-based information that could help inform any future debate about making electronic filing mandatory for certain categories of tax practitioners or taxpayers. ETAAC believes that IRS is well positioned to gather such information. IRS made a strategic decision to reduce access to its telephone service to accommodate a budget reduction because IRS officials viewed it as a flexible area for absorbing such reductions without significantly affecting taxpayer service. As a result, the average time taxpayers waited for CSRs increased and more taxpayers hung up without receiving service than last year. In contrast, the accuracy of CSR answers to millions of tax law and account questions significantly improved compared to past performance. IRS received 72 million calls on its toll-free telephone lines through mid-July 2005. Over a third of those calls—31 million—were from callers trying to obtain information on the status of their tax refunds. Another 16 million calls were about tax law questions, and 20 million were about taxpayer account questions. The rest were miscellaneous calls. Figure 4 shows how IRS handled those calls. Toll-free telephone calls from taxpayers typically are routed through IRS's telephone system based on taxpayers' response to prompts and are then answered by CSRs or by automated recordings. IRS's automated service handled 24 million calls, and CSRs handled 23 million. The remaining 26 million calls came in after business hours, were transferred, were disconnected, or ended when the caller hung up before receiving service. IRS devotes significant resources to providing access to CSRs. Since 2001, IRS has devoted at least 8,300 staff years per year to telephone service. IRS estimates that it will use 8,561 staff years to answer telephone calls in 2005, primarily during the filing season. According to IRS officials, IRS made a strategic decision to reduce its CSR level of service goal from 85 to 82 percent to accommodate a budget reduction of about $5 million (see app. II). In response, IRS reduced the number of FTEs devoted to phone service by less than 1 percent, resulting in taxpayers having less access to CSRs. Also, due to a lower call volume than last year, as of July 16, IRS had used 7 percent fewer FTEs than planned to answer telephones. IRS officials chose to reduce telephone access because they viewed it as a more flexible area to absorb budget reductions than, for example, processing. IRS officials said that telephone access had improved in recent years to a more acceptable level, giving IRS flexibility to adjust CSR level of service. As a result of IRS reducing access to its telephone assistors, the average time taxpayers waited for CSRs (average speed of answer) increased, and more taxpayers hung up (abandoned rate), as shown in table 1. IRS officials told us that these declines are acceptable and that IRS is effectively managing its resources while still providing a high level of service. According to the IRS Oversight Board's 2004 Taxpayer Attitude Survey, most taxpayers are willing to wait an average of 11 minutes to speak to a CSR.
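The access measures discussed above can be illustrated with a simplified calculation from individual call records. The sketch below is not IRS's official methodology; as noted later in this report, there are no government or industry standard definitions for measures such as average speed of answer, and the call-record structure and outcome labels here are assumptions for illustration.

```python
# Illustrative definitions only, not IRS's official formulas.
# Each call record carries how long the caller waited and how the call ended.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Call:
    wait_seconds: float   # time the caller spent in queue
    outcome: str          # "answered", "abandoned", or "automated"


def telephone_metrics(calls: list[Call]) -> dict[str, float]:
    """Compute simplified access measures over a set of CSR-bound calls."""
    answered = [c for c in calls if c.outcome == "answered"]
    abandoned = [c for c in calls if c.outcome == "abandoned"]
    attempts = len(answered) + len(abandoned)
    return {
        # share of callers seeking a CSR who actually reached one
        "csr_level_of_service": len(answered) / attempts if attempts else 0.0,
        # average time callers waited before a CSR picked up
        "average_speed_of_answer": mean(c.wait_seconds for c in answered) if answered else 0.0,
        # share of callers who hung up before being served
        "abandoned_rate": len(abandoned) / attempts if attempts else 0.0,
    }


sample = [Call(180, "answered"), Call(420, "answered"), Call(600, "abandoned")]
print(telephone_metrics(sample))
```

Because different organizations count transferred, after-hours, and automated calls differently, the same underlying call data can yield noticeably different reported values, which is part of why standardized definitions are being pursued, as discussed next.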
On the other hand, table 1 shows that taxpayers abandoned more calls in 2005 when the average speed of answer increased. According to IRS officials, there are no government or industry standard definitions for telephone measures, such as for average speed of answer. IRS is part of a new government wide group organized to baseline, research, benchmark, standardize, and implement a minimum set of expectations for agencies with telephone operations so that agencies can be measured and compared against an objective standard to demonstrate success and improvement. Some taxpayers who hang up may not be receiving poor service. Preliminary results from IRS analyses of callers who hung up show some taxpayers hang up after hearing the prompt to visit IRS’s Web site. Rather than wait for a CSR, these taxpayers may have switched to IRS’s Web site to get the information they needed. Midway through the 2005 filing season, IRS began collecting detailed data on why taxpayers hang up. According to IRS officials, they will continue to collect and analyze the hang-up data to further determine when and why taxpayers are hanging up. This year represents the first time since 1998 that IRS reduced its annual level of service goal. However, it is difficult to assess what this year’s decline means in the longer term because IRS does not have long-term goals for taxpayer service. A long-term CSR level of service goal may help Congress and other stakeholders understand whether this year’s reversal of telephone access is the beginning of a trend. As will be discussed in a later section, we recognize that setting a long-term goal for telephone service would depend on assumptions about available resources, but that is part of the value of long-term goals. They help clarify the trade-off between service and other priorities. As table 2 shows, compared to goals and past performance, the accuracy of CSR responses to tax law and account questions significantly improved. First, IRS officials attributed the improved tax law accuracy rate primarily to changes in the Probe & Response (P&R) Guide, a publication that CSRs use to help answer tax law questions. In the last 2 years, IRS blamed problems with the P&R Guide for declines in accuracy. Unlike previous years, IRS tested this year’s changes before disseminating the guide to CSRs. Second, with respect to the accuracy of accounts inquiries, IRS officials stated that IRS improved the rate and exceeded the goal because of an improved quality review process, which, in their view, gives employees a heightened sense of their contribution to the agency’s mission. Part of that review process is Contact Recording, a system for recording all contacts between taxpayers and CSRs including, for some calls, the computer screen displays used by CSRs. Managers can then review the contacts in their entirety. IRS officials told us that Contact Recording has resulted in employees receiving more constructive feedback and more efficient and consistent scoring of performance and quality by managers, which likely has improved both tax law and accounts accuracy. One IRS manager we spoke with stated that she liked the system because it allows managers to listen to the prerecorded contact at their convenience, and therefore provide more complete feedback to employees. Furthermore, she said that Contact Recording is more efficient than the method used before, wherein managers listened to selected calls in “real time” and provided CSRs feedback based on what the managers heard during the call. 
As noted in our 2004 filing season report, IRS decided to implement Contact Recording at all call sites by the end of the 2005 filing season. IRS was slightly behind schedule on implementing this system by the end of this year's filing season. IRS had two efforts intended to improve telephone services for the 2005 filing season. First, IRS continued to implement Contact Recording, as previously discussed. Second, in an effort to streamline the process for managing its telephone workforce, and in turn save FTEs, IRS began to implement the Centralized Contact Center Forecasting and Scheduling project in 2005. The project is designed to assess IRS's current telephone workforce management efforts and determine the most appropriate and efficient solution for managing that workforce. IRS has held initial meetings to solicit team members and define high-level requirements for the project. IRS has a project plan in place and is on schedule to meet its deadlines for this project. Past trends have continued as fewer taxpayers used IRS's walk-in services and more used volunteer tax return preparation services. These trends are consistent with IRS's strategy to direct taxpayers away from face-to-face assistance provided by its employees to less costly alternatives. However, IRS lacks reliable data on quality that could be used to compare the two services and understand the impact of IRS's strategy on taxpayers. IRS initiated quality improvement programs intended to improve data reliability for both services, but these programs have yet to produce sufficiently reliable data. Fewer taxpayers used IRS's approximately 400 walk-in sites during the 2005 filing season, continuing a trend since 2001. At these sites, IRS employees provide taxpayers with information about their tax accounts, answer a limited scope of tax law questions, and prepare returns if the taxpayer's annual gross income is $36,000 or less. As reflected in figure 5, the total number of walk-in taxpayer contacts during the 2005 filing season declined by nearly 385,000 (10 percent) from last year. Contacts for return preparation declined by almost 68,000 (22 percent) during the same period. The declines in walk-in usage were consistent with IRS's strategy of reducing costly face-to-face assistance in favor of other service options such as the telephone and Web site. While some of the decline in return assistance is likely due to taxpayers taking advantage of other increasingly available and attractive alternatives, like the improved Web site, some of it is attributable to IRS's attempt to direct taxpayers away from face-to-face assistance. For example, since 2003, IRS has required appointments for most taxpayers seeking return preparation service at its sites. As we have previously reported, this decline and the shift of taxpayers from walk-in sites to other service options are important because they have allowed IRS to transfer time-consuming services, such as return preparation, to other less costly alternatives that can be more convenient for taxpayers. As a result, IRS devoted fewer resources—as represented by direct FTEs—to providing return preparation and other services during the 2005 filing season. As reflected in figure 6, IRS reduced the number of direct FTEs devoted to walk-in sites during the filing season by over 4 percent overall and by 22 percent for return assistance from the same period last year.
In previous years, IRS transferred enforcement staff to walk-in sites to help handle the workload that occurs during the filing season. IRS has nearly eliminated this practice, which pulled the staff away from performing enforcement work, and instead hired more full-time staff to cover the workload during the filing season. To prevent the newly expanded walk-in staff from experiencing downtime when the workload drops off after the filing season, IRS began, in fiscal year 2004, having walk-in staff perform some collections work after the filing season. For example, between October and July 2005, IRS used 53 of its 602 total direct FTEs (9 percent) to handle this collections work. According to IRS officials, this has provided sufficient work to keep walk-in staff productive all year and greatly reduced dependence on enforcement staff. IRS officials stated that, besides helping to regulate the filing season workload, handling these individual taxpayer collection cases at walk-in sites could help them address overdue collections that, in their view, may be overlooked by the normal collections process. Some IRS officials question moving collections work out of the normal collection process because IRS lacks information about the effectiveness of conducting such work using walk-in site staff. According to IRS officials, IRS will have a reporting system in January 2006 that will allow it to analyze the results of that work and compare it to normal collection results to determine the most effective place to do the work. IRS is on schedule for implementing this system, according to IRS officials. Furthermore, IRS is reevaluating the services provided at walk-in sites, including collections work. IRS lacks reliable and comprehensive data on the quality of the services provided at walk-in sites. In 2004, IRS began implementing a program to collect data on the quality of services provided to taxpayers at walk-in sites, and we noted concerns with the reliability of the data due to the collection method. Under this program, managers directly observe a sample of employee interactions with taxpayers. We were concerned that employees' performance could be influenced by the knowledge that they are being observed by managers, biasing the sample results. Also, IRS found that managers were not consistently coding employee performance. As a result, we and TIGTA have stated that the quality review program used to monitor walk-in sites does not provide reliable data, and we have made recommendations intended to improve quality measurement. To obtain reliable and comprehensive data on the quality of services provided, IRS is implementing Contact Recording at walk-in sites, which is similar to the method used for IRS's telephone service, whereby IRS employee and taxpayer interactions will be recorded and reviewed later by managers. IRS piloted Contact Recording at a small number of walk-in sites, ending in July 2005, and decided to continue implementation. The results of the Contact Recording pilot and the current direct observation method are quite different. According to IRS officials, Contact Recording results showed quality to be significantly worse than the results from the direct observation method. However, IRS is not scheduled to fully implement Contact Recording at walk-in sites until December 2007. Until that occurs, IRS will lack reliable and comprehensive data.
While IRS appears to be on schedule based on its implementation plan for Contact Recording, it has previously experienced delays implementing other parts of its quality review program. In fact, in a previous report we made a recommendation to help ensure that IRS addresses the causes of past delays in implementing its quality program at walk-in sites. For 2006, IRS asked TIGTA to assess the accuracy of tax law assistance, one service offered at walk-in sites. The results of TIGTA's requested assessment of tax law assistance would be unreliable because the sites covered would be selected judgmentally and the results could not be projected to all sites. Also, IRS will continue to lack data on the other services it provides, namely account assistance and return preparation. In addition to the lack of reliable data on quality, IRS lacks complete data on what kind of services these sites should offer. As TIGTA and the National Taxpayer Advocate have noted, IRS lacks accurate and complete management information on walk-in sites. For example, TIGTA reported that (1) IRS has limited information on the exact numbers and types of services provided at IRS's walk-in sites as well as information on what kind of face-to-face service taxpayers need or want and (2) the lack of information hinders IRS's ability to make appropriate decisions about the locations and services it provides taxpayers. Consequently, TIGTA made recommendations to IRS to enhance the validity and reliability of information on taxpayer needs and ensure that the services provided effectively and efficiently address these needs. In contrast to IRS's walk-in sites, the number of taxpayers seeking return preparation assistance at about 14,000 volunteer sites increased by nearly 13 percent from last year (see fig. 7). Again, this increase is consistent with IRS's strategy to direct taxpayers away from face-to-face IRS assistance to volunteer sites. As with its walk-in sites, IRS lacks reliable data on the quality of services provided at volunteer sites. Ensuring quality service at volunteer sites is important because not only does IRS provide assistance to volunteer sites, but IRS actively promotes volunteer sites as an alternative to face-to-face services at its walk-in sites. Furthermore, we and TIGTA have reported concerns about the quality of return preparation assistance provided at volunteer sites and have made recommendations to remedy the concerns, some of which date back to 2000. More recently, a TIGTA official told us that while improvements have been made at volunteer sites, continued effort is needed to ensure the accuracy of services provided. IRS recognized the data quality problems and proposed a strategy to address them, but there is still insufficient data to determine the quality of services provided. As part of IRS's strategy for improving quality at volunteer sites, it developed three methods to monitor quality during the 2005 filing season: observation reviews, site reviews, and mystery shopping. However, IRS halted its use of observation reviews immediately after starting due to concerns raised by the National Taxpayer Advocate and some partner organizations that observation reviews violate taxpayer privacy and unfairly target low-income taxpayers.
IRS maintained its two other methods, but according to IRS officials, neither of these methods is as comprehensive as the observation method in following the process volunteers use to prepare returns, such as using appropriate probing techniques to acquire dependency information from taxpayers. Furthermore, IRS conducted only 14 of the proposed 100 mystery shopping visits, which did not provide sufficient results. As a result, the methods used to collect data on quality at volunteer sites were inadequate for monitoring and evaluating quality at volunteer sites in 2005. IRS has proposed conducting return reviews instead of observation reviews for the 2006 filing season. During each site review, IRS officials plan to select three tax returns to examine by comparing a taxpayer's return against the taxpayer's supporting tax-related documents, as well as other information obtained by the volunteers, to determine the accuracy of the return. According to IRS officials, IRS has consulted with several partner groups participating in the volunteer program about the return reviews. The partners did not express the same concerns with return reviews as those they had with observation reviews. IRS intends to use return reviews, along with site and mystery shopping reviews, in an implementation plan for the 2006 filing season as part of its strategy to monitor and evaluate the quality of return preparation at volunteer sites. According to IRS officials, the plan is on schedule for critical events, such as developing publications and training. For example, IRS officials told us that they were working to avoid the logistical problems of last year that resulted in fewer than the anticipated number of mystery shopping reviews. IRS's Web site is important because it provides taxpayers and tax practitioners with assistance without their having to contact IRS employees, which saves IRS resources. Our review of IRS's Web site, external Web site ratings, and various other data indicate that the site performed well, was user friendly, and was used extensively. This is consistent with IRS's strategy to improve taxpayer service by providing options for automated interaction with the IRS, such as "Where's My Refund." IRS's Web site was user friendly, based on our testing for the types of information taxpayers look for when accessing the Web site. Specifically, our testing found that it (1) was accessible and easy to navigate, (2) had no broken links, (3) did not have outdated or inconsistent data, (4) had facts and information logically arranged and easy to obtain, (5) had a search function that worked well, and (6) had a quick response time. Two independent assessments, by Keynote and Brown University's Taubman Center for Public Policy, confirm our observations of IRS's Web site. Keynote, an independent Web site rater of Internet performance that does a weekly study during the filing season, reported that IRS's Web site performed very well. For example, it was ranked in the top 4 out of 40 government Web sites, and users were able to access the IRS Web site in less than 1 second during the entire filing season. The same independent weekly assessment reported that IRS ranked first or second in response time for downloading data. Brown University's Taubman Center for Public Policy rated IRS's Web site among the upper half of 61 federal government Web sites in providing service to citizens. Taxpayers can ask IRS tax law questions via the agency's Electronic Tax Law Assistance (ETLA) program on its Web site.
The substantial improvement in IRS's performance for the ETLA program this year occurred because IRS received significantly fewer questions than last year, which allowed it to improve its timeliness and accuracy in responding to the questions it did receive. IRS received fewer questions because it kept the ETLA function at the less prominent location on the Web site to which it was moved in the middle of last year's filing season, as we reported at the time. According to IRS officials, the number of questions submitted declined from about 64,200 last filing season to 18,700 this filing season. As a result, the average time to respond to questions is down from 3 days last filing season to 1.2 days in the 2005 filing season, and the accuracy rate in responding to questions has improved from 64 percent last year to 86 percent this filing season. IRS intended to discontinue this program for the 2006 filing season for taxpayers residing in the United States because questions can be answered more efficiently if handled via the telephone. However, due to congressional concerns, IRS now plans to keep the program. IRS's Web site experienced extensive use this filing season based on the number of visits to the Web site, pages viewed, and forms and publications downloaded. As of August 31, 2005, the Web site had been visited about 169 million times and users viewed about 1.2 billion pages. This year is the first year that IRS is publicly reporting these figures. Further, as of August 31, 2005, about 150 million forms and publications had been downloaded via the IRS Web site. IRS's Web site continues to provide two very important tax service features that were used extensively by taxpayers: (1) "Where's My Refund" enables taxpayers to check on the status of their refund and, for the first time this year, allows a taxpayer whose refund was returned as undeliverable mail to change their address, and (2) Free File provides taxpayers the ability to file their tax return electronically for free. As of August 31, 2005, 28.5 million taxpayers had accessed the "Where's My Refund" feature, about a 24 percent increase over the same time period last year. As of September 16, 2005, over 5 million tax returns had been filed via Free File, which represents a 46.2 percent increase over the same time period last year. For the first time this year, all individual taxpayers were eligible to file for free via IRS's Web site. The performance of IRS's Web site is consistent with IRS's strategy to improve taxpayer service by providing options for automated interaction with IRS. IRS currently lacks, but is developing, long-term goals for taxpayer services, tax enforcement, and modernization. We have reported on the lack of such goals in past reports in each of these three areas. Similarly, a 2004 Program Assessment Rating Tool (PART) review conducted by the Office of Management and Budget found that IRS lacks long-term goals, not just for filing season activities, but for all aspects of its operations. PART asks, for example, whether a program's long-term goals are specific, ambitious, and focused on outcomes; the review found that IRS did not meet the criteria. IRS has been working to establish long-term goals as part of its strategic planning efforts for all aspects of its operations for well over a year. However, at this time IRS does not have a schedule for finalizing its long-term goals.
According to federal law and good management practices, as part of its strategic planning, an executive agency should not only have annual performance goals for each program, but these annual goals should also be linked to long-term goals that set longer-term and broader expectations for how an agency should be accomplishing its mission. While these long-term goals do not necessarily need to be quantifiable, they should be sufficiently focused on results or outcomes to provide the agency's management and Congress with information not only prospectively (i.e., how well the agency expects to perform) but also retrospectively (i.e., how close actual performance is to expectations). This information holds agencies accountable and helps agencies and Congress make strategic trade-offs. Long-term goals can help an agency meet its goals by setting targets and providing incentives; determine whether annual goals contribute to long-term progress; identify gaps in performance or misaligned priorities; consider new strategies to improve service in the future, especially since these strategies could take several years to implement; and provide a framework for assessing budgetary trade-offs—for example, for IRS, between taxpayer service and enforcement on an annual basis and over the longer term. Long-term goals are a component of the statutory strategic planning and management framework that Congress adopted in the Government Performance and Results Act of 1993 (GPRA). GPRA requires executive agencies to develop a strategic plan with long-term, results- or outcome-oriented goals and objectives for all major functions and operations. Furthermore, each long-term goal must be linked to annual performance goals, which should be quantifiable, i.e., should indicate whether or not incremental progress is being made toward the long-term goal. IRS has taken some steps toward meeting GPRA's criteria for strategic planning. IRS has established a strategic plan and associated strategic and annual performance goals. The strategic goals, which are qualitative and descriptive, are long-term goals in the sense that they represent IRS's vision for the next 5 years. IRS's Strategic Plan for fiscal years 2005-2009 describes IRS's three strategic goals for 5 years hence: (1) improve taxpayer service, (2) enhance enforcement of tax laws, and (3) modernize IRS through its people, processes, and technology. The plan includes strategies and means for achieving the strategic goals, such as reducing face-to-face assistance and increasing less expensive ways of interacting, i.e., electronic interactions such as IRS's Web site. IRS's strategic goals, however, lack specific targets against which progress can be measured. More specifically, IRS's strategic goals do not spell out where IRS wants to be in the future with respect to levels of taxpayer service or enforcement. In contrast, IRS has one long-term goal—for electronic filing—which is quantitative. Because it is specific, it is useful for identifying gaps between actual and intended performance and measuring progress toward the goal. We recognize that developing long-term goals that meet the above criteria is difficult. Not all goals may be as easily quantified as the goal for electronic filing. Because of the difficulty, IRS has experienced delays in finalizing its proposed goals. In our April 2005 testimony, we stated that IRS reported the goals would be finalized and publicized before May 2005.
However, as of October 2005, IRS lacked a schedule for the public release of long-term goals. If long-term goals are not in place in a timely manner in 2006, Congress and IRS management will be less informed about budgetary trade-offs between improving taxpayer service and enhancing enforcement. Such trade-offs, as we have noted before, involve risk. One risk is surrendering some of the gains that have been made in taxpayer service. IRS has taken numerous actions to address the aftermath of Hurricanes Katrina and Rita, including assessing employee and infrastructure needs, providing tax relief, and providing assistance to federal partners. IRS officials report that any effect on this year's filing season performance was slight because the hurricanes occurred so late in the filing season. IRS is also assessing the longer-term implications of the hurricanes for the 2006 filing season and beyond. According to IRS officials, IRS followed mandated procedures, which focus on the impact on employees, critical business processes, and computer systems. IRS established an Emergency Command Center in Nashville, Tennessee, to deal with immediate issues in the field related to employee safety and assistance, damage to facilities and equipment, and security of taxpayer data and other IRS records. The center maintained ongoing communications with the highest levels of IRS management, including the two deputy commissioners, providing daily reports on the impact of the disaster and the recovery process. IRS planned to close the center by mid-September 2005. IRS located and contacted all 517 employees in the affected areas. Many have returned to work at sites that have been reopened or at alternative locations. A vital part of IRS's response to any disaster is its support of other federal agencies and stakeholders. IRS worked with the Federal Emergency Management Agency (FEMA) and the General Services Administration to inspect the buildings, determine if and when those facilities would be operational, and obtain replacement space for the offices closed indefinitely. IRS reopened offices in all but two locations (Gulfport, Mississippi, and New Orleans, Louisiana) in September and plans to reestablish workload inventories at those offices. IRS plans to reopen offices in Gulfport and New Orleans after November 4, 2005. Finally, IRS had four offices closed as a result of Hurricane Rita, all of which were reopened by the end of October 2005. In response to Hurricane Katrina, IRS has assigned employees to work in approximately 30 disaster recovery centers, including in Alabama, Mississippi, and Texas; assigned nearly 5,000 employees to augment federal telephone call sites; and called back 4,000 seasonal employees to minimize the disruption to ongoing IRS work. IRS gave helping disaster victims with the FEMA registration process priority over its regular telephone service; in this process, people call in and provide IRS employees with basic information such as their name, address, and property damage. IRS officials estimated that IRS staff may handle up to 50 percent of these FEMA calls. As of September 18, 2005, IRS had answered over 384,000 telephone calls for FEMA, which was about 65 percent of all such calls at the time. In a letter commenting on a draft of this report, the Commissioner noted that as of the end of October, IRS had answered over 786,000 disaster-related calls. Besides FEMA, IRS was the only federal agency using its own facilities and employees to answer these calls.
IRS's actions to safeguard taxpayer data include working with external groups such as the Federal Protective Service and the General Services Administration to secure facilities and assess operational capability. According to IRS officials, IRS is implementing best practices learned from Hurricane Andrew and the September 11th attack, has retrieved archived documents, and is using many of the managers and employees who were involved in those prior events to support the current efforts. IRS took numerous actions to provide broad relief to affected taxpayers, including postponing deadlines for filing and payment; providing relief from interest and penalties; waiving some low-income housing tax credit rules; waiving the usual fees and expediting requests for copies of previously filed tax returns for affected taxpayers who need them to apply for benefits or to file amended returns claiming casualty losses; and encouraging widespread use of leave donation programs for disaster victims. IRS communicated this and other information via a series of news releases and notices. In addition, IRS established a special toll-free disaster number to handle taxpayer inquiries and launched a special section on its Web site to provide information on tax relief and related issues. IRS also coordinated with the Department of Labor to expedite filing verifications and with the U.S. Postal Service to locate and redirect mail to the affected area. IRS temporarily suspended correspondence and compliance activities in the affected areas; additional guidance was pending at the time we concluded our work. Also, IRS has partnered with the American Institute of Certified Public Accountants to provide outreach to affected taxpayers at disaster recovery centers and has coordinated with the Federation of Tax Administrators to provide assistance to affected states. IRS also is assessing the longer-term implications of Hurricanes Katrina and Rita for the 2006 filing season and beyond, an assessment complicated by the number of taxpayers involved, the dispersion of those taxpayers across the country, and unanticipated computer programming and other business changes that need to be made in response to legislation under relatively tight time frames. Regarding the 2006 filing season, according to IRS officials, IRS's actions, including using seasonal employees to answer IRS calls, should help minimize disruption to telephone service in particular while other employees assist FEMA in answering emergency calls. In recent years, IRS has significantly improved its filing season services to taxpayers. The trend continued this year in several areas, such as telephone accuracy. However, because of overall budget constraints and its strategy of shifting resources from service to enforcement, IRS will be challenged to continue improving service. In principle, IRS could shift resources from service to enforcement while maintaining or improving the quality of service to taxpayers if it can provide service more efficiently. But there is risk that this strategy could result in surrendering some of the past gains in taxpayer services. In practice, however, IRS has been able to shift resources and realize noticeable efficiency gains. IRS's efficiency gains can be linked, in part, to management's focus on results, performance measurement, and, in the case of electronic filing, progress toward its long-term goal.
We identified two areas where additional information might lead to better-informed decision making about how to continue improving IRS's performance. The first area is electronic filing. Despite numerous IRS initiatives that have increased electronic filing, there remains considerable room for further growth. Some states and federal tax experts have recognized that mandatory electronic filing for certain categories of tax practitioners is the one remaining option with the potential for significant impact. However, mandatory electronic filing would likely impose some costs and burdens on tax practitioners. Better information about the nature and magnitude of these costs and burdens would provide more facts about the pros and cons of mandatory electronic filing. The second area is long-term goals. Without agencywide long-term goals that are concrete and as quantifiable as possible, it is difficult to assess IRS's progress and budget requests. To address the problems of meeting its long-term electronic filing goal and the lack of time frames for developing and publicizing long-term goals, we recommend that the Commissioner of Internal Revenue direct the appropriate officials to (1) develop better information about the costs to tax practitioners and taxpayers of mandatory electronic filing of tax returns for certain categories of tax practitioners and (2) establish a schedule for developing IRS's long-term goals. The Commissioner of Internal Revenue provided written comments in a November 4, 2005, letter outlining IRS's view of its 2005 filing season performance in return processing, telephone service, walk-in service, volunteer return preparation, and Internet services, which is reprinted in appendix III. The Commissioner wrote that he appreciated our recognition of IRS's successes for the 2005 filing season, which he characterized as one of the most successful ever for IRS. He stated that IRS was able to balance its resources to focus on both service and enforcement and provide customer service through detailed planning, improved efficiencies, and the dedication of IRS staff. However, he also recognized room for improvement. The Commissioner agreed with both of the report's recommendations. In responding to our first recommendation, to develop better information about the costs of mandatory electronic filing of returns for certain categories of tax practitioners, the Commissioner stated that IRS would initiate a study to analyze the relationship of state-mandated electronic filing requirements to the federal electronic filing rate. Regarding the second recommendation, for IRS to establish a schedule for developing its long-term goals, the Commissioner stated that IRS had initiated efforts to develop long-term, outcome-oriented goals and would establish a schedule for developing these goals by the end of calendar year 2005. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of the report. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance, the House Committee on Ways and Means, and the Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means. We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-9110 or at whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report include Emily Byrne, Evan Gilman, John Lesser, Alan Patterson, Cheryl Peterson, Neil Pinney, Amy Rosewarne, Joanna Stamatiades, and Daniel Zeno. As table 3 shows, the Internal Revenue Service (IRS) nearly met or exceeded eight of the nine processing performance goals for 2005. For five measures (refund timeliness, deposit error rate, letter error rate, productivity, and efficiency), IRS exceeded its goal. For three of the remaining measures (refund error rate, deposit timeliness, and notice error rate), IRS nearly met or met its goal. For one measure, refund interest paid, IRS did not meet the goal, according to IRS officials, because of an unanticipated but substantial increase in the interest rate. Comparing actual 2005 performance to 2004 performance shows that IRS's performance improved or remained about the same for seven of the eight measures, again with the exception of refund interest paid. Table 4 also shows that IRS processing performance in 2005 has improved compared to 2002 performance for all but one of the measures that could be compared. IRS's fiscal year 2005 budget was approximately $10.2 billion, which funded approximately 96,400 full-time equivalents (FTE). Taxpayer services accounted for about $3.6 billion (35 percent) of the entire IRS budget. The remaining budget was used to fund various operations such as examination, collection, investigations, and business systems modernization. From fiscal year 2004 through fiscal year 2005, IRS received a slight budget reduction in taxpayer service of about $103 million (2.8 percent), as shown in table 4. Although IRS officials stated that the reduction had minimal impact on taxpayer service during the 2005 filing season, our analysis of IRS's performance measures showed some impact on service, most notably in the area of telephone access. IRS also absorbed budget reductions for its volunteer and Web site operations, with minimal impact on taxpayer service, according to IRS officials. In both these areas, however, officials stated that future budget reductions could have a negative impact on taxpayer service. As discussed in the report section on long-term goals, long-term goals could help IRS decision makers decide how to best allocate resources during times of budget reductions. Direct Costs. About $18 million of the $103 million budget reduction shown in table 4 was a reduction in direct costs, and these reductions did have some impact on taxpayer service, primarily telephone service. Support Costs. Most of the $103 million reduction, about $85 million, was in support costs. Support costs are composed of both indirect costs and overhead costs such as rent, management, information services, legal services, and security. According to IRS officials, while large, this reduction did not affect taxpayer service because these costs are not directly related to the funding of IRS programs. We examined those budget adjustments that we believed could have significantly affected the filing season activities we review annually. We found the following: Tax return processing.
Processing received a slight overall budget reduction of direct funds of $7.6 million, about 1 percent, in fiscal year 2005. In particular, Submission Processing received a direct reduction of $11 million. IRS absorbed this reduction by allowing some management contracts to expire because they were no longer needed due to the consolidation of paper processing operations. Additionally, the Electronic Tax Administration, which is responsible for advertising electronic filing, received a marketing budget reduction of approximately $7.6 million (40 percent) for the 2005 filing season. In spite of the budget reduction, the number of tax returns filed electronically increased 11 percent from 2004. However, IRS officials are becoming increasingly concerned about the potential impact of future reduction on their ability to increase electronic filing. Telephone services. Perhaps the most significant impact of the budget reduction was in the area of telephone services. According to IRS officials, IRS’s telephone services received a direct budget reduction of $5 million. As a result, taxpayers’ ability to talk to a customer service representative (CSR) was more limited than the year before, their wait time increased, and more taxpayers hung up before speaking with a CSR. Walk-in & volunteer sites. IRS’s budget for walk-in sites remained stable, and due to congressional concerns, plans to close some walk-in sites in 2006 are on hold. For its volunteer sites, IRS shifted resources from taxpayer service to enforcement, resulting in an overall reduction in the Stakeholder Partnership, Education, and Communication (SPEC) budget of about $3 million. SPEC absorbed approximately $2 million of the budget reduction by implementing a voluntary reassignment program that allowed 28 SPEC staff, including 10 front-line managers, to transfer to enforcement work. Although SPEC had planned to reorganize its field management structure for the 2006 filing season as a result of changes made in 2005, as with the walk-in sites, it no longer plans to do so. Also, IRS officials stated that future budget reductions could impede sustainable growth and negatively impact taxpayer service in the future because their model of leveraging resources relies on partnerships and networking opportunities. Web site. Web Services, which oversees IRS’s Web site, received an overall budget reduction of approximately $4 million (10 percent) in 2005. As a result, Web Services reduced some contract services. Officials believe that because Web site use has increased annually, its budget should grow to keep pace with the increase. However, they expressed concern that future reductions could negatively impact the Web site’s performance.
During the filing season, the Internal Revenue Service (IRS) processes about 130 million individual tax returns, issues refunds, and responds to millions of inquiries. Budget cuts, combined with IRS’s strategy of shifting resources from taxpayer service to enforcement, make providing quality service a challenge. GAO was asked to assess IRS’s 2005 filing season performance compared to past years and 2005 goals in the processing of paper and electronic tax returns, telephone service, face-to-face assistance, and Web site service. GAO also examined whether IRS has long-term goals to help assess progress and guide decision making. Finally, GAO summarized IRS’s response to Hurricanes Katrina and Rita, and their possible effects on IRS’s performance. IRS improved some filing season services. According to officials, IRS made a strategic decision to reduce others to accommodate budget cuts. IRS’s processing of returns and refunds went smoothly. Accuracy of responses to telephone inquiries about tax law and about taxpayers’ accounts significantly improved. And IRS’s Web site performed well. On the other hand, in response to budget cuts, IRS reduced access to telephone assistors, resulting in longer wait times and more callers hanging up. IRS officials viewed telephone access as a more flexible area for absorbing budget cuts than, for example, processing. The number of taxpayers visiting IRS walk-in sites continued to decline, while the number of tax returns prepared at volunteer sites increased. This is consistent with IRS’s strategy of reducing the number of its employees providing expensive face-to-face assistance. IRS continues to lack reliable data on the accuracy of walk-in and volunteer site assistance but has plans in place to improve quality measurement. For the first time, more than half of individual tax returns were filed electronically, which is important because electronic filing has allowed IRS to reduce resources devoted to processing paper returns. However, despite IRS’s actions to promote electronic filing, it is not on track to achieve its long-term goal of having 80 percent of such returns filed electronically by 2007. State-mandated electronic filing has proven effective at encouraging electronic filing at the federal level, and one IRS advisory group has recommended a federal mandate. However, little is known about the costs and burdens of such mandates. IRS has been developing long-term goals to help assess agency progress and understand the impact of budget decisions. Because of the difficulty in developing goals, IRS has experienced delays and lacks a schedule for finalizing those goals. IRS is taking numerous actions to assist taxpayers affected by Hurricanes Katrina and Rita. Most of the impact on IRS, such as more questions from taxpayers, will be felt during the 2006 filing season and beyond.
CMS’s goals for Nursing Home Compare and its Five-Star System are consistent with its strategy to improve the quality of health care by providing transparent information about the quality of health care services, including those delivered in nursing homes. According to the strategy, to achieve better care, patients must be given access to understandable information and decision support tools that help them manage their health and navigate the health care delivery system. Since 1998, CMS has publicly reported information on nursing home quality on its Nursing Home Compare website and has increased the amount of information reported on the website over time. CMS initially reported information only about nursing home characteristics and nursing home health inspection results on Nursing Home Compare. Later, CMS began reporting additional information on the website, such as the ratio of nursing staff to residents, nursing homes’ performance on various quality measures, and the number of complaints registered against nursing homes. Additionally, CMS has updated the appearance and functionality of the Nursing Home Compare website over time, with the most significant change being the introduction of the Five-Star System in 2008. In December 2008, CMS made the Five-Star System available to the public on its Nursing Home Compare website to help consumers compare nursing homes more easily. The Five-Star System assigns each nursing home participating in the Medicare or Medicaid programs an overall “star” rating, ranging from one to five. Nursing homes with five stars are considered to have much above average quality, while nursing homes receiving one star are considered to have much below average quality. Calculation of the overall star rating is based on separate ratings that nursing homes receive for each of three components: health inspections, staffing, and quality measures. Health inspection rating. CMS contracts with state survey agencies to conduct unannounced, on-site nursing home health inspections—known as standard surveys—to determine whether nursing homes meet federal quality standards. Every nursing home receiving Medicare or Medicaid payment must undergo a standard survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. State surveyors also conduct complaint investigations in response to allegations of quality problems. If nursing homes are found to be out of compliance with any requirements, state surveyors issue deficiency citations that reflect the scope (number of residents affected) and severity (level of harm to residents) of the deficiency. Surveyors conduct revisits to the nursing home to ensure that the deficiencies identified have been corrected. A nursing home’s health inspection rating is relative to other nursing homes’ health inspection ratings in its state. As such, health inspection ratings are assigned to generally achieve the following distribution within each state: the top 10 percent of nursing homes receive five stars, the bottom 20 percent receive one star, and the middle 70 percent of nursing homes receive two, three, or four stars. Staffing rating. Nursing homes self-report staffing hours worked for a 2-week period at the time of the standard survey.
CMS converts the reported point-in-time staffing hours for nursing staff—registered nurses, licensed practical nurses, and certified nursing assistants—into measures that indicate the number of registered nurse and total nursing hours per resident per day. CMS adjusts the staffing levels for differences in the level of complexity of nursing services required to care for residents across nursing homes—referred to as resident acuity. Each nursing home’s staffing rating is assigned based on how its total nursing and registered nurse staffing levels compare to the distribution of staffing levels for freestanding homes in the nation and staffing level thresholds identified by CMS. Quality measure rating. Nursing homes regularly collect assessment information on all their residents, including information on the residents’ health, physical functioning, mental status, and general well-being. Nursing homes self-report this information to CMS. CMS uses some of the assessment information to measure the quality of certain aspects of nursing home care, such as the prevalence of pressure sores and changes in residents’ mobility. At the time of our analysis, CMS calculated this rating for each nursing home based on 11 of the 18 quality measures posted on Nursing Home Compare. Information on the remaining 7 quality measures is posted on the website but not used in the calculation of the rating. A nursing home’s quality measure rating is assigned based on national thresholds established by CMS. The overall star rating is calculated using a process that combines the star ratings from the health inspection, staffing, and quality measure components—with the greatest weight given to the health inspection rating. The overall rating is assigned based on the following steps: 1. Start with the number of stars for the health inspection rating. 2. Add one star if the staffing rating is four or five stars and also greater than the health inspection rating. Subtract one star if the staffing rating is one star. The overall rating cannot go above five stars or below one star. 3. Add one star if the quality measure rating is five stars. Subtract one star if the quality measure rating is one star. The rating cannot go above five stars or below one star. See figure 1 for an example of how a nursing home’s overall rating is calculated; a simplified sketch of these steps also appears below. CMS updates the ratings on a monthly basis; however, a particular home’s overall rating will change only if it has new data that affect any one of the component ratings. For example, when a home has a health inspection survey, either a standard survey or a complaint investigation, the deficiency data from the survey will become a part of the calculation for the health inspection rating, and the overall rating will also be adjusted, if necessary. We found that CMS uses three standard mechanisms for collecting information on the use of the Nursing Home Compare website: website analytics, website user surveys, and website usability tests. Website analytics. CMS uses website analytics to gauge the performance of Nursing Home Compare and improve the visibility of the website in search engine listings. Through this mechanism, CMS is able to track data such as the number of users, sessions, and page views Nursing Home Compare has per year, as well as its bounce rate. For example, these data show that from 2013 to 2015, Nursing Home Compare averaged 1.5 million sessions per year and 914,000 users per year.
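To make the three-step combination concrete, the following is a minimal Python sketch of the overall-rating logic as described above. It is not CMS's code, it omits any special cases beyond the steps listed in this report, and the function and variable names are our own illustrations.

def overall_star_rating(health_inspection, staffing, quality_measure):
    # Step 1: start from the health inspection rating (1-5 stars).
    overall = health_inspection
    # Step 2: adjust for the staffing rating.
    if staffing >= 4 and staffing > health_inspection:
        overall += 1
    elif staffing == 1:
        overall -= 1
    overall = min(5, max(1, overall))  # cannot go above five stars or below one
    # Step 3: adjust for the quality measure rating.
    if quality_measure == 5:
        overall += 1
    elif quality_measure == 1:
        overall -= 1
    return min(5, max(1, overall))

# Example: 3 stars on health inspections, 5 on staffing, 1 on quality measures.
print(overall_star_rating(3, 5, 1))  # prints 3: staffing adds a star, quality measures remove one

As the example shows, offsetting component adjustments can leave the overall rating unchanged, which is one reason a single overall number can mask differences in the component ratings, as discussed later in this report.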
The website analytics also track the average session duration and the average number of pages that are viewed per session. For this same time period, the average session duration was 5.8 minutes and the average number of pages that were viewed per session was 4.8. Website user surveys. CMS uses website user surveys to collect information about Nursing Home Compare users, how they use the site, and their opinions about the site. According to CMS officials, these surveys, which CMS began using in 2013, are randomly presented in the web browsers of 50 percent of the website’s visitors. The surveys ask users to identify themselves (for example, a caregiver or a researcher), the primary purpose for visiting the site, and their experience in using the site. These surveys provide the only way CMS determines the type of users who come to the website, according to CMS officials. In October 2015, website survey data showed that 59 percent of users of Nursing Home Compare identified themselves as consumers, and the majority of users report coming to the site to research or select nursing homes for themselves or a family member. Website usability tests. CMS uses usability tests—in the form of one-on-one sessions with nine consumer participants—to assess how well consumers navigate the website. CMS has conducted four usability tests from 2011 through 2015. The tests focus on the navigability of the website; however, they also include a few background questions about consumer use. For example, the usability tests ask if participants were previously aware of Nursing Home Compare, what factors consumers find most important in searching for a nursing home, and what expectations consumers have for a nursing home comparison website such as this. In addition to these three standard mechanisms, CMS officials also told us that they gain insight into the use of Nursing Home Compare by holding ad hoc meetings with a variety of stakeholders who are familiar with Nursing Home Compare to discuss the website. CMS held three stakeholder meetings from 2010 through 2015. According to CMS documents, stakeholders have included groups that represent consumers, such as ombudsmen, consumer advocate groups, provider advocate groups, and others involved in nursing home services. Information exchanged during stakeholder meetings includes CMS presentations on pending changes, such as the development of new quality measures, and stakeholder feedback. The mechanisms that CMS uses to collect information about the use of Nursing Home Compare provide the agency with valuable information. However, these mechanisms do not provide CMS with information on the usefulness of the website to a broader range of consumers. Specifically, the usability tests are not designed to assess the website’s usefulness to consumers, and the website analytics and user surveys only provide information about consumers who access the website. Therefore, the mechanisms do not provide CMS with information on nursing home consumers who have not used the website, whether because they are unaware of it or because they choose not to use it, or on the reasons why. In stakeholder interviews we conducted, some nursing home stakeholders noted that many consumers do not know about the website and that consumers collect information from other sources. Obtaining information from consumers who do not access Nursing Home Compare would likely require the dedication of resources to, for instance, consumer-oriented focus groups or broader surveys.
We identified five key areas in which CMS could improve Nursing Home Compare to make it more helpful for consumers. Specifically, we reviewed over 300 individual improvements identified in CMS documents—in part resulting from the mechanisms described above—and in interviews with national and state stakeholders; for example, one internal CMS analysis included over 40 individual recommendations for improvement. Through our analysis, we found that the key areas of improvement are: 1) explanation of how to use the website, 2) additional information about the nursing home, 3) community and consumer outreach, 4) clarity of the website, and 5) navigability of the website. Table 1 below provides more information about these key areas of improvement. For example, the first improvement addresses the fact that the Nursing Home Compare website does not currently have an explanation of how to use the website prominently displayed on its home page; there is neither an introduction to the website nor an obvious explanation of how it should be used. According to many stakeholders, Nursing Home Compare is a valuable tool for consumers, but a few specified that additional explanatory information is needed; without such information, the usefulness of the website may be limited. Although CMS has identified the need for improvements to its Nursing Home Compare website, the agency does not have a systematic process that prioritizes recommended website changes and sets a timeline for implementation. In response to a recommendation in our 2012 report, in August 2013, CMS developed a strategic plan for evaluating the usability of Nursing Home Compare. The plan described tasks, including an expert review of the website, an analysis of competitor websites, and usability testing, some of which resulted in the formal mechanisms that CMS now has in place to collect information on the use of Nursing Home Compare, as previously described. However, CMS does not have a documented and systematic approach describing how to prioritize recommended changes to the website and assess the potential improvements. Instead, officials described a fragmented approach to reviewing and implementing recommended website changes that may include verbal discussions of various factors, such as which changes would provide the broadest impact. CMS officials stated that their current approach to handling website changes had been working well, but since the website has become more complex in recent years, they acknowledged the need for a more formalized approach to addressing identified website changes. CMS has stated the goal for its Nursing Home Compare website as assisting consumers in finding and comparing information about nursing home quality. In addition, under federal internal control standards, management should address identified program deficiencies on a timely basis and evaluate appropriate actions for improvement. However, in the absence of an established process to evaluate and prioritize implementation of improvements, CMS cannot ensure that it is fully meeting its goal for the website. Our analysis of the Five-Star System’s ratings data found that its overall rating provided consumers with distinctions between the highest and lowest performing nursing homes for health inspections in most states.
Specifically, we found that, in 37 out of 50 states, homes that received an overall rating of 5-stars consistently had higher health inspection scores— the component measure that most significantly contributes to the overall rating—than homes that received an overall rating of 1-star. This means that in the 37 states, consumers can safely assume that, in the case of health inspections, the performance of any nursing home in their state with an overall high 5-star rating is better than the performance of any home with an overall low 1-star rating. Some stakeholders we spoke with agreed that distinctions between nursing homes are clearest at the extremes. For example, one stakeholder noted that the Five-Star System is best at helping consumers identify the poorest performing homes to avoid. Stakeholders also noted the value of having a national resource that uses standardized and objective nursing home quality information. However, we also identified four factors that may inhibit the ability of consumers to use the Five-Star System ratings as an easy way to understand nursing home quality and identify high- and low- performing homes, CMS’s stated goal for the Five-Star System. 1. Interpreting overall ratings. As previously described, the Five-Star System’s overall rating is calculated using a process that combines three component ratings. However, the formula for combining the components is not intuitive, which can make interpreting overall ratings difficult for consumers by both complicating the comparison of overall ratings and masking the importance of the component ratings. Specifically, the comparison of overall ratings can be complicated because a consumer cannot assume that the performance on a particular component of the higher-rated home is better than that of the lower-rated home. In our review, we generally did not find distinctions in the scores for homes in the same state with adjacent overall ratings—e.g., 2- and 3-star homes—or for homes with middle overall ratings—2-, 3-, and 4-star homes. For example, in one state, 28 percent of homes with a 3-star overall rating had a better health inspection score than the average health inspection score for homes with an overall 4-star rating. Furthermore, the way CMS calculates the overall rating can mask for consumers issues that may be present in the component ratings. A consumer comparing nursing homes will see each home’s overall rating and component ratings, but they may not understand the impact each component score has on the overall rating. This could lead a consumer to rely more on the overall rating when their individual needs may require attention to one specific component more than the others. For example, two nursing homes that both have a 4-star overall rating could have opposite quality measure component ratings—one with a low 1-star quality measure rating and the other with a high 5-star quality measure rating. (See fig. 2). Many stakeholders stated that it is difficult to distinguish between nursing homes with adjacent or middle ratings. In addition, some stakeholders expressed concern about the overall rating, with one explaining that often consumers make decisions based on the overall rating without understanding it or looking at the underlying components. According to CMS officials, the overall rating provides a summary of complex information to guide consumers—not an explicit report card—in as simple a way as possible. 
They also added that by providing individual component ratings, consumers have the ability to dig deeper into the source of the overall rating. 2. Timeliness of ratings data. Each of the three rating components—which influence the overall star rating—uses a unique source of data that are collected from nursing homes at different time intervals. Specifically, the number of stars assigned to a nursing home is a point-in-time picture based on a prior snapshot of the home’s performance and may not reflect the home’s current status. (See table 2). Some stakeholders we spoke with expressed concern that a consumer may make a decision about a nursing home based on data that do not reflect current conditions in the home. According to CMS officials, a delay is always present due to administrative processes such as validating data prior to posting. For example, the health inspection component data may be delayed due to additional information from the outcomes of revisits to the nursing home to check that deficiencies have been corrected. CMS officials and stakeholders said the Five-Star System should not be the only source of information a consumer uses; they both encourage consumers to explore additional sources of information, including visiting the home. 3. Comparing nursing homes across states. The overall rating and health inspection rating do not allow consumers to compare the quality of homes across states, limiting the ability of the rating system to help consumers who live near state borders or have multistate options where they could place their family members. Because ratings are relative to other nursing homes within a state, homes that receive the highest and lowest ratings in their state may not be the highest or lowest performing homes in another state or nationally. Thus, a consumer cannot assume that a 5-star nursing home in one state would be rated as a 5-star home in any other state. (See fig. 3). Furthermore, we found that when we recalculated the star ratings using a national distribution rather than a state distribution, homes’ ratings often changed, sometimes dramatically. For example, about 23 percent of nursing homes with a 1-star overall rating in December 2015 had improved ratings when compared nationally, and about 30 percent of homes with a 5-star overall rating had decreased ratings when compared nationally. When looking at individual states, we found that the nursing homes in some states would fare better or worse under a national rating. Specifically, the percentage of homes receiving an overall 1-star rating doubled in 4 states and the percentage of homes receiving an overall 5-star rating doubled in 9 states. See appendix III for additional information about the results of our analysis; a simplified sketch of how relative ranking can shift a home’s stars appears below. According to CMS Five-Star System documentation, the rating system is not designed to compare nursing homes nationally. Instead, ratings are only comparable for homes in the same state. CMS made the decision to base the health inspection component on the relative performance of homes within the same state primarily due to variation across the states in the execution of the standard surveys. Because the health inspection component most significantly contributes to the overall rating, this means that the overall rating also cannot be compared nationally. However, the addition of national ratings would be helpful for consumers, and we have previously made recommendations to CMS that would help decrease survey variation across states.
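The following is a minimal sketch, using hypothetical data, of why a home's stars can change when ratings are assigned from a national rather than a state distribution. The percentile cut points loosely mirror the distribution described earlier for the health inspection component (bottom 20 percent one star, top 10 percent five stars); the scores, state names, and the assumption that a higher score means better performance are illustrative only and do not reflect CMS's actual scoring or methodology.

import random

def assign_stars_by_percentile(scores):
    # Map each score to 1-5 stars by its percentile rank within `scores`
    # (higher score = better performance in this illustration).
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    stars = [0] * len(scores)
    for rank, i in enumerate(order):
        pct = (rank + 1) / len(scores)
        if pct <= 0.20:
            stars[i] = 1          # bottom 20 percent
        elif pct <= 0.43:
            stars[i] = 2
        elif pct <= 0.67:
            stars[i] = 3
        elif pct <= 0.90:
            stars[i] = 4
        else:
            stars[i] = 5          # top 10 percent
    return stars

random.seed(0)
homes = [("State A", random.gauss(60, 10)) for _ in range(200)] + \
        [("State B", random.gauss(80, 10)) for _ in range(200)]

# Stars assigned within each state (as the Five-Star System does) versus nationally.
national = assign_stars_by_percentile([score for _, score in homes])
by_state = [0] * len(homes)
for state in ("State A", "State B"):
    idx = [i for i, (s, _) in enumerate(homes) if s == state]
    for i, star in zip(idx, assign_stars_by_percentile([homes[j][1] for j in idx])):
        by_state[i] = star

changed = sum(1 for a, b in zip(by_state, national) if a != b)
print(f"{changed} of {len(homes)} hypothetical homes change stars under a national ranking")

Because the two hypothetical states have different score distributions, many homes shift up or down when the comparison pool changes, which is the mechanism behind the rating changes described above.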
CMS has taken action on many of these recommendations. 4. Lack of consumer satisfaction information in ratings. Because the Five-Star System does not include consumer satisfaction information—a key quality performance measure—the rating system is missing important information that could help consumers distinguish between high- and low- performing nursing homes. We believe consumer satisfaction surveys could be a more direct measure of nursing home satisfaction than other available measures. For example, our analysis of consumer satisfaction data shows that nursing homes with higher overall star ratings did not necessarily have higher resident satisfaction scores or fewer complaints. (See fig. 4). Specifically, our analysis found that the Five-Star System overall ratings for each nursing home in two states that conduct resident satisfaction surveys were only slightly correlated with the percentage of residents that would recommend the home to other consumers—an indicator of consumer satisfaction included on the state surveys. Similarly, when analyzing complaint data for all states—a proxy for consumer satisfaction—we also found only a slight correlation between the total number of consumer complaints registered against a home in each state and the home’s overall Five-Star System rating. Many stakeholders told us that they would like to see resident satisfaction included in the Five-Star System. For example, one state stakeholder group explained that they think it is important for a consumer making a nursing home decision to understand how the administration resolves an issue with a resident when one arises. That type of information is not currently captured in the Five-Star System, but could be captured through a resident satisfaction survey, which could strengthen the ratings. According to CMS officials, they recognize that consumer satisfaction is important information, but collecting the data in a consistent, objective way for all of the nursing homes in the country is a challenge. They acknowledged that some states have been able to overcome these implementation challenges and administer statewide nursing home consumer satisfaction surveys. Until consumer satisfaction information is included in the rating system, consumers will continue to make nursing home decisions without the benefit of this key performance measure and may not be choosing the home that would best meet their needs. While we recognize that gathering this information is challenging, CMS has done so in its hospital rating system. Specifically, CMS developed a hospital consumer satisfaction survey with assistance from HHS’s Agency for Healthcare Research and Quality—an agency that, among other things, focuses on quality measurement and includes consumer satisfaction as one of its National Quality Measures Clearinghouse’s clinical quality measures. In addition to the items discussed above, presentation of the Five-Star System does not prominently display key explanatory information that could help consumers better understand how to use the ratings. Specifically, we found that CMS does not prominently provide descriptions of how to understand the ratings and what consumers should consider when using the ratings or information on how the overall rating is calculated. 
In addition, CMS clearly discloses the date of the data used to assign stars for the health inspection component, but not for the staffing or quality measure components, and does not prominently state the previously discussed limitation that homes can only be compared within a state. For example, in order to find descriptions of how the overall rating is calculated, consumers must follow links that take them off of their nursing home search and results webpages and, as noted previously, an average webpage visit is less than 6 minutes. Many stakeholders we spoke with explained that consumers often have very little time to make a nursing home decision, and a few noted that it is also a stressful process, therefore making prominent and readily available information crucial. In addition, many stakeholders expressed concerns that consumers may not understand the ratings and how they are calculated. Further, many stakeholders expressed concern about the timeliness of the data, with some noting that consumers were generally unaware of the timing of the data. CMS officials described the tension between keeping the Five-Star System as simple as possible for consumers so that they can quickly understand the ratings and also providing enough information on how and when the ratings are calculated. Collectively, the four factors that hinder consumers’ ability to use the Five-Star System ratings, along with the lack of explanatory information provided by CMS, may limit the Five-Star System’s ability to meet CMS’s goal of providing consumers with an easy way to understand nursing home quality and make distinctions between high- and low- performing homes. Nursing Home Compare and the Five-Star System seek to help consumers choose among nursing homes. Nursing home selection can be a stressful and time-sensitive process, so these are important tools that CMS makes available to the public. However, our review found opportunities for improvement in both the website and the ratings. CMS has given much attention to the website since its inception almost 20 years ago. For example, the agency has put in place mechanisms for reviewing the website’s use and has identified numerous improvements that could be made. However, without a systematic process for reviewing options and determining priorities for improvement—currently absent from their efforts—CMS is unable to ensure that the website is meeting its intended goal. A key component of the website is the Five-Star System, containing important quality information on every nursing home so consumers can differentiate between them and choose those that can best meet their needs. Because the Five-Star System contains multiple types of information, compiled from different sources, and has complexities inherent in ratings systems, it can be challenging for consumers to fully understand how to take advantage of the varied information it contains. Additional capability and information not currently included in the rating system could also benefit consumers trying to differentiate between high- and low- performing nursing homes—such as the ability to compare homes nationally and the addition of consumer satisfaction survey information. In addition, prominently displaying explanatory information on how to use the ratings, which does not require users to navigate off the nursing home search and results pages, could help address challenges consumers face when trying to understand the ratings. 
Absent such enhancements, CMS cannot ensure that the Five-Star System is fully meeting its stated goal of helping consumers easily understand nursing home quality and distinguish between high- and low- performing homes. To strengthen CMS’s efforts to improve the usefulness of the Nursing Home Compare website for consumers, we recommend that the Administrator of CMS establish a systematic process for reviewing potential website improvements that includes and describes steps on how CMS will prioritize the implementation of potential website improvements. To help improve the Five-Star System’s ability to enable consumers to understand nursing home quality and make distinctions between high- and low- performing homes, we recommend that the Administrator of CMS take the following three actions: add information to the Five-Star System that allows consumers to compare nursing homes nationally; evaluate the feasibility of adding consumer satisfaction information to the Five-Star System; and develop and test with consumers introductory explanatory information on the Five-Star System to be prominently displayed on the home page. Such information should explain, for example, how the overall rating is calculated, the importance of the component ratings, where to find information on the timeliness of the data, and whether the ratings can be used to compare nursing homes nationally. We provided a draft of this report to HHS for its review and comment. HHS provided written comments, which are reproduced in appendix V. In its comments, HHS described the history of the Nursing Home Compare website and the Five-Star System, improvements the agency has made to both, and concurred with three of our four recommendations. In particular, HHS concurred with our recommendations to establish a process for reviewing potential website improvements that describes how it will prioritize their implementation, evaluate the feasibility of adding consumer satisfaction information to the Five-Star System, and develop and test explanatory information on the Five-Star System to be displayed on the home page. HHS did not concur with our recommendation to add information to the Five-Star System that would allow consumers to compare nursing homes nationally. HHS indicated that because of state variation in the execution of standard surveys, it is difficult to compare homes nationally on the health inspection component. They also noted that the Five-Star System is just one of many factors consumers should use when selecting a nursing home. As we describe in this report, efforts have been and should continue to be made to reduce state variation in standard surveys. For example, CMS regional offices are tracking state differences in deficiency citations. We maintain that the ability for consumers to compare nursing homes nationally is critical to making nursing home decisions, especially for those consumers who live near state borders or have multistate options, and that our recommendation remains valid. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies to the Secretary of HHS. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or clowersa@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix describes additional details of the data analyses we conducted to examine the Five-Star Quality Rating System (Five-Star System). For this examination, we analyzed data from three sources. We analyzed Five-Star System data from the Centers for Medicare & Medicaid Services (CMS). These data provide detailed rating information on over 15,000 nursing homes included in the Five-Star System from the most recent full quarter available at the time of our analysis, which ended December 2015. Additionally, we analyzed CMS consumer complaint data for a six-month period ending in December 2015. We collected and analyzed these data for all 50 states and Washington, D.C. Furthermore, we collected and analyzed 2015 nursing home resident satisfaction survey data from two of our four selected states that collect survey data. We conducted the following analyses: 1. To determine the extent to which the Five-Star System provides consumers with information distinguishing between high- and low-performing nursing homes, we analyzed December 2015 data from CMS’s publicly available Five-Star Scores and Ratings data. Specifically, for each state and for nursing homes at each overall rating level, we determined the range of scores underlying each component star rating: for health inspections, the weighted all cycles score; for staffing, the total adjusted staffing score; and for quality measures, the total quality measure score. We then determined whether, within each state and for each component rating, the scores of the worst performing 5-star nursing homes overlapped with the scores of the best performing 1-star nursing homes. We also conducted this analysis for each combination of the star ratings. 2. To determine the timeliness (or age) of the data used for each component Five-Star rating for consumers viewing the ratings in December 2015, we analyzed data from CMS’s Five-Star Scores and Ratings data for that month downloaded from CMS’s website. Specifically, we calculated the average age of the data for each component rating at that point in time. For the health inspection component, we analyzed the standard survey date, but did not analyze the complaint investigation date because a meaningful average age cannot be calculated. 3. To determine the extent to which nursing homes’ ratings changed when compared nationally rather than compared only within each state, we analyzed December 2015 data from CMS’s Five-Star Scores and Ratings data downloaded from CMS’s website. Specifically, we recalculated each nursing home’s health inspection and quality measure scores, which are normally assigned based on state distributions, so that they were based on a national distribution (new distribution allotments were based on CMS’s state distribution guidelines). We then recalculated each home’s overall rating using our new health inspection component rating, our new quality measure rating, and CMS’s staffing component rating. In addition, we analyzed the change in overall nursing home ratings when applying the methodology nationally. 4. To determine the relationship between nursing home satisfaction data and CMS’s Five-Star ratings, we did the following: a. We used complaints registered against nursing homes by residents, families, ombudsmen, or others as a proxy measure of satisfaction.
Specifically, we analyzed complaint data recorded in CMS’s Automated Survey Processing Environment Complaints/Incidents Tracking System from July 1 through December 31, 2015. We examined the last six months of 2015 to provide a fuller picture of each nursing home’s routine complaint levels. For each state and nationally, we determined the correlation between each nursing home’s total number of registered consumer complaints and its overall Five-Star rating (a simplified sketch of this computation appears below). b. We used the results of 2015 nursing home resident satisfaction surveys from two of our selected states that collect such information. Specifically, we focused on the responses to whether the resident would recommend that nursing home to others as a measure of satisfaction. In one state, this measure was the actual percentage of residents who recommended the home, and in the other state it was the ranking of the home based on residents’ responses. For both states, we determined the correlation between each nursing home’s resident response on the state survey and its overall Five-Star rating. The findings from this analysis cannot be generalized to other states. For all data used in these analyses, we interviewed knowledgeable officials, reviewed related documentation, and, based on these steps, determined that the data were sufficiently reliable to explore the relationship between the overall rating and the component ratings, determine national rating distributions, assess consumer satisfaction information, and describe the age of the data. We held interviews with 30 nursing home stakeholders—eight national stakeholders and 22 state stakeholders from four states (Rhode Island, Georgia, Kansas, and California) that we selected based on factors such as variation in geographic region and size (number of nursing homes). These stakeholders represent a range of provider groups, consumer groups, government agencies, and technical experts. We selected organizations in each state and nationally that are relevant to nursing home consumers and providers. Technical experts were identified by their prominence in the nursing home quality research field. In addition, some stakeholders we interviewed identified other groups that would be appropriate to interview. Our interviews included a set of questions regarding consumer use of Nursing Home Compare. Responses to these questions cannot be generalized beyond the stakeholders we interviewed. We found that stakeholders generally could not quantify the number of consumers who use Nursing Home Compare, but most speculated that consumers use the site “a little” to “somewhat,” and a few stakeholders said that consumers use the website “a lot.” Some stakeholders thought the number of people using Nursing Home Compare was growing, and one stakeholder thought this was because people are generally trying to make more educated decisions about nursing home care. Another stakeholder thought this increase could also be a result of people using the Internet to look things up more frequently, nursing homes included. Some stakeholders noted that use of Nursing Home Compare probably differs depending on whether the patient is searching for care in an urban or a rural setting. Specifically, they stated that they think Nursing Home Compare is used more frequently in urban areas, where more nursing home options are available, compared to rural areas, where there may be only one home in a town. See table 3 below for a summary of stakeholder responses.
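A minimal sketch of the complaint-versus-rating correlation described in item 4a follows. The data values, column names, and choice of Spearman's rank correlation are illustrative assumptions for this sketch and do not represent the data or code used in our analysis.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical input: one row per nursing home, with its state, overall star
# rating (1-5), and total complaints registered during the six-month period.
homes = pd.DataFrame({
    "state":      ["GA", "GA", "GA", "KS", "KS", "KS"],
    "overall":    [5, 3, 1, 4, 2, 1],
    "complaints": [1, 0, 7, 1, 5, 3],
})

# Correlation between complaint counts and overall ratings, nationally...
rho, p = spearmanr(homes["overall"], homes["complaints"])
print(f"national: rho={rho:.2f}, p={p:.3f}")

# ...and separately for each state.
for state, group in homes.groupby("state"):
    rho, p = spearmanr(group["overall"], group["complaints"])
    print(f"{state}: rho={rho:.2f}, p={p:.3f}")

A weak negative correlation in output like this would indicate, as in our findings, that homes with more complaints do not consistently have lower overall ratings.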
Some stakeholders stated that they believe the extent to which consumers use Nursing Home Compare may depend on the amount of time that the consumer has to research and pick a nursing home. For example, according to a few stakeholders, if someone’s family member is getting discharged from the hospital and needs to be placed in a nursing home immediately, the consumer is less likely to use Nursing Home Compare. In contrast, one stakeholder noted that if consumers are planning for the future and researching nursing homes before care is needed, then they are more likely to use Nursing Home Compare. We found that stakeholders were split in their responses about when consumers would typically use Nursing Home Compare: whether as an initial step in beginning their nursing home search or as a way to confirm recommendations obtained from others. Most stakeholders stated that the consumer using the website is most likely a family member—usually an adult child or grandchild—and rarely the individual in need of a nursing home placement. When asked how valuable the information provided on Nursing Home Compare is to consumers who are researching and choosing a nursing home, most stakeholders stated that it was “somewhat valuable,” and some said that they thought it was “very valuable.” One stakeholder said that the information was “of little value.” See table 4 below for a summary of stakeholder responses. Some stakeholders stated that the information on Nursing Home Compare is a good place to start and may help consumers narrow down their search, but ultimately it is not likely to be the only source of information. Many stakeholders agreed that in addition to conducting online research, consumers should also always try to visit nursing homes in person before making a decision. A few stakeholders stated that observing a nursing home and its current residents firsthand on any given day provides the most valuable information when making a decision. In addition, stakeholders noted that consumers also obtain information about nursing homes through other sources—primarily through word of mouth from friends, family, and neighbors, and from information provided by primary care physicians, hospital discharge planners, and local ombudsmen. Many stakeholders noted, though, that in most cases the location of a nursing home is the main determinant of where a family member is placed. Stakeholders mentioned that consumers also use third-party, private websites, and in some states, such as California and Kansas, consumers may rely on websites with state-specific nursing home information. Some stakeholders thought that consumers used these other sources of information more often than Nursing Home Compare. A couple of stakeholders thought consumers preferred these third-party, private websites because some of them provide a more personalized experience and offer the opportunity to speak with someone on the phone, whereas CMS’s Nursing Home Compare provides neither of those options. A couple of other stakeholders thought that consumers may be more likely to use and trust Nursing Home Compare compared to these other private websites simply because it is a government website. A couple of stakeholders expressed concern about consumers using third-party, private websites because nursing homes may pay to be included, and so the website may not provide objective information on nursing home options for consumers.
Additionally, these private websites often appear in search engine results before Nursing Home Compare, so consumers may use them before seeing and using CMS’s Nursing Home Compare website. Finally, stakeholders provided mixed responses regarding whether they suggested consumers use Nursing Home Compare when helping them search for nursing homes. For example, several stakeholders told us that they routinely referred consumers to the website, while another said that she would only direct consumers to Nursing Home Compare if they were not familiar with the area, or if they did not have any time to spend on the nursing home search process. This stakeholder said she would ultimately recommend that the consumer come in and talk to her, and in that case would not use Nursing Home Compare at all. In addition to the contact named above, Linda Kohn, Director, Karin Wallestad, Assistant Director, Kathryn Richter, Analyst-in-Charge, Amy Andresen, Julianne Flowers, Shannon Smith, and Brienne Tierney made key contributions to this report. Also contributing were Jacques Arsenault, Wesley Dunn, Krister Friday, Rich Lipinski, Dae Park, Vikki Porter, and Steven Putansu.
Approximately 15,600 nursing homes participating in the Medicare and Medicaid programs provide care to 1.4 million residents each year. To help consumers make informed choices about nursing homes, CMS developed the Nursing Home Compare website, and on the site made available the Five-Star System, which rates homes on quality components. GAO was asked to assess the website and rating system as tools for consumers. GAO examined (1) the information CMS collects about the use of Nursing Home Compare, including its usefulness to consumers, and potential areas, if any, to improve the website, and (2) the extent to which the Five-Star System enables consumers to understand nursing home quality and make distinctions between homes. GAO reviewed CMS documents and interviewed CMS officials, national stakeholders, and a non-generalizable sample of state-level stakeholders from four states, selected based on factors such as size. GAO also analyzed Five-Star System and consumer complaint data, and analyzed resident satisfaction data from two of the four selected states. GAO found that the Centers for Medicare & Medicaid Services (CMS) collects information on the use of the Nursing Home Compare website, which was developed with the goal of assisting consumers in finding and comparing nursing home quality information. CMS uses three standard mechanisms for collecting website information—website analytics, website user surveys, and website usability tests. These mechanisms have helped identify potential improvements to the website, such as adding information explaining how to use the website. However, GAO found that CMS does not have a systematic process for prioritizing and implementing these potential improvements. Rather, CMS officials described a fragmented approach to reviewing and implementing recommended website changes. Federal internal control standards require management to evaluate appropriate actions for improvement. Without having an established process to evaluate and prioritize implementation of improvements, CMS cannot ensure that it is fully meeting its goals for the website. GAO also found that several factors inhibit the ability of CMS’s Five-Star Quality Rating System (Five-Star System) to help consumers understand nursing home quality and choose between high- and low-performing homes, which is CMS’s primary goal for the system. For example, the ratings were not designed to compare nursing homes nationally, limiting the ability of the rating system to help consumers who live near state borders or have multistate options. In addition, the Five-Star System does not include consumer satisfaction survey information, leaving consumers to make nursing home decisions without this important information. As a result, CMS cannot ensure that the Five-Star System fully meets its primary goal. GAO is making four recommendations, including that CMS establish a process to evaluate and prioritize website improvements, add information to the Five-Star System that allows homes to be compared nationally, and evaluate the feasibility of adding consumer satisfaction data. HHS agreed with three of GAO’s recommendations, but did not agree to add national comparison information. GAO maintains this is important information, as discussed in the report.
Passenger rail systems provided 10.7 billion passenger trips in the United States in 2008. The nation’s passenger rail systems include all services designed to transport customers on local and regional routes, such as heavy rail, commuter rail, and light rail services. Heavy rail systems–– subway systems like New York City’s transit system and Washington, D.C.’s Metro––typically operate on fixed rail lines within a metropolitan area and have the capacity for a heavy volume of traffic. Commuter rail systems typically operate on railroad tracks and provide regional service (e.g., between a central city and adjacent suburbs). Light rail systems are typically characterized by lightweight passenger rail cars that operate on track that is not separated from vehicular traffic for much of the way. All types of passenger rail systems in the United States are typically owned and operated by public sector entities, such as state and regional transportation authorities. Amtrak, which provided more than 27 million passenger trips in fiscal year 2009, operates the nation’s primary intercity passenger rail and serves more than 500 stations in 46 states and the District of Columbia. Amtrak operates a more than 22,000 mile network, primarily over leased freight railroad tracks. In addition to leased tracks, Amtrak owns about 650 miles of track, primarily on the “Northeast Corridor” between Boston and Washington D.C., which carries about two-thirds of Amtrak’s total ridership. Stations are owned by Amtrak, freight carriers, municipalities, and private entities. Amtrak also operates commuter rail services in certain jurisdictions on behalf of state and regional transportation authorities. Figure 1 identifies the geographic location of passenger rail systems and Amtrak within the United States as of January 1, 2010. Passenger rail operators that we spoke to and that attended our expert panel indicated that rail stations in the United States generally fall into one of three categories: Heavy rail station. These stations are generally heavily traveled—serving thousands of passengers during rush hours—and are located in major metropolitan areas. They are usually space constrained and located either underground or on an elevated platform and serviced by heavy rail. Entry to the stations is usually controlled by turnstiles and other chokepoints. Many of the subway stations in New York City and elevated stations in Chicago are examples of these types of stations. See figure 2 for an example of a typical heavy rail station. Large intermodal station. These stations are also heavily traveled and service multiple types of rail including heavy rail, commuter rail, and intercity passenger rail (such as Amtrak). These stations are usually not as space constrained and access is usually restricted either by turnstiles or naturally occurring chokepoints, such as escalators or doorways leading to rail platforms. Examples of these types of stations include Union Station in Washington, D.C. See figure 3 for an example of a typical large intermodal station. Commuter or light rail station. These stations are open and access is generally not constrained by turnstiles and other chokepoints. These stations are usually served by commuter rail systems in suburban or rural areas outside of a metropolitan area or in the case of light rail may be located physically on the city’s streets with no access barriers between the city and the station stop. 
The stations are easily accessible, not usually space constrained, and are often located outdoors. Examples of this type of station include Virginia Railway Express commuter stations in suburban Virginia and the Maryland Area Regional Commuter (MARC) stations in Maryland. See figure 4 for an example of a commuter or light rail station. To date, U.S. passenger rail systems have not been attacked by terrorists. However, according to DHS, terrorists’ effective use of IEDs in rail attacks elsewhere in the world suggests that IEDs pose the greatest threat to U.S. rail systems. Rail systems in the United States have also received heightened attention as several alleged terrorist plots have been uncovered, including multiple plots against systems in the New York City area. Worldwide, passenger rail systems have been the frequent target of terrorist attacks. According to the Worldwide Incidents Tracking System maintained by the National Counterterrorism Center, from January 2004 through July 2008 there were 530 terrorist attacks worldwide against passenger rail targets, resulting in more than 2,000 deaths and more than 9,000 injuries. Terrorist attacks include a 2007 attack on a passenger train in India (68 fatalities and more than 13 injuries); a 2005 attack on London’s underground rail and bus systems (52 fatalities and more than 700 injuries); and a 2004 attack on commuter rail trains in Madrid, Spain (191 fatalities and more than 1,800 injuries). More recently, in January 2008, Spanish authorities arrested 14 suspected terrorists who were allegedly connected to a plot to conduct terrorist attacks in Spain, Portugal, Germany, and the United Kingdom, including an attack on the Barcelona metro. The most common means of attack against passenger rail targets has been through the use of IEDs, including attacks delivered by suicide bombers. According to passenger rail operators, the openness of passenger rail systems can leave them vulnerable to terrorist attack. Further, other characteristics of passenger rail systems—high ridership, expensive infrastructure, economic importance, and location in large metropolitan areas or tourist destinations—make them attractive targets for terrorists because of the potential for mass casualties, economic damage, and disruption. Moreover, these characteristics make passenger rail systems difficult to secure. In addition, the multiple access points along extended routes make the costs of securing each location prohibitive. Balancing the potential economic impacts of security enhancements with the benefits of such measures is a difficult challenge. Securing the nation’s passenger rail systems is a shared responsibility requiring coordinated action on the part of federal, state, and local governments; the private sector; and passengers who ride these systems. Since the September 11, 2001, terrorist attacks, the role of the federal government in securing the nation’s transportation systems has evolved. In response to the attacks, Congress passed the Aviation and Transportation Security Act (ATSA), which created TSA within DOT and gave the agency broad responsibility for overseeing the security of all modes of transportation, including passenger rail. Congress also passed the Homeland Security Act of 2002, which established DHS, transferred TSA from DOT to DHS, and assigned DHS responsibility for protecting the nation from terrorism, including securing the nation’s transportation systems.
TSA is supported in its efforts to secure passenger rail by other DHS entities such as the National Protection and Programs Directorate (NPPD) and the Federal Emergency Management Agency’s (FEMA) Grant Programs Directorate and Planning and Assistance Branch. NPPD is responsible for coordinating efforts to protect the nation’s most critical assets across all 18 industry sectors, including transportation. FEMA’s Grant Programs Directorate is responsible for managing DHS grants for mass transit. FEMA’s Planning and Assistance Branch is responsible for assisting transit agencies with conducting risk assessments. While TSA is the lead federal agency for overseeing the security of all transportation modes, DOT continues to play a supporting role in securing passenger rail systems. In a 2004 Memorandum of Understanding and a 2005 annex to the Memorandum, TSA and FTA agreed that the two agencies would coordinate their programs and services, with FTA providing technical assistance and assisting DHS with implementation of its security policies, including collaborating in developing regulations affecting transportation security. In addition to FTA, the Federal Railroad Administration (FRA) also has regulatory authority over commuter rail operators and Amtrak and employs over 400 inspectors who periodically monitor the implementation of safety and security plans at these systems. FRA regulations require railroads that operate intercity or commuter passenger train service, or that host the operation of that service, to adopt and comply with a written emergency preparedness plan approved by FRA. In August 2007, the Implementing Recommendations of the 9/11 Commission Act was signed into law; it includes provisions that require TSA to take certain actions to secure passenger rail systems. Among other items, these provisions include mandates for developing and issuing reports on TSA’s strategy for securing public transportation, conducting and updating security assessments of mass transit systems, and establishing a program for conducting security exercises for rail operators. The 9/11 Commission Act also requires TSA to increase the number of explosives detection canine teams and requires DHS to carry out a research and development program to secure passenger rail systems. State and local governments, passenger rail operators, and private industry are also stakeholders in the nation’s passenger rail security efforts. State and local governments might own or operate portions of passenger rail systems. Consequently, the responsibility for responding to emergencies involving systems that run through their jurisdictions often falls to state and local governments. Although all levels of government are involved in passenger rail security, the primary responsibility for securing the systems rests with the passenger rail operators. These operators, which can be public or private entities, are responsible for administering and managing system activities and services, including security. Operators can directly provide the security services or contract for all or part of the total service. For example, the Washington Metropolitan Area Transit Authority operates its own police force. Federal stakeholders have taken actions to help secure passenger rail. For example, in November 2008, TSA published a final rule that requires passenger rail systems to appoint a security coordinator and report potential threats and significant security concerns to TSA.
In addition, TSA developed the Transportation Systems Sector-Specific Plan (TS-SSP) in 2007 to document the process to be used in carrying out the national strategic priorities outlined in the National Infrastructure Protection Plan (NIPP) and the National Strategy for Transportation Security (NSTS). The TS-SSP contains supporting modal implementation plans for each transportation mode, including mass transit and passenger rail. The Mass Transit Modal Annex provides TSA's overall strategy and goals for securing passenger rail and mass transit, and identifies specific efforts TSA is taking to strengthen security in this area. DHS also provides funding to passenger rail operators for security, including purchasing and installing security technologies, through the Transit Security Grant Program (TSGP). We reported in June 2009 that from fiscal years 2006 through 2008, DHS provided about $755 million to mass transit and passenger rail operators through the TSGP to protect these systems and the public from terrorist attacks. Passenger rail operators with whom we spoke and that attended our expert panel said that they used these funds to acquire security assets including explosives detection canines, handheld explosives detectors, closed circuit television (CCTV) systems, and other security measures.

Passenger rail operators have also taken actions to secure their systems. In September 2005, we reported that all 32 U.S. rail operators that we interviewed or visited had taken actions to improve the security and safety of their rail systems by, among other things, conducting customer awareness campaigns; increasing the number and visibility of security personnel; increasing the use of canine teams, employee training, passenger and baggage screening practices, and CCTV and video analytics; and strengthening rail system design and configuration. Passenger rail operators stated that security-related spending by rail operators was based in part on budgetary considerations, as well as practices used by other rail operators that were identified through direct contact or during industry association meetings. According to the American Public Transportation Association (APTA), in 2005, 54 percent of passenger rail operators faced increasing deficits, and no operator covered expenses with fare revenue; thus, balancing operational and capital improvements with security-related investments has been an ongoing challenge for these operators. Figure 5 provides a composite of selected security practices used in the passenger rail environment.

Countering the explosives threat to passenger rail is a difficult challenge, as there are many types of explosives and different forms of bombs. The many different types of explosives are loosely categorized as military explosives, commercial explosives, and a third category, homemade explosives (HME), so named because they can be constructed with unsophisticated techniques from everyday materials. The military explosives include, among others, the high explosives PETN and RDX, and the plastic explosives C-4 and Semtex. The military uses these materials for a variety of purposes, such as the explosive component of land mines, shells, or warheads. They also have commercial uses such as for demolition, oil well perforation, and as the explosive filler of detonation cords. Military explosives can only be purchased domestically by legitimate buyers through explosives distributors, and typically terrorists have to resort to stealing or smuggling to acquire them.
RDX was used in the Mumbai passenger rail bombings of July 2006. PETN was used by Richard Reid, the “shoe bomber,” in his 2001 attempt to blow up an aircraft over the Atlantic Ocean, and was also a component involved in the attempted bombing incident on board Northwest Airlines Flight 253 over Detroit on Christmas Day 2009. Commercial explosives, with the exception of black and smokeless powders, also can only be purchased domestically by legitimate buyers through explosives distributors. These are often used in construction or mining activities and include, among others, trinitrotoluene (TNT), ammonium nitrate and aluminum powder, ammonium nitrate and fuel oil (ANFO), black powder, dynamite, nitroglycerin, smokeless powder, and urea nitrate. Dynamite was likely used in the 2004 Madrid train station bombings, as well as the Sandy Springs, Georgia, abortion clinic bombing in January 1997. ANFO was the explosive used in the Oklahoma City, Oklahoma, bombing in 1995. The common commercial and military explosives contain various forms of nitrogen. The presence of nitrogen is often exploited by detection technologies, some of which look specifically for nitrogen (nitro or nitrate groups) in determining if a threat object is an explosive.

HMEs, on the other hand, can be created using household equipment and ingredients readily available at common stores and do not necessarily contain the familiar components of conventional explosives. On February 22, 2010, Najibullah Zazi pleaded guilty to, among other things, planning to use TATP to attack the New York City subway system. Also, HMEs using TATP and concentrated hydrogen peroxide, for example, were used in the July 2005 London railway bombing. TATP can be synthesized from hydrogen peroxide, a strong acid such as sulfuric acid, and acetone, a chemical available in hardware stores and found in nail polish remover, and HMTD can be synthesized from hydrogen peroxide, a weak acid such as citric acid, and hexamine solid fuel tablets such as those used to fuel some types of camp stoves and that can be purchased in many outdoor recreational stores. ANFO is sometimes misrepresented as a homemade explosive since both of its constituent parts—ammonium nitrate, a fertilizer, and fuel oil—are commonly available.

When used, for example, in terrorist bombings, explosives are only one component of an IED. Explosive systems are typically composed of a control system, a detonator, a booster, and a main charge. The control system is usually mechanical or electrical in nature. The detonator usually contains a small quantity of a primary, or extremely sensitive, explosive. The booster and main charges are usually secondary explosives, which will not detonate without a strong shock, for example from a detonator. IEDs will also have some type of packaging or, in the case of suicide bombers, some type of harness or belt to attach the IED to the body. Often, an IED will also contain packs of metal—such as nails, bolts, or screws—or nonmetallic material intended to act as shrapnel or fragmentation, increasing the IED's lethality. The various components of an IED—and not just the explosive itself—can also be the object of detection. The initiation hardware, which may be composed of wires, switches, and batteries, sets off the primary charge in the detonator which, in turn, provides the shock necessary to detonate the main charge. The primary charge and the main charge are often different types and categories of explosives.
For example, in the attempted shoe bombing incident in 2001, the detonator was a common fuse and paper-wrapped TATP, while PETN was the main charge. While in the past the initiation hardware of many IEDs contained power supplies, switches, and detonators, certain of the newer HMEs do not require an electrical detonator but can be initiated by an open flame.

Several different types of explosives detection technologies could be applied to help secure passenger rail, although the operational constraints of rail systems would be important considerations. For example, handheld, desktop, and kit explosives detection systems are portable and already in use in the passenger rail environment. Carry-on item explosives detection technologies are mature and can be effective in detecting some explosive devices. Explosive trace portals generally use the same underlying technology as handheld and desktop systems, and have been deployed in aviation with limited success. Advanced Imaging Technology (AIT) portals are becoming available but, as with trace portals, will likely have only limited applicability in passenger rail. Standoff detection technologies promise a detection capability without impeding the flow of passengers, but have several limitations. Canines are currently used in passenger rail systems, generally accepted by the public, and effective at detecting many types of explosives. Limitations in these technologies restrict their more widespread or more effective use in passenger rail and include limited screening throughput and mobility, potential issues with environmental conditions, and the openness and physical space restrictions of many rail stations.

In the passenger rail environment, detection of explosives involves the screening of people and their carry-on baggage. The different types of explosives detection technology available to address these screening needs can be divided into two basic categories: those based on imaging methods, sometimes called bulk detection, and those based on trace detection methods. The goal in bulk detection is to identify any suspicious indication—an anomaly—in a bag or on a person that might potentially be a bomb. These systems, while they may be used to detect explosive material, are also often used to detect other parts of a bomb. Although some automated detection assistance is usually included, imaging-based detection systems currently depend heavily on trained operators to identify the anomalies indicative of a bomb. Trace detection technologies, on the other hand, involve taking a physical sample from a likely source and then analyzing it with any one of several different techniques for the presence of trace particles of explosive material. Importantly, a positive detection does not necessarily indicate the presence of a bomb because the trace particles may just be contamination from someone having handled or having been near explosives material. Explosives trace detection systems can often identify the individual type of explosives trace particles present.

Bulk and trace detection technology generally serve different functions and can sometimes be paired to provide a more complete screening of a person and their belongings. Typically that screening occurs in two stages. First, an initial screening is done to quickly separate suspicious persons or carry-on baggage from the rest of the passenger flow.
In almost all cases, any anomalies detected in initial screening will trigger the need for a person or baggage to undergo a secondary inspection, via different methods and typically apart from the main screening flow, to confirm or dismiss the anomaly as a threat. Technology need not be used in either inspection stage. For example, behavioral assessment is sometimes used to provide an initial screening. In addition, secondary inspection can be a physical pat-down of a person or a hand inspection of carry-on baggage, although explosives detection technology can also be used. Screening can be done on 100 percent of passengers or on a subset of passengers chosen at random or by some selection method.

Different types of bulk and trace explosives detection technology have been developed over the years to handle both the screening of people and the screening of carry-on baggage. Generally, equipment falls into certain typical configurations—handheld, desktop, kit-based systems, carry-on baggage inspection systems, explosive trace portals, AIT portals, standoff detection systems, and explosives detection canines. Certain equipment has been designed for the screening of people, some for the screening of carry-on baggage, and some equipment can be used for both. (See figure 6.) To be effective, equipment in each of these configurations is generally evaluated across several different technical characteristics.

The first important technical characteristic of an explosives detection system is how well it detects a threat. Several different parameters are considered to fully express a system's ability to detect a threat. They are used to express how often the system gets the detection right, and how often—and in which ways—it gets the detection wrong. The system gets the detection right when it alarms in the presence of a threat, and the percentage of times it does so under a given set of conditions is called the probability of detection. However, other important parameters measure the percentage of times the system gets the detection wrong. This can occur in two ways. First, the system can alarm even though a threat is not present. This is called a false positive, and the percentage of times it occurs in a given number of trials is called the false positive rate. It is also called the false alarm rate or probability of false alarm. Second, the system can fail to alarm even though a threat is present. This is called a false negative, and the percentage of times it occurs in a given number of trials is called the false negative rate.

A second key technical characteristic for explosives detection systems is screening throughput, which is a measure of how fast a person or item can be processed through the system before the system is ready to accept another person or item. Screening throughput is an important characteristic to know because it directly impacts passenger delay, an important consideration when using technology in passenger rail. The higher the throughput, the less delay is imposed on passenger flow. Other important technical characteristics to consider when assessing the applicability of explosives detection systems for use in passenger rail are the system's size and weight, which will impact its mobility; the physical space needed to operate the system; and the system's susceptibility to harsh environmental conditions. Understanding the system's cost is also important.
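To illustrate how these parameters interact, the short Python sketch below computes a probability of detection, false alarm rate, and false negative rate from hypothetical test-trial counts, and then uses an assumed station volume and per-lane throughput to estimate the number of screening lanes and the hourly secondary-inspection workload a checkpoint would generate. All input values are illustrative assumptions for this example; the per-lane rate and station volume echo figures discussed later in this report but are not measurements from any particular test.

import math

# Illustrative checkpoint arithmetic; all input values are hypothetical.
threat_trials = 200      # test trials in which a threat object was present
threat_alarms = 180      # of those, trials in which the system alarmed
benign_trials = 1000     # test trials in which no threat was present
benign_alarms = 30       # of those, trials in which the system still alarmed

probability_of_detection = threat_alarms / threat_trials   # 0.90
false_negative_rate = 1 - probability_of_detection         # 0.10
false_alarm_rate = benign_alarms / benign_trials           # 0.03

station_volume = 4000    # passengers per hour arriving at the checkpoint
lane_throughput = 200    # passengers per hour one screening lane can process
lanes_needed = math.ceil(station_volume / lane_throughput)  # 20 lanes

# With 100 percent screening, false alarms alone would send roughly this many
# passengers per hour to secondary inspection (true threats assumed negligible).
secondary_inspections_per_hour = station_volume * false_alarm_rate  # 120

print(f"Probability of detection: {probability_of_detection:.2f}")
print(f"False alarm rate: {false_alarm_rate:.2f}")
print(f"False negative rate: {false_negative_rate:.2f}")
print(f"Screening lanes needed: {lanes_needed}")
print(f"Secondary inspections per hour from false alarms: {secondary_inspections_per_hour:.0f}")

Even in this simple illustration, a false alarm rate of only 3 percent produces on the order of 120 secondary inspections per hour at a 4,000-passenger-per-hour station, which is one reason the false alarm rate can matter as much as the probability of detection when technologies are considered for rail.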
Handheld, desktop, and kit explosives detection systems are portable systems that are designed to detect traces of explosive particles. They have been shown to detect many explosive substances and are already used in passenger rail environments today, generally in support of secondary screening or in a confirmatory role when the presence of explosives or their trace particles is suspected. In a typical usage with handheld and desktop systems, a sample of trace particles is collected by wiping a surface with a swab or other collection device designed for use with the system. The sample is transferred into the system and typically heated to vaporize the trace particles, which are then drawn into the detector where they are analyzed for the presence of substances indicative of explosives. The results of sample analysis are typically displayed on a readout screen.

Handheld and desktop systems encompass a variety of detection techniques to analyze the sample and determine if it contains particles of explosive compounds. The various underlying techniques include ion mobility spectrometry (IMS), amplifying fluorescent polymer (AFP), chemiluminescence, and colorimetric methods. Many handheld and desktop systems are based on IMS technology, a mature and well-understood method of chemical analysis. This technique consists of ionizing the sample vapors and then measuring the mobility of the ions as they drift in an electric field. Each sample ion possesses a unique mobility—based on its mass, size, and shape—which allows for its identification. The AFP technique utilizes compounds that fluoresce when exposed to ultraviolet light. However, the fluorescence intensity decreases in the presence of vapors of certain nitrogen-containing explosives, such as TNT. Detection methods based on this principle look for a decrease in intensity that is indicative of specific explosives. AFP has been shown to have a high level of sensitivity to TNT. The chemiluminescence principle is based on the detection of light emissions coming from nitro groups that are found in many conventional military and commercial explosives such as TNT, RDX, PETN, black powder, and smokeless powder. However, chemiluminescence by itself cannot identify any specific explosives because these nitro compounds are present not only in a number of commercial and military explosives, but also in many nonexplosive substances such as fertilizers and some perfumes. Therefore, this technique is often used in conjunction with other techniques, such as gas chromatography, to positively identify specific explosives.

Kit-based explosives detection systems generally use colorimetric techniques. In this method, the detection is based on the fact that a specific compound, when treated by an appropriate color reagent, produces a color that is characteristic of this compound. The sample is taken by swiping the target object, typically with a piece of paper, and then the colorimetric reagents are applied by spraying or dropping them on the paper. The operator deposits chemical reagents in a series and observes color changes with each reagent added. This process of adding reagents is stopped when a visible color change is observed by the operator. The operator decides whether there are any trace explosives present by visually matching the color change observed to a standardized sheet of colors. Table 1 describes some of the trace explosives detection methods described above.
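As general background on the IMS technique summarized above, and not a result drawn from the testing discussed in this report, the behavior of a conventional IMS drift tube is commonly described by the following standard relationships, written here in LaTeX notation:

v_d = K E, \qquad t_d = \frac{L}{v_d} = \frac{L}{K E}, \qquad K_0 = K \left(\frac{273.15}{T}\right)\left(\frac{P}{760}\right)

where v_d is the ion drift velocity, E is the electric field strength, K is the ion mobility, L is the drift-tube length, t_d is the measured drift time, T is the drift gas temperature in kelvin, and P is the pressure in torr. Because K depends on an ion's mass, size, and shape, the measured drift time serves as the signature that IMS-based detectors compare against the signatures of known explosives, and the reduced mobility K_0 normalizes that measurement to standard temperature and pressure so results can be compared across instruments and operating conditions.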
In comparative studies over the last 8 years, the Naval Explosive Ordnance Disposal Technology Division showed that IMS-based handheld and desktop systems are capable of detecting many conventional military and commercial explosives that are nitrogen-based, such as TNT, PETN, and RDX. Non-IMS techniques, such as those based on amplifying fluorescent polymer and chemiluminescence, are able to additionally detect ANFO, smokeless powder, and urea nitrate. However, a report sponsored by DOD's Technical Support Working Group shows that most of these systems had difficulty in detecting certain other types of explosives. Preliminary results from an ongoing comparative study of kit-based detection systems sponsored by the Transportation Security Laboratory have shown that these systems can detect the presence of nitrogen when there is a sufficient quantity of explosive sample (in small-bulk or visible amounts) available for analysis. For example, kit-based systems were able to correctly identify the presence of nitrogen in a variety of different threat materials. Additionally, kit-based systems have been shown to be susceptible to false alarms when challenged with substances such as soaps and perfumes, among others.

The open and often dirty air environment of passenger rail presents certain operational issues for trace detection. However, durable versions of handheld and desktop detectors are starting to appear for use in the open and rugged field environment. This is meant to improve the instruments' reliability, availability, and performance in an environment that has varying degrees of temperature, pressure, and humidity. In 2008 and 2009, both the Technical Support Working Group and the Joint Improvised Explosive Device Defeat Organization sponsored evaluations of commercial "hardened mobile" trace detectors, during which these systems demonstrated the capability to detect certain types of explosives in an open environment over a range of external temperature, pressure, and humidity conditions. A survey by the Transportation Security Laboratory in 2009 showed a large number of manufacturers of handheld, desktop, and portable kit-based devices available on the commercial market. Although costs—including initial acquisition, routine maintenance, and consumables such as the swabs used for sampling—are a consideration in determining whether to make future deployments of handheld, desktop, and kit explosives detection systems, these technologies are already being used in the passenger rail environment and are expected to continue to play a role there.

Carry-on baggage explosive detection systems are based on x-ray imaging, a technology that has been in use for more than a century. Screening systems incorporating the technology have been used in commercial aviation for more than 30 years, in part because they serve a dual purpose: images are analyzed for guns and other weapons at the same time they are analyzed for the presence of materials that may be explosives. Because these images do not uniquely identify explosive materials, secondary screening is required to positively identify the materials as explosives. Single-energy x-ray systems are useful for detecting some bomb components. They are, however, not as useful for the detection of explosive material itself.
Advanced techniques add multiple views, dual x-ray energies, backscatter, and computed tomography (CT) features (see Table 2) to provide the screener with additional information to help identify IEDs. Systems with one or more of the advanced techniques (multiple views, dual energies, and backscatter), but not CT, are called advanced technology (AT) systems to distinguish them from CT systems. AT systems enable more accurate identification of explosives without the additional expense of CT. Further, the additional information can be used to automatically detect explosive materials.

Carry-on baggage explosive detection technology used in commercial aviation is a mature technology. The Transportation Security Laboratory has qualified several different models of carry-on baggage explosive detection systems manufactured by several vendors for use in commercial aviation. Many of these systems are in use every day at airports in the United States. Carry-on baggage explosive detection systems are effective in detecting IEDs that use conventional explosives when screeners interpret the images, as was demonstrated in a Transportation Security Laboratory air cargo screening experiment in which five different models of currently fielded AT baggage explosives detection systems were used to screen all eight categories of TSA-defined cargo. In addition, the DHS Science and Technology (S&T) Directorate provided another comparison of screener performance to automatic detection performance in a 2006 pilot program at the Exchange Place Station in the Port Authority Trans-Hudson (PATH) heavy rail system. Phase I of this pilot evaluated the effectiveness of off-the-shelf explosives detection capabilities that were adapted from current airport checkpoint screening technologies and procedures. The carry-on baggage explosive detection equipment was operated in the automated threat detection mode to minimize passenger delay. System effectiveness was tested by the use of a red team, an adversary team that attempted to circumvent the security measures. While the results were highly sensitive and not discussed in the pilot program report, the false alarm rate was found to be low.

Carry-on baggage explosive detection technologies have operational issues that limit their usefulness in passenger rail security. These systems are used in checkpoints, and their acceptability will depend upon the tolerance for passenger delay. At checkpoints, 100 percent screening is possible up to the throughput capacity of the screening equipment; beyond that rate, additional screening equipment and personnel or selective (less than 100 percent) screening is required. During S&T's screening in the PATH system passenger rail pilot, a maximum single-system throughput of 400 bags per hour was measured with carry-on baggage explosive detection systems operating in automatic explosive detection mode at threat levels appropriate to passenger rail, as described above. The 400 bags per hour single-system throughput had a corresponding passenger throughput of 2,336 passengers per hour. With this throughput, the pilot was able to perform 100 percent screening of large bags and computer bags (see below) during the peak rush hour using two carry-on baggage explosive detection systems. Another closely related challenge associated with checkpoint screening is passenger delay. The S&T pilot in the PATH system measured median passenger delays of 17 seconds when a passenger's bags did not set off automated explosive detection alarms and 47.5 seconds when they did.
These delays can be compared to the 13-second median time for an unscreened passenger to walk through the screening area. The longer delay, when bags set off alarms, was caused by the secondary screening required to confirm or deny the presence of explosives. Maximum passenger throughput was achieved when screening only bags large enough and heavy enough to contain sufficient explosives to damage passenger rail infrastructure. When 100 percent screening exceeded the capacity of the system, the pilot used queue-based selection to maximize throughput. In queue-based selection, a traffic director selects passengers for screening as long as there is room in the queue for the screening process. Using this procedure, the pilot was able to accommodate PATH's desire to keep queue lengths below five passengers.

Acquisition costs range from $25,000 to $50,000 for AT systems to more than $500,000 for CT systems. The primary operating cost is manpower. Operating manpower typically includes a traffic director (someone to select passengers for screening, direct passengers to the carry-on baggage explosive detection system, and provide instructions as required), a secondary screener, and a maintenance person. Structures would be needed to protect existing carry-on baggage explosive detection systems from the challenging passenger rail environments, which include outdoor stations that are exposed to dust and precipitation. This is because typical carry-on baggage explosive detection systems have hazardous parts that are not protected from foreign objects up to 1 inch in diameter and have no protection from water intrusion.

Explosive trace portals (ETP) are used in screening for access to buildings and, to a limited extent, in airport checkpoint screening. The operation of these systems generally involves a screener directing an individual to the ETP and the ETP sensing the individual's presence and, when ready, instructing the individual to enter. The portal then blows short puffs of air onto the individual being screened to help displace particles and attempts to collect these particles with a vacuum system. The particle sample is then preconcentrated and fed into the detector for analysis. The results are displayed to the operator as either positive or negative for the detection of explosives. Positive results can display the detected explosives and trigger an audible alarm. Currently tested and deployed ETPs use IMS analytical techniques for chemical analysis to detect traces of explosives, similar to those used for handheld and desktop detectors. These techniques are relatively mature, but the operation of IMS-based ETPs in an open air environment, such as that of passenger rail, is subject to interference from ambient agents, such as moisture and contaminants, that can impact a detector's performance by interfering with its internal analysis process, resulting in false readings.

Regardless of the detection technique used, sampling is a major issue for trace detection. Generally, factors such as the explosives' vapor pressure and packaging, as well as how much contamination is present on an individual from handling the explosive, affect the amount of material available for sampling. Particular to trace portals, factors such as the systems' puffer jets and timing, clothing, the location of explosive contamination on the body, and human variability impact the effectiveness of sampling.
For example, if the puffer jets produce too little pressure, they have little impact in improving the trace explosive signal, while too much pressure results in trace explosive particles becoming lost in a large volume of air that is difficult to sample effectively. In addition, clothing material and layering can reduce the available trace explosive signal. The location of the explosive trace on the body also impacts the amount of trace explosives that the system will collect.

In laboratory testing of ETPs in 2004, the Naval Explosive Ordnance Disposal Technology Division tested three ETP systems' basic ability to detect trace amounts of certain explosives within the required detection threshold when deposited on the systems' collection sites. While the systems consistently detected some of these explosives, they were unable to detect others. In addition, during laboratory testing on systems from three manufacturers performed by the Naval Explosive Ordnance Disposal Technology Division in 2004 and the Transportation Security Laboratory from 2004 through 2007, the systems did not meet current Naval Explosive Ordnance Disposal Technology Division or TSA requirements. In 10 laboratory and airport pilot tests of ETPs from three manufacturers from 2004 through 2005, the Naval Explosive Ordnance Disposal Technology Division and TSA also measured the systems' throughput. In laboratory testing, the average throughput without alarms ranged from 2.56 to 5 people per minute. During pilot testing in airports, the operational mean throughput, which included alarms, ranged from 0.3 to 1.4 people per minute, and the operational mean screening time ranged from 15.4 seconds to 22.2 seconds. Although they may have some applicability for checkpoint screening in lower volume rail environments that require passengers to queue up, the throughput and screening time of ETPs make them impractical to use for 100 percent screening in high volume rail stations.

An ETP system using a different analytical technique, mass spectrometry (MS), for chemical analysis has the potential of significantly improving the ability to distinguish explosives from environmental contaminants, although its use in a portal configuration has not been tested in the rail environment. DHS has, however, performed laboratory testing of two versions of an MS-based ETP. Other operational issues may limit ETPs' applicability in the rail environment. We found that during the pilot testing in airports, for example, the systems did not meet TSA's reliability requirements due to environmental conditions. This resulted in higher than expected maintenance costs and lower than expected operational readiness time. ETPs may have some applicability for checkpoint screening in lower volume rail environments that require passengers to queue up, such as Amtrak, but the low throughput and long screening time of ETPs make them impractical to use for 100 percent screening in high volume rail stations. In addition, the large size and weight of ETPs make them difficult to transport and deploy in stations with limited space and also impractical for use in any random way.

Advanced Imaging Technology (AIT) portals are used for screening people for building access and, to an increasing extent, airport access. The operation of these systems generally involves the individual undergoing screening entering the AIT portal and raising his or her hands above the head. The AIT portal then takes images of the individual, which are displayed to another officer who inspects the images.
The inspecting officer views the image to determine if there are threats present. If a threat is detected, the individual must go through further inspection to determine if he or she is carrying explosives. Currently deployed AIT portals in the aviation environment use either millimeter wave or backscatter x-ray techniques to generate an image of a person through his or her clothing. While both systems generate images of similar quality, millimeter wave has the advantage that it does not produce ionizing radiation. Although, according to one manufacturer, its backscatter x-ray system meets all applicable federal regulations and standards for public exposure to ionizing radiation, systems that do not use ionizing radiation will likely raise fewer concerns. An issue of particular concern to the public with AIT portals is privacy, due to the ability of the systems to image underneath clothing (see figure 7). In order to protect passengers' privacy, TSA policy for these systems specifies that the officer directing passengers into the system never sees the images. In addition, some systems offer privacy algorithms that can be configured to blur out the face and other areas of the body or present the image as a chalk outline. Efforts are currently underway to develop algorithms to automate the detection of threat objects, which has the potential to increase privacy if it eliminates the need for a human to inspect the images.

In testing done prior to October 2009, TSA tested AIT portals from two vendors—one using millimeter wave and the other backscatter x-ray—against detection, safety, throughput, and availability requirements for airport checkpoint screening. Both systems met these requirements. In addition, in 2006, TSA pilot tested an AIT portal in the rail environment to determine the usefulness and maturity of these systems. In 2007 and 2008, the Transportation Security Laboratory tested the performance of AIT systems in a laboratory environment for DHS S&T. TSA also began an operational evaluation of AIT systems in airports in 2007, which, due to privacy concerns, includes the use of privacy algorithms. Laboratory testing included a comparison of the performance of AIT systems against enhanced metal detectors and pat-downs; determining the detection effectiveness of the systems for different body concealment locations and threat types, including liquids, metallic and nonmetallic weapons, and explosives; and measuring the systems' throughput. The detailed results of this testing are classified and so will not be outlined in this technology assessment. However, generally, the testing showed that there are a number of factors that affect the performance of AIT systems, including the individual inspecting the images for potential threats, the use and settings of privacy algorithms, and other factors. For example, detection performance varied by screener. In addition, the use of privacy algorithms generally impacts the decision time for screeners and has other operational considerations. The throughput of one of the AIT systems was measured to be 40 people per hour, which was significantly lower than the S&T requirement of 60 people per hour. As with ETPs, AIT portals may have some applicability for checkpoint screening in lower volume rail environments, but their low throughput, long screening time, and other factors make them impractical to use for 100 percent screening in high volume rail stations.
Another operational issue that may limit their applicability in the rail environment is their large size and weight, which make them difficult to transport and deploy in stations with limited space.

Standoff explosives detection systems are primarily differentiated from other types of explosives detection devices by the significant physical separation of the detection equipment from the person or target being scanned. Several different technologies have been incorporated into standoff explosives detection systems, but those suitable for use today in a public setting such as passenger rail are passive or active imaging systems typically using either the millimeter wave or terahertz (THz) portion of the electromagnetic spectrum. Radiation in these portions of the spectrum is naturally emitted or reflected from everyday objects, including the human body, and has the added feature that clothing is often transparent to it. Therefore, such radiation can be used to safely screen people for hidden threat objects. Systems available on the market today claim to detect person-borne objects across a range of distances. In several laboratory and field studies since 2006 looking at passive standoff imaging systems, organizations including the Naval Explosive Ordnance Disposal Technology Division, the Transportation Security Laboratory, S&T, and TSA have demonstrated the technology's basic ability, under the right conditions, to detect hidden person-borne threat objects. Because the detection technique relies on a temperature differential between the warmer human body and the colder threat object next to it, and not on the metallic content of the object, it also has the potential to detect nonmetallic threats. This capability gives these standoff imaging systems a distinct advantage over walk-through metal detectors—the conventional person screening tool—which can only detect objects with sufficient metallic content.

DHS has also evaluated several standoff detection systems in operational rail environments. For example, as part of Phase II of the 2006 Rail Security Pilot looking at advanced imaging technologies, S&T found that such systems, in general, had some ability to detect threat objects indicative of suicide bombs on passengers and, overall, were developing into potentially useful technologies for passenger rail. Follow-on tests in 2007 and 2009 conducted by TSA at operational passenger rail or other mass transit locations provided further support for the technologies' potential in addressing the screening needs of these systems. In the July 2009 pilot, for instance, screening throughput for a passive millimeter wave system was tested by TSA during rush hour at the PATH Exchange Place subway station in New Jersey, a key entry point for commuters entering lower Manhattan. Two systems were used, with each positioned 8 to 10 meters from a group of passenger turnstiles that provided a chokepoint for commuters entering the station. At several periods during rush hour, the systems demonstrated the ability to scan at or near 100 percent of passengers—in one case, more than 900 people per hour—without disrupting the flow of passengers. Those pilots also demonstrated another attractive feature of these systems important for their use in passenger rail: they can be built to be relatively portable. For the PATH pilot, TSA broke down, moved, and reconfigured multiple standoff devices four times a day.
The ability of screening systems to be deployed and easily redeployed to another location encourages their use for random deployment, a recommended protective measure for mass transit systems. In addition, this allows rail operators a way to provide screening to a much wider percentage of their systems with fewer units than if they had to use fixed systems, which might prove cost prohibitive for the larger rail systems.

While promising, several factors limit the more widespread use of current standoff detection technologies, which can detect only objects carried on a person's body. They cannot provide a complete screening of a passenger and their belongings. They could, however, be used in tandem with other technologies or methods to handle accompanying articles. Another limiting factor of current standoff technologies is the inability to discriminate between a potential threat object and a real one. Because the current state of the technology is based on imaging alone, explosives material identification is generally not possible. Use of radiation in the weaker, nonionizing millimeter wave and THz bands is attractive because it presents no danger to humans, but it also means that there is not enough information in the energy received by the sensor to more positively identify the threat as explosives material, as is routinely done, for example, by the higher energy CT systems used to screen checked baggage in aviation. Therefore, secondary screening will often be needed to completely resolve an alarm. In a standoff configuration, this raises logistical and manpower issues. At a minimum, for example, since the system is operating at a distance and passengers are not queuing up, it is not obvious how a person showing up as a potential threat could be easily intercepted and directed out of the normal flow of passengers. In addition, although recent TSA testing in 2009 on an advanced standoff system showed good performance detecting hidden threat objects—including nonmetallic objects—on moving people in controlled situations, consistent detection under actual operating conditions in heavy passenger volume scenarios will be challenging. The TSA tests showed good probability of detection rates and low false alarm rates for indoor and outdoor screening. Unlike the use of similar technology in a portal configuration (such as AIT), where a passenger can be asked to pause, turn around, or, for example, lift their arms to provide the sensor a better view, in a standoff configuration passenger movement is uncontrolled. Although some systems allow tracking, the length of time a person can be maintained within the required line of sight is minimal in a fast-moving, high-density crowd.

Finally, at up to several hundred thousand dollars per unit, a deployment of standoff technology in passenger rail could be costly and manpower intensive. Based on its operational pilots over the last several years, TSA told us that a likely implementation for a standoff detection system at a rail site would consist of multiple detectors and a 3- to 4-person team, including one operator per system, an assistant, and probably two Behavioral Detection Officers to focus special attention on persons of interest. A good implementation would also have a canine team ready to inspect the passenger or accompanying articles if the system detected an anomaly.
Also, since some of the systems produce images susceptible to the same privacy concerns as the recent deployment of AIT in airports, a remote imaging station might also need to be configured and staffed.

Explosives detection canines (EDC) are currently used in passenger rail systems both for random screening of passengers and their belongings and as a deterrent to criminal and terrorist activity. EDCs are considered a mature technology and are being used by all of the passenger rail operators with whom we spoke or that attended our expert panel. These operators also viewed canines as the most effective method currently available for detecting explosives in the rail environment because of their detection capability as well as the deterrent effect that they provide. More specifically, operators noted EDCs' ability to rapidly move to various locations throughout a rail system, their minimal impact on passenger flow and rail operations, and their ability to detect the explosives they are trained to detect. Operators and experts on our panel also noted that canines are generally accepted by members of the public who use these systems. In addition to passenger rail operators, canines have been deployed by federal agencies such as the U.S. Secret Service; the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF); and U.S. Customs and Border Protection. While the use of canines is mature, both the government, through DHS S&T, and academia are conducting ongoing research on the limits of canine detection.

While the mechanism by which canines detect explosives through their sense of smell is not well understood, there are several certification programs to validate canines' ability to detect explosives, which include specifying standards for explosives detection. These standards vary based on which entity is certifying the canine. A guiding document on the training of canines is the Scientific Working Group on Dog and Orthogonal Detectors Guidelines, which specifies recommended best practices for canine explosives detection. These standards call for an EDC to detect explosives a certain percent of the time and to have a probability of false alarms less than a certain rate. Certifying entities, however, may have more stringent standards. For example, ATF requires that its canines detect all explosives that are presented to them and have limited false alarms in its tests. TSA requires that its certified canines find a specified percent of explosives in a variety of scenarios, such as onboard an aircraft, mass transit rail, and mass transit buses. Homeland Security Presidential Directive-19 tasks the Attorney General, in coordination with DHS and other agencies, with assessing the effectiveness of, and, as necessary, making recommendations for improving, federal government training and education initiatives related to explosive attack detection, including canine training and performance standards. According to ATF officials, TSA, in coordination with ATF, is developing standards for EDCs, which are nearly complete and are similar to the standards that ATF uses.

EDCs have a limited period of endurance during which they can maintain effective detection capabilities. According to ATF officials and other experts who attended our panel, canines can typically operate between 20 and 45 minutes before requiring a break, with a total of 3 to 4 hours of time spent detecting per day.
Additionally, members of our expert panel told us that aspects of the rail environment, such as dirt, cleaning chemicals, and metal fragments from trains, may reduce canines' optimum operating time in this environment. As a result, one rail operator told us that its EDCs are stored in the back of police cars throughout the day unless they are needed and thus are not available for use as a deterrent. TSA advocates using explosives detection canines on patrols as visible deterrents in an effort to reduce crime and prevent the introduction of explosives into the rail environment.

Canines have a history of being trained to detect items and in recent years have been trained to detect, among other things, explosives, fire accelerants used in arson investigations, and drugs. While training methods differ among canine training schools, these methods typically train canines by rewarding them for locating certain items. Rewards include toys, a food treat, or the canine's food itself. In turn, these canines are trained to alert their handlers if they detect an item of interest, usually by sitting down next to the item. EDCs used in rail are generally deployed to screen passenger baggage, either on a primary basis by inspecting baggage as passengers enter a system or on a secondary basis to screen an item of interest, such as an unattended package. Additionally, EDCs are to receive training on a regular basis to ensure that they remain capable of detecting explosives. Recurrent training requirements vary based on the training method used with the canine. For instance, one training regime we reviewed calls for 4 hours per week of recurrent training for EDCs, while other training regimes, such as those used by ATF, require daily training. The amount of recurrent training necessary for EDCs has not been determined, according to the experts we spoke with, but they agree that the training is necessary to ensure the canine accurately detects explosives. As such, passenger rail operators who employ EDCs are to incorporate the training regime specified by the training method used to produce the EDC to ensure the canine operates effectively. Additionally, TSA and ATF both require their trained EDCs to be recertified on an annual basis, whereby the canine and handler must demonstrate that they can detect explosives and meet required performance standards.

The quality of an EDC's search for explosives is dependent on the handler correctly interpreting behavioral changes in the canine. As the canine is capable of giving only a positive or negative response as to the presence of an explosive odor emanating from an item, the handler must interpret the canine's response and respond appropriately in keeping with a predetermined concept of operations, because the canine cannot indicate the type of explosive it has detected. Moreover, according to ATF officials, a canine is only capable of detecting the explosives it has been trained to detect, and there are tens of thousands of explosive compounds. To address this issue, ATF separates explosives into six categories with similar characteristics that the canines are trained and required to identify.

According to TSA, the total initial cost to acquire and train an EDC and handler is about $31,000. In addition, there are ongoing maintenance costs including food, veterinary services, and other maintenance expenses, as well as the ongoing expense of the handler's salary.
TSGP grant funding can often be used to offset the initial acquisition cost of a canine but typically cannot be used to pay for ongoing maintenance throughout the canine's duty life. According to ATF officials, an EDC typically has an operational life of about 7 years, having completed training around age 2 and entering retirement at age 9.

Vapor Wake Canines are an emerging use of EDCs that may be applicable to the passenger rail environment. Vapor Wake Canines differ from more traditional EDCs in that the canine does not directly sniff individual passengers and their belongings; instead, the canine may remain in a stationary location, sniffing multiple passengers as they pass by, thus allowing more passengers and their belongings to be screened. These canines are trained to alert if they detect any explosives in the air and to follow the explosive to its source. Vapor Wake Canines were piloted by DHS S&T in 2006 in the Metropolitan Atlanta Rapid Transit Authority with generally positive results. Specifically, these canines were able to detect explosives under the concept of operations developed by DHS S&T. DHS S&T officials told us that they will soon begin additional research on Vapor Wake Canines to determine their probability of detection and to better understand the factors behind their performance.

The ability of explosives detection technologies to help protect the passenger rail environment depends both on their detection performance and on how effectively the technologies can be deployed in that environment. Detection performance varies across the different technologies, with more established technologies such as handheld, desktop, and kit-based trace detection systems, x-ray imaging systems, and canines having demonstrated good performance against many conventional explosives threats, while newer technologies such as ETPs, AIT, and standoff detection systems are in various stages of maturity. However, all of the technologies face key challenges, and most will struggle to screen passengers in rail stations without undue delays. Important characteristics of the technologies, such as screening throughput, mobility, and durability, as well as physical space constraints in rail stations, may limit deployment options for explosives detection technologies in passenger rail.

Certain explosives detection technologies have demonstrated good detection performance against conventional explosives. Explosives detection canines, for example, are certified by several organizations as being able to detect a wide variety of conventional explosives for which they have been trained. In addition, some of the analytical trace detection methods are mature laboratory techniques that—within their individual design constraints—have been shown to be capable of consistent detection of many conventional explosives and their components when used in handheld, desktop, and kit-based systems. In many cases, this is because they have been designed to focus on specific characteristics of nitro-based conventional explosives. Similarly, the more mature bulk detection techniques—carry-on baggage x-ray systems, for example—have been widely used for many years and, when used by trained operators, have shown good detection performance. However, some of the newer detection technologies—ETPs, AIT, and standoff detection systems, for example—are in varying stages of maturity, and more extensive testing would be required to determine their likely performance if deployed in passenger rail.
For example, ETPs performed poorly in laboratory testing even though those devices incorporated mature analytical detection techniques. In this case, the variation in performance might be the result of how those techniques are integrated by specific manufacturers into a portal configuration. AIT is currently being deployed in airports nationwide, and laboratory testing has shown it has some ability to detect explosives. While standoff detection systems have demonstrated good performance detecting hidden threat objects on people in controlled testing, consistent detection under actual operating conditions in heavy passenger volume scenarios will be challenging.

With all the technologies, certain factors underlie their ability to achieve adequate performance, and often these depend on the human operator. For example, in a trace detection system the human operator plays a key part in preparing the sample and delivering it to the trace detection machine. In addition, trace detection is an indirect method of detection, relying on the presence of trace signatures that may, in fact, not exist or exist in insufficient quantities to be detected even though the threat object is present, or that may be present in the absence of a threat object. Similarly, image-based detection schemes are all dependent on successful image interpretation. Human operator image interpretation is a difficult task, and performance is largely a function of adequate and persistent training. To help address this issue, DHS has initiated efforts looking at enhancing automated image processing algorithms to provide for better detection and lower false alarm rates. As part of this, DHS is creating a database of raw image data from commercially available systems—for example, x-ray and millimeter wave image data—which can be made available to researchers to help them develop better automated detection algorithms to improve processing across a range of imaging technologies, including carry-on baggage x-ray technologies such as AT-based systems, AIT, and some of the standoff detection technologies. With the goal of increasing the probability of detection and reducing the number of false alarms these systems generate when operating in automated mode, such enhancements could help with the challenge of screening large volumes of people by increasing system throughput. While an outgrowth of research and development to support aviation security, this could benefit the use of imaging technologies in passenger rail settings as well.

Finally, adequate detection performance of explosives detection technologies can depend on other factors, such as maintenance, system calibration, and proper setup. For example, performance can be affected by the operator's preferences regarding the sensitivity of the equipment. With many of the technologies there are tradeoffs that can be made between the sensitivity of the device and the operator's tolerance for false alarms. In cases where a trace detector is highly sensitive to contaminants in the air, for instance, decreasing the sensitivity may reduce the number of false alarms but will also increase the possibility of missed detections.

One of the issues in implementing explosives detection technologies effectively in passenger rail is identifying the explosive materials and amounts that constitute the threat to that environment.
While requirements and standards for explosives threat amounts and detection levels, for example, have been defined for the aviation environment and for DOD's counter-IED mission, threat amounts have not been determined for rail for either the conventional explosives threat or the threat from HMEs. As a result, in general, detection performance has been measured against threat levels defined for other environments.

Because passenger volumes and timeliness expectations vary across the different rail systems, including heavy rail and commuter or light rail, different methods of selecting and screening passengers are possible. Although passenger volumes in the more heavily trafficked rail stations may preclude 100 percent screening of passengers in an overly intrusive way, lighter volume stations may allow for such intrusive screening if an adequate screening throughput speed can be maintained. Decisions regarding screening modes will vary by system, station, and the tolerance for passenger delay.

Two important system characteristics when considering the use of explosives detection technologies in passenger rail are screening throughput and system mobility. The higher the throughput, the less delay is imposed on passenger flow. The more portable a detection system is, the more it lends itself to use in random deployment, a known deterrent and cost-effective option for rail operators. Screening throughput and system mobility varied across the different explosives detection technologies we examined, but many had screening times that would be difficult to accommodate in situations with heavy passenger volume. In airport security checkpoints, for example, using similar equipment and working toward a goal of 10-minute or shorter wait times, the TSA staffing allocation model for screening operations requires individual screening lanes to be able to process 200 passengers per hour. However, during the 2006 S&T pilot testing in PATH, passenger flow rates on the order of 4,000 passengers per hour were measured during the afternoon rush at just the main entrance turnstiles at one station. Even under TSA's aviation wait time goal, this would require the purchase, staffing, and physical space for 20 screening lanes. These technologies, however, might be considered for use in lower volume rail stations, for example, or in other areas of passenger rail where passenger queues could be supported without unduly impacting passenger flow. However, they are generally large, bulky, and not easily moved from place to place, and therefore impractical for use in any highly mobile way.

In general, most passenger rail operators that have deployed explosives detection technologies have done so on a less intrusive basis, using, for example, mobile explosives detection canine teams as a deterrent in stations or, alternatively, setting up temporary, portable stations for the screening of selected passengers who are pulled out of the normal passenger flow randomly, via some selection method, or as a result of behavioral cues. In this mode, for example, they have used handheld detectors for primary screening. Standoff detection systems, which minimize the impact of screening on passenger flow, are the only explosives detection technology that currently could be considered for helping to address the 100 percent screening scenario at heavy volume stations, generally, for passenger rail. As noted, some of these systems demonstrated the ability to scan at or near 100 percent of passengers even in heavy rail stations for periods of time.
In addition, many standoff systems are portable and are designed so that system installations could be shifted from site to site. However, while attractive from a throughput point of view, standoff systems are still maturing in terms of their detection performance and general concept of operations. In addition to limitations imposed by the technologies, rail stations themselves have constraints that will influence the applicability of certain technologies for certain purposes. These include environmental issues, such as the relatively high levels of contaminants found in passenger rail environments, like steel dust and soot, that can disrupt the operation of sensitive equipment and raise the potential for false alarms, as well as the lack of controlled temperature and humidity levels in many stations and the potential for extremes of those levels in outdoor stations. Some DOD research and development efforts are looking at hardened versions of some explosives detection technologies. The general openness of many rail stations is another important consideration in deciding on the use of explosives detection technologies in rail. In commuter or light rail systems, for example, many stations may be unmanned, outdoor platforms without barriers between public areas and the train and with few natural locations in which to place technologies for screening passengers. With limited existing chokepoints, implementation of certain technologies may require station infrastructure modifications to aid in funneling passengers for screening. Finally, physical space constraints in many stations are an important consideration. For example, many rail stations have limited space in which to install large equipment, accommodate any passenger queues that might build up, or add multiple screening lanes as a way of dealing with long lines. Further, while standoff detection technologies are better able to deal with heavy passenger volumes and do not necessarily have a large physical footprint, they do require several to tens of meters of open, line-of-sight spacing between sensor and passengers for effective operation. In addition to how well technologies work in detecting explosives and their applicability in the passenger rail environment, there are several overarching operational and policy considerations impacting the role that these technologies can play in securing the passenger rail environment, such as who will pay for them and how to respond when they indicate the presence of explosives. Even if a technology works in the passenger rail environment, our work, in consultation with rail experts, identified several critical operational and policy factors that arise when these technologies are being considered for deployment.
Specifically, 1) the roles and responsibilities of multiple federal and local stakeholders could impact how explosives detection technologies are funded and implemented in passenger rail; 2) implementation of technology or any security investment could be undertaken in accordance with risk management principles, to ensure limited security funding is allocated to those areas at greatest risk; 3) explosives detection technologies are one component of a layered approach to security, in which multiple security measures combine to form the overall security environment; 4) a well-defined and well-designed concept of operations for the use of these technologies is important to ensure that they work effectively in the rail environment; and 5) cost and potential legal implications are important policy considerations when determining whether and how to use these technologies. Although there is a shared responsibility for securing the passenger rail environment, the federal government and rail operators have differing roles, which could complicate decisions to fund and implement technologies. More specifically, while passenger rail operators are responsible for the day-to-day security measures in their stations, including funding them, they utilize federal grant funding to supplement their security budgets. While federal grant funding for security has increased in recent years, decision making for funding these measures, including technology, is likely to continue to be shared between the rail operators and the federal government. In addition, as federal agencies implement their own rail security measures and operations, which could include the use of explosives detection technology, decisions about how to implement and coordinate these measures will likely be shared with operators. Regarding the federal role, TSA defines and implements federal policies and actions for securing passenger rail systems in its role as the lead federal agency responsible for transportation security. TSA's strategy for securing passenger rail, including its role in developing and procuring technologies for securing rail systems, is identified in the Mass Transit Modal Annex to the Transportation Systems Sector-Specific Plan. To date, TSA's primary approach to securing passenger rail, as defined in the Modal Annex, has been to assess the risk facing rail systems, develop security guidance for rail operators, and provide funding to operators to make security improvements to their systems, including the purchase of security technologies. Specifically, TSA's stated objective for using technology in passenger rail is to bolster the use of technologies to screen passengers and their bags on a random basis in partnership with rail operators. According to the Modal Annex, this objective is to be achieved through the use of explosives detection technology to screen passengers during TSA Visible Intermodal Prevention and Response (VIPR) operations and through screening programs introduced by passenger rail operators themselves. In addition, through its National Explosives Detection Canine Team Program (NEDCTP), TSA procures, trains, and certifies explosives detection canine teams and provides the canines and training to passenger rail operators. TSA also supports the use of technology by providing funding through TSGP for rail operators to purchase screening technologies and train their employees.
To date, TSGP has provided funding for various security-related technologies, including handheld explosive trace detection equipment, closed-circuit television, intrusion detection devices, and others. In June 2009, we reported that the TSGP faces a number of challenges, such as a lack of clear roles and responsibilities in the program and delays in approving projects and making funds available to operators. As of February 2009, of the $755 million that had been awarded by TSGP for fiscal years 2006 through 2008, approximately $334 million had been made available to transit agencies, and transit agencies had spent about $21 million. We further reported that these delays were caused largely by TSA's lengthy cooperative agreement process with transit agencies, a backlog in required environmental reviews, and delays in receiving disbursement approvals from FEMA. As a result, rail operators have spent a small percentage of the resources available to fund security investments. We recommended that DHS establish and communicate to rail operators time frames for releasing funds after projects receive approval from TSA. DHS agreed with this recommendation and indicated that it would establish and communicate time frames for releasing funds to TSGP grantees and try to release funds shortly after receiving all required documentation from grant recipients. Additionally, in a March 2010 report, the administration's Surface Transportation Security Priority Assessment recommended that TSA adopt a multi-year, multi-phase approach for grant funding based on a long-term strategy for transportation security. This approach calls for segmenting larger projects into smaller components to complete the projects more quickly, support strategic planning for future grant funding needs, and provide closer alignment of federal and stakeholder long-term priorities. Moreover, during our expert panel, rail operators stated that they would prefer the federal government to procure and provide security technologies to them, instead of providing cash awards for the operators to procure the technologies directly. These operators indicated that their local procurement regulations can often make the process of procuring security technologies slow and cumbersome. In addition to providing funding for technology, the Modal Annex also identifies TSA's role in providing resources for research, development, testing, and evaluation of technology. TSA, like other DHS components, is responsible for articulating the technology needs of all transportation sector stakeholders—including passenger rail operators—to DHS S&T for development. Although TSA and DHS have worked to develop some security technologies specific to passenger rail systems, the technologies they have pursued could work across different transportation modes, including aviation, maritime, mass transit, and passenger rail. TSA officials told us that they look for opportunities to take advantage of technologies in transportation modes other than those for which they were originally developed. However, the TSA officials indicated that certain characteristics of passenger rail may not allow the deployment of technologies developed for other modes, such as aviation. In addition to its work with S&T, TSA has commissioned its own research efforts, including pilot programs designed to test existing explosives detection equipment in the rail environment and the use of standoff technologies in the passenger rail environment.
Additionally, the administration recommended in its March 2010 report that TSA, DHS S&T, and other agencies directly involve rail operators in setting surface transportation research and development priorities. TSA also provides technological information to rail operators through the Public Transit Portal of the Homeland Security Information Network (HSIN) and maintains a Qualified Products List (QPL) of technologies that have been qualified for use in aviation. As we reported in June 2009, the information on HSIN is in an early state of development and contains limited information that would be useful to rail operators. For example, for a given security technology, TSA's list of technologies provides a categorical definition (such as video motion analysis), a subcategory (such as day or night camera), and the names of products within those categories. We also reported that the list on HSIN neither provides information beyond the product's name and function nor indicates how rail operators can obtain such information, including information on the product's capabilities, maintenance, ease of use, and suitability in a rail environment. We recommended that TSA explore the feasibility of expanding the security technology information in HSIN, including adding information on cost, maintenance, and other factors to support passenger rail agencies' purchases and deployment of these technologies. TSA concurred with this recommendation and stated that it would provide information on HSIN about specifications, performance criteria, and evaluations of security technologies used in or adaptable to the passenger rail environment. In January 2010, TSA officials told us that they were still planning to provide this information on HSIN sometime in 2010 but had not yet done so. TSA officials told us that in addition to the QPL for aviation, there is another list, administered by FEMA and called the Authorized Equipment List, which identifies technologies that TSGP grant recipients can purchase with grant funding. According to TSA officials, the Authorized Equipment List is available on HSIN, and there is one explosives detection technology on the list—a handheld explosive trace detector. Passenger rail operators that attended our expert panel stated that they would like TSA to pursue research more directly related to rail and provide additional information on which technologies are best for use in rail, including a list of "approved" or recommended technologies. TSA officials told us that they are currently developing minimum standards for technologies for modes of transportation other than aviation but did not provide a time frame for completing this effort. Once these standards are developed, they envision adding categories for other modes of transportation, such as rail, to the QPL. Additionally, the administration's March 2010 Surface Transportation Security Priority Assessment recommended that TSA, along with DHS S&T, establish a fee-based, centrally managed "clearinghouse" to validate new privately developed security technologies that meet federal standards. In contrast to the federal role, passenger rail operators and local government stakeholders are responsible for the day-to-day security of rail systems, including the purchase, installation, and operation of any explosives detection technologies. As such, operators consider their own unique security and operational needs when deciding whether and to what extent to use these technologies.
While the operators have responsibility for securing their systems, the operators that attended our panel told us that limited resources often constrain their ability to invest directly in security, including technology, and that they instead look to the federal government for financial assistance. For example, rail operators that we spoke to and that attended our expert panel noted that they often do not collect sufficient revenue from their fares to cover operational expenses. In June 2009, we reported that while the majority of rail operator actions to secure passenger rail have been taken on a voluntary basis, the pending 9/11 Commission Act regulations outline a new approach that sets forth mandatory requirements, such as requirements for employee training, vulnerability assessments, and security plans, among others, the implementation of which may create challenges for TSA and industry stakeholders. In general, TSA has taken a collaborative approach, encouraging passenger rail systems to voluntarily participate in addressing security gaps. We also reported that with TSA's pending issuance of regulations required by the 9/11 Commission Act, TSA will fundamentally shift this approach and establish new regulatory requirements for passenger rail security. TSA officials stated that they do not see the 9/11 Commission Act requirements impacting TSA's current role as it relates to technologies in the passenger rail environment. Because of the unique characteristics of the rail environment and the fact that the 9/11 Commission Act does not impose specific requirements related to technologies, TSA officials stated that the agency's role will continue to be to assist rail operators in conducting random deployments of explosives detection technologies and inspections, as stated in the Modal Annex. As passenger rail operators consider the use of explosives detection technologies, it is important to select technologies that not only are capable of detecting explosives and can be used in the passenger rail environment but also address identified risks. We have recommended that a risk management approach be used to guide the investment of security funding, particularly for passenger rail systems, where security funding and rail operator budgets are limited. As such, the decision whether to deploy explosives detection technologies should be made consistent with a risk management framework to ensure that limited security budgets are expended to address the greatest risks. We reported in June 2009 that officials from 26 of 30 transit and passenger rail systems we visited stated that they had conducted their own assessments of their systems, including risk assessments. Additionally, Amtrak officials stated that they conducted a risk assessment of all of their systems. As part of the assessment, Amtrak contracted with a private consulting firm to provide a scientific basis for identifying critical points at stations that might be vulnerable to IED attacks or that are structurally weak. We also reported that other transit agencies indicated that they have received assistance in the form of either guidance or risk assessments from federal and industry stakeholders.
For example, FTA provided on-site technical assistance to the nation's 50 largest transit agencies (i.e., those transit agencies with the highest ridership) on how to conduct threat and vulnerability assessments, among other forms of technical assistance, through its Security and Emergency Management Technical Assistance Program (SEMTAP). According to FTA officials, although FTA continues to provide technical assistance to transit agencies, the on-site SEMTAP program concluded in July 2006. Furthermore, FTA officials stated that on-site technical assistance was transferred to TSA when TSA became the lead agency on security matters for passenger rail. In addition, multiple federal agencies recommend the use of risk-based principles in assessing risk and making investment decisions. DHS's National Infrastructure Protection Plan states that implementing protective programs based on risk assessment and prioritization enables DHS, sector-specific agencies, and other security partners to enhance current critical infrastructure and key resources protection programs and develop new programs where they will offer the greatest benefit. Further, TSA's Modal Annex advocates using risk-based principles to secure passenger rail systems, and we have previously reported that TSA has used various threat, vulnerability, and consequence assessments to inform its security strategy for passenger rail. In June 2009, we reported that TSA had not completed a risk assessment of the entire passenger rail system and recommended that it do so, which would enable TSA to better prioritize risks and more confidently assure that its programs are directed toward the highest priority risks. TSA concurred with this recommendation and stated that it is developing a Transportation Systems Security Risk Assessment that aims to provide TSA with a comprehensive risk assessment for use in passenger rail. To this end, TSA told us that it has developed a Transportation Systems Sector Risk Assessment report, which is to evaluate threat, vulnerability, and consequence in more than 200 terrorist attack scenarios involving passenger rail. Moreover, TSA also indicated that it is developing and fielding a risk assessment capability focused on individual passenger rail agencies. This effort includes, among other things, a Baseline Assessment for Security Enhancement for rail operators, a Mass Transit Risk Assessment, and an Under Water Tunnel Assessment. Rail operators with whom we spoke or who attended our expert panel noted the importance of using risk management practices to allocate limited resources. TSA's Modal Annex calls for a flexible, layered, and unpredictable approach to securing passenger rail, while maintaining an efficient flow of passengers and encouraging the expanded use of the nation's rail systems. Expanding the use of explosives detection technology is one of the layers of security identified by the Modal Annex. When considering whether to fund or implement explosives detection technologies, it will be important for policymakers to weigh how explosives detection technology would complement other layers of security, its impacts on those layers, and the security benefits that would be achieved. For example, one rail operator who attended our expert panel told us that it used deployments of explosives detection technologies along with customer awareness campaigns and CCTV as layers of security in its security posture.
In addition to explosives detection technology, other layers of security that rail operators have used or are considering using to secure passenger rail include the following:

Customer awareness campaigns. Rail operators use signage and announcements to encourage riders to alert train staff if they observe suspicious packages, persons, or behavior. We have previously reported that of the 32 rail operators we interviewed, 30 had implemented a customer awareness program or made enhancements to an existing program.

Increased number and visibility of security personnel. Of the 32 rail operators we previously interviewed, 23 had increased the number of security personnel they utilized since September 11, 2001, to provide security throughout their system or had taken steps to increase the visibility of their security personnel. Further, these operators stated that increasing the visibility of security is as important as increasing the number of personnel. For example, several U.S. rail operators we spoke with had instituted policies such as requiring their security staff, wearing brightly colored vests, to patrol trains or stations more frequently, so they are more visible to customers and to potential terrorists or criminals. These policies also make it easier for customers to contact security personnel in an emergency or potential emergency.

Employee training. All 32 of the rail operators we previously interviewed had provided security training to their staff, which largely consisted of ways to identify suspicious items and persons and how to respond to events.

CCTV and video analytics. As we previously reported, 29 of 32 U.S. rail operators had implemented some form of CCTV to monitor their stations, yards, or trains. Some rail operators have installed "smart" cameras, which use video analytics to alert security personnel when suspicious activity occurs, such as when a passenger leaves a bag in a certain location or a person enters a restricted area. According to one passenger rail operator we spoke with, this technology was relatively inexpensive and not difficult to implement. Several other operators stated they were interested in exploring this technology.

Rail system design and configuration. In an effort to reduce vulnerabilities to terrorist attack and increase overall security, passenger rail operators are incorporating security features into the design of new and existing rail infrastructure, primarily rail stations. Of the 32 rail operators we previously interviewed, 22 had removed their conventional trash bins entirely or replaced them with transparent or bomb-resistant trash bins, and 22 stated they were incorporating security into the design of new or existing rail infrastructure.

In deploying explosives detection technologies, it is important to develop a concept of operations (CONOPS) both for using these technologies to screen passengers and their belongings and for responding to identified threats. A CONOPS for passenger rail would include specific plans to respond to threats without unacceptable impacts on the flow of passengers through the system. There are multiple components of a CONOPS. First, operators identify likely threats to rail systems and choose layers of security to mitigate these threats. Since each rail system in the United States faces different risks, rail systems perform their own risk assessments in consultation with federal partners to identify their risks.
Using the results of the risk assessment, each system crafts a strategy to respond to the threat and to mitigate the risks by acquiring different layers of security. Rail systems typically make use of multiple security layers—which may or may not include an explosives detection technology component—based on the risks each system faces. The CONOPS is a plan to respond to threats identified by one of the layers of security. Developing a CONOPS for responding to alerts from explosives detection technology is challenging because of the potential for false alarms. For example, two rail operators with whom we spoke and that were using explosives detection technologies to screen passengers and their belongings stated that a CONOPS was critical for ensuring that actions taken in response to an alarm are appropriate and are followed correctly. For example, should the person be questioned or searched further, or should the person be moved to another location in case the threat is real? These are questions that would be answered in developing a CONOPS, before implementing explosives detection technology in the passenger rail environment. Two of the rail operators and one of the experts that attended our panel also expressed concern about the potential for false alarms when using explosives detection technologies and the potential impacts on rail operations. For example, operators were concerned about a false alarm stopping service. As a result, it is important to carefully consider the CONOPS for using a particular technology, such as how to respond to false alarms, in addition to the security benefits, before implementation. For instance, one major rail operator's CONOPS involves a law enforcement officer using handheld explosives detection technology to randomly screen passengers' baggage. The frequency with which bags are selected is determined in advance by someone other than the law enforcement officer—such as a supervisor—based on a number of factors, such as the number of passengers entering a station and the resources available for screening. The baggage is then screened by the officer with the explosives detection equipment; if there is no alarm, the passenger is free to continue. Should the bag alarm, the officer then questions the passenger to determine the source of the alarm and, if necessary, takes action to respond to the threat. Cost is an important consideration for rail system security investments, as all operators have limited resources to devote to security. For example, all of the rail operators that we spoke with and that attended our expert panel expressed the view that obtaining funds for security priorities is challenging. Nearly all domestic rail systems operate at a deficit, in which their revenues from operations do not cover their total cost of operations. An official from the industry association representing passenger rail and mass transit systems who attended our expert panel stated that because systems often operate with budget deficits, security investments often become a lower priority than operational investments. In addition, another rail operator that attended our expert panel raised the concern that TSGP often will not provide funding for ongoing maintenance of capital purchases, additional staff needed to deploy these technologies, and disposable items required to operate the technology, such as swabs for explosive trace detection devices.
For example, while rail operators can use TSGP grant funds to purchase explosives detection equipment, funding for the operation and maintenance of this technology is only provided for a 36-month period. One major rail operator that attended our expert panel stated that deploying a random baggage check with a handheld explosive trace detector costs between $700 and $1,000 per hour, including staff salaries and disposable items. Given the cost of operating and maintaining these security technologies, it would be important for policymakers to consider all associated costs of these technologies before implementing new security measures or encouraging their use. Legal implications with regard to constitutional and tort law would also be important for passenger rail operators to consider when determining whether and how explosives detection technologies are applied in the passenger rail environment. The Fourth Amendment of the U.S. Constitution protects individuals against unreasonable governmental searches, and state constitutional law may provide additional protections against searches. In recent years, federal courts have heard several challenges to new passenger inspection programs implemented in passenger rail environments. In these cases, in order to assess the constitutionality of the programs, the courts considered factors such as the intrusiveness of the searches, the government interest in the program, and the effectiveness of the program. In addition to constitutional concerns, taking actions to mitigate potential tort liability is another important consideration for rail operators. For example, state law may allow individuals to bring tort claims against transit agencies, such as claims related to invasion of privacy and health hazards posed by scanning equipment. Also, operators using explosives detection canines should be conscious of potential claims related to dog bites. There are also privacy considerations associated with subjecting passengers to certain types of screening technologies. Because explosives detection technologies generally do not collect personally identifiable information, they pose fewer privacy concerns than other screening techniques might. However, a number of advocacy groups have raised concerns about the use of AITs, which produce an image of a person without clothing. To protect passengers' privacy, however, privacy settings have been introduced that blur passengers' images. Concerns also exist about the impact that certain technologies could have on the health of passengers. For example, certain types of explosives detection screening equipment may expose individuals to mild radiation. Specifically, technologies such as backscatter x-ray AIT expose the passenger to minute amounts of radiation. While this radiation exposure is smaller than the radiation a person receives from a normal medical x-ray, the public may have concerns about being exposed to any radiation or may misjudge the amount of radiation they receive. For example, according to TSA, a person would require more than 1,000 backscatter scans in a year to reach the effective dose equal to one standard chest x-ray. Additionally, some forms of IMS technology make use of radiation in their operation, and some people may be concerned with having any radiation source in a rail network. Finally, some passenger rail systems operate across multiple city, county, and other jurisdictions and must coordinate with local governments and law enforcement across these areas.
For example, the Washington Metropolitan Area Transit Authority was established by an interstate compact between Maryland, Virginia, and the District of Columbia. The authority has its own police force and must coordinate not only with the police force of the District of Columbia but also with those of the surrounding communities through which its trains pass. This pattern is common across the country, where public transportation systems cross state and local boundaries. As such, the use of explosives detection equipment throughout these networks involves coordination across many levels of government and may potentially invoke the laws of multiple jurisdictions and come under the scrutiny of different governments. Securing passenger rail systems is a daunting challenge for several reasons, including the open nature of these systems and the relative ease with which, and the number of locations at which, these systems can be accessed by those wishing to cause harm. While there are some explosives detection technologies available or currently in development that could be used to help secure passenger rail, there are several technical, operational, and policy factors that are important to consider when determining the role that these technologies can play in passenger rail security. There are various stakeholders responsible for securing passenger rail systems, and all may need to be involved when making decisions to fund, implement, and operate explosives detection technologies. It is also important that the need for explosives detection technologies be based on a consideration of the risks posed by the threat of an explosives attack on passenger rail systems. Such a risk assessment would help define the detection needs, including what explosives materials need to be detected and in what quantities. Explosives detection technologies are just one of many layers of security and cannot, by themselves, secure passenger rail systems. While explosives detection technologies can play a role in securing passenger rail systems, certain aspects of these technologies will likely limit their immediate use. All of the technologies face key challenges, including the ability to screen passengers without undue delays. In some cases, the ability to detect more conventional explosives is also limited. The ability of these technologies to effectively detect explosives on people and their belongings, as well as the expectations of the public for openness and speed when using rail, will likely be key drivers in decisions about which technologies should be applied, and in what capacity. Other important characteristics of the technologies, including the mobility, durability, and size of the equipment, may limit deployment options for explosives detection technologies in passenger rail. The ability of these technologies to effectively detect explosives often depends on a human operator, and the development of a strong concept of operations that defines the processes used to screen passengers and their belongings and the roles that people and technology play in that process will be critical. When considering the options for securing passenger rail, it is important that policymakers also take into account the cost and legal implications of securing systems that are so open and widely used by the public. The limited funding available in passenger rail operator budgets means that the purchase and maintenance of explosives detection technologies would likely be funded or heavily subsidized by the federal government.
Moreover, the widespread use of and reliance on these systems by the public means that individuals and advocacy groups may raise concerns about any technology that screens passengers or their belongings. An effective risk management process that continuously examines the risks posed by explosives to the passenger rail environment and weighs the various technical, operational, and policy considerations when determining alternative solutions to address those risks should result in an effective identification of the role that explosives detection technologies can play in securing passenger rail. We provided draft copies of this report to the Secretaries of Homeland Security, Defense, Transportation, Justice, and Energy for review and comment. DHS's TSA and the Department of Transportation provided technical comments, which we have incorporated as appropriate. The National Nuclear Security Administration of the Department of Energy agreed with our report and also provided technical comments, which we incorporated as appropriate. The Department of Defense provided technical comments, which we have incorporated as appropriate. The Department of Justice stated that it had no comments on the draft report. We will send copies of this report to the Secretaries of Homeland Security, Defense, Transportation, Justice, and Energy, and appropriate congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov or David Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To determine what explosives detection technologies are available and their ability to help secure the passenger rail environment, we met with experts and officials on explosives detection research, development, and testing, and reviewed test, evaluation, and pilot reports and other documentation from several components within the Department of Homeland Security, including the Science and Technology Directorate; the Transportation Security Laboratory; the Transportation Security Administration (TSA); the Office of Bombing Prevention; and the United States Secret Service; several Department of Defense (DOD) components, including the Naval Explosive Ordnance Disposal Technology Division (NAVEODTECHDIV), the Technical Support Working Group (TSWG), and the Joint Improvised Explosive Device Defeat Organization (JIEDDO); several Department of Energy (DOE) National Laboratories involved in explosives detection testing, research, and development, including Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and Idaho National Laboratory (INL); and the Department of Justice (DOJ), including the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), because of its expertise in explosives detection. We also observed explosives detection canine testing at ATF's National Canine Training and Operations Center in Front Royal, Virginia. We also observed a TSA pilot test of a standoff explosives detection system at a rail station within the Port Authority Trans-Hudson passenger rail system (PATH).
In addition, we made site visits to LANL and SNL to observe the research and development work being done and to interview experts on explosives detection technologies. We also interviewed several manufacturers of explosives detection technologies and attended an industry-wide exhibition and demonstration of explosives detection equipment products. In addition, we attended a symposium and workshop on explosives detection organized by DOD's Combating Terrorism Technical Support Office, the 2009 DOD Explosive Detection Equipment Program Review at NAVEODTECHDIV, and an academic workshop on explosives detection at DHS's Center of Excellence for Explosives Detection, Mitigation, and Response at the University of Rhode Island. We also interviewed government officials involved with securing passenger rail in the United Kingdom. Finally, we visited six domestic passenger rail locations that were involved in testing various types of explosives detection technologies to either observe the testing or discuss the results of these tests with operators. Table 3 is a listing of the passenger rail locations we visited. In determining which explosives detection technologies were available and able to help secure the passenger rail environment, we considered those technologies available today or deployable within 5 years, technologies which could be used to screen either passengers or their carry-on items, and technologies which were safe to use when deployed in public areas. In determining the capabilities and limitations of explosives detection technologies, we evaluated their detection and screening throughput performance, reliability, availability, cost, operational specifications, and possible use in passenger rail. We also restricted our evaluation to those technologies which have been demonstrated, through tests, evaluations, and operational pilots, to detect explosives when tested against performance parameters established by government and military users of the technologies. We also obtained the views of various experts and stakeholders during a panel discussion we convened with the assistance of the National Research Council on August 11-12, 2009. Panel attendees included 23 experts and officials from academia, the federal government, domestic and foreign passenger rail industry organizations, technology manufacturers, national laboratories, and passenger rail industry stakeholders such as local law enforcement officials and domestic rail operators. During this meeting, we discussed the availability and applicability of explosives detection technologies for the passenger rail environment and the operational and policy implications of implementing these technologies in the rail environment. While the views expressed during this panel are not generalizable across all fields represented, they did provide an overall summary of the currently available explosives detection technologies and industry views on their applicability to passenger rail. To determine what key operational and policy factors could have an impact in determining the role of explosives detection technologies in the passenger rail environment, we reviewed documentation related to the federal strategy for securing passenger rail, including TSA's Mass Transit Modal Annex to the Transportation Systems Sector-Specific Plan, and other documentation, including DHS reports summarizing explosives detection technology tests conducted in passenger rail, to better understand the role and impact that these technologies have in the passenger rail environment. We reviewed relevant laws and regulations governing the security of the transportation sector as a whole and passenger rail specifically, including the Implementing Recommendations of the 9/11 Commission Act. We also reviewed our prior reports on passenger rail security, as well as studies and reports conducted by outside organizations, such as the National Academies, the Congressional Research Service, and others, related to passenger rail or the use of technology to secure passenger rail, to better understand the existing security measures used in passenger rail and related operational and policy issues. During our interviews and the expert panel mentioned above, we also discussed and identified officials' views related to the key operational and policy issues of using explosives detection technologies to secure passenger rail. While these views are not generalizable to all industries represented by these officials, they provided a snapshot of the key operational and policy views. During our visits to the six rail operator locations involved in explosives detection testing, we interviewed officials regarding operational and policy issues related to technology and observed passenger rail operations. We selected these locations because they had completed or were currently conducting testing of the use of explosives detection technology in the rail environment and to provide the views of a cross-section of heavy rail, commuter rail, and light rail operators. While these locations and officials' views are not generalizable to the entire passenger rail industry, they provided us with a general understanding of the operational and policy issues associated with using such technologies in the rail environment. In addition, we utilized information obtained and presented in our June 2009 report on passenger rail security. For that work, we conducted site visits to, or interviewed security and management officials from, 30 passenger rail agencies across the United States and met with officials from two regional transit authorities and Amtrak. The passenger rail operators we visited or interviewed for our June 2009 report represented 75 percent of the nation's total passenger rail ridership, based on information we obtained from the Federal Transit Administration's National Transit Database and the American Public Transportation Association. In addition to the contacts named above, contributors to this report include Amy Bowser, William Carrigg, Nirmal Chaudhary, Frederick K. Childers, Christopher Currie, Andrew Curry, Richard Hung, Lara Ka, Leyla Kazaz, Tracey King, Robert Lowthian, and Maria Stattel.
Passenger rail systems are vital to the nation's transportation infrastructure, providing approximately 14 million passenger trips each weekday. Recent terrorist attacks on these systems around the world--such as the 2010 attack in Moscow, Russia--highlight the vulnerability of these systems. The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) is the primary federal entity responsible for securing passenger rail systems. In response to the Legislative Branch Appropriations Act for fiscal year 2008, GAO conducted a technology assessment that reviews 1) the availability of explosives detection technologies and their ability to help secure the passenger rail environment, and 2) key operational and policy factors that impact the role of explosives detection technologies in the passenger rail environment. GAO analyzed test reports on various explosives detection technologies and convened a panel of experts composed of a broad mix of federal, technology, and passenger rail industry officials. GAO also interviewed officials from DHS and the Departments of Defense, Energy, Transportation, and Justice to discuss the effectiveness of these technologies and their applicability to passenger rail. GAO provided a draft of this report to these departments for comment. Four departments provided technical comments, which we incorporated as appropriate. A variety of explosives detection technologies are available or in development that could help secure passenger rail systems. While these technologies show promise in certain environments, their potential limitations in the rail environment need to be considered and their use tailored to individual rail systems. The established technologies, such as handheld, desktop, and kit-based trace detection systems and x-ray imaging systems, as well as canines, have demonstrated good detection capability with many conventional explosive threats, and some are in use in passenger rail today. Newer technologies, such as explosive trace portals, advanced imaging technology, and standoff detection systems, while available, are in various stages of maturity, and more operational experience would be required to determine their likely performance if deployed in passenger rail. When deploying any of these technologies to secure passenger rail, it is important to take into account the inherent limitations of the underlying technologies as well as other considerations such as screening throughput, mobility, durability, and physical space limitations in stations. GAO is not making recommendations but is raising various policy considerations. For example, in addition to how well technologies detect explosives, GAO's work, in consultation with rail and technology experts, identified several key operational and policy considerations impacting the role that these technologies can play in securing the passenger rail environment. Specifically, while there is a shared responsibility for securing the passenger rail environment, the federal government, including TSA, and passenger rail operators have differing roles, which could complicate decisions to fund and implement explosives detection technologies. For example, TSA provides guidance and some funding for passenger rail security, but rail operators themselves provide the day-to-day security of their systems. In addition, risk management principles could be used to guide decision-making related to technology and other security measures and target limited resources to those areas at greatest risk.
Moreover, securing passenger rail involves multiple security measures, with explosives detection technologies just one of several components that policymakers can consider as part of the overall security environment. Furthermore, developing a concept of operations for using these technologies and responding to threats that they may identify would help balance security with the need to maintain the efficient and free-flowing movement of people. A concept of operations could include a response plan for how rail employees should react to an alarm when a particular technology detects an explosive. Lastly, in determining whether and how to implement these technologies, federal agencies and rail operators will likely be confronted with challenges related to the costs and potential privacy and legal implications of using explosives detection technologies.
As originally designed, CDS are bilateral contracts that are sold over the counter and transfer credit risks from one party to another. The seller, who is offering credit protection, agrees, in return for a periodic fee, to compensate the buyer, who is purchasing the protection, if a specified credit event, such as default, occurs (see fig. 1). There are three standard types of CDS contracts, depending on the underlying reference entity. A single-name CDS is based on a single reference entity such as a bond, institution, or sovereign entity. A multi-name CDS references more than one corporate or sovereign entity and can be divided into those that reference at least 2 but not more than 10 entities and those that reference more than 10 entities. An index CDS is based on an index that may include 100 or more corporate entities. The contract term often ranges from 1 to 10 years, with most standard CDS contracts having a 5-year duration. Participants in the CDS market include commercial banks, broker-dealers, hedge funds, asset managers, pension funds, insurance and financial guaranty firms, and corporations. CDS can provide a number of benefits, such as giving some market participants another tool to manage credit risk. They also are a way to replicate an investment in a debt instrument such as a bond. However, in 2008, as the United States and the world faced one of the worst financial crises in history, some market observers identified CDS as one of several financial products they believed had contributed to the overall tightening in the credit markets following the bankruptcy of Lehman Brothers and the near-collapse of American International Group (AIG), which was a major CDS seller. Although authoritative information about the actual size of the market is generally not available, some have estimated the amount of outstanding contracts—as measured by the notional amount of the CDS contracts—at over $50 trillion in 2008. However, more recent figures place the notional amount at around $28 trillion, in part reflecting trade compression efforts. These market events and the estimated size of the CDS market have raised concerns about the risks that CDS and similar financial products may pose to the stability of the financial system. Furthermore, questions have been raised about the current level and structure of oversight of CDS and their impact on the financial system. In the last 3 years, CDS market participants and financial regulators have been taking actions to help mitigate various risks and challenges related to CDS activities, with a particular focus on the market's infrastructure. In the United States, federal financial oversight of CDS is limited. Banks, whose activities as CDS dealers account for a large percentage of CDS trading, are subject to safety and soundness oversight by banking regulators. Bank regulators therefore have the authority to act on their concerns about the extent to which a banking organization's CDS trading affects the health of the bank. However, oversight of banks acting as dealers does not directly extend into the CDS product market itself. In addition, federal financial market regulators—primarily SEC and CFTC—are generally limited or restricted in their ability to oversee CDS broadly as a product because they lack statutory authority. SEC has antifraud and antimanipulation authority over CDS, but it may face challenges in enforcing this authority because of statutory restrictions on its rule-making ability.
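To make the contract mechanics described above concrete, the following is a minimal sketch of the two cash-flow legs of a single-name CDS. The quarterly payment schedule, the 40 percent recovery assumption, and the specific numbers used are illustrative market conventions assumed for this example; they are not terms drawn from this report, and the sketch ignores discounting and accrued premium.

    def cds_cash_flows(notional, spread_bps, years, recovery_rate=0.40,
                       default_year=None, payments_per_year=4):
        # Premium leg: the protection buyer pays the running spread on the
        # notional each period until maturity or until a credit event occurs.
        # Protection leg: if a credit event occurs, the seller pays the
        # notional times (1 - assumed recovery rate).
        premium_per_period = notional * (spread_bps / 10_000) / payments_per_year
        premiums_paid = 0.0
        for period in range(1, years * payments_per_year + 1):
            year_of_payment = period / payments_per_year
            if default_year is not None and year_of_payment > default_year:
                break  # premium payments stop after the credit event
            premiums_paid += premium_per_period
        protection_payout = 0.0
        if default_year is not None and default_year <= years:
            protection_payout = notional * (1 - recovery_rate)
        return premiums_paid, protection_payout

    # Illustrative, assumed values: $10 million notional, 100 basis point
    # spread, 5-year contract, hypothetical credit event at the end of year 3.
    print(cds_cash_flows(10_000_000, 100, 5, default_year=3))
    # -> (300000.0, 6000000.0): $300,000 in premiums paid, $6 million payout.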
Federal financial regulators have sought to address potential systemic threats arising from CDS activities mainly through collaborative efforts with other supervisors and key market participants. While U.S. federal financial regulators do not have authority over CDS as a product, in the United Kingdom, which has a CDS market comparable in size to the U.S. market, FSA has authority over most CDS products. However, its regulatory efforts have generally been pursued in collaboration with U.S. regulators. Federal banking regulators can oversee the CDS activity of the financial institutions they supervise. These regulators' oversight captures most CDS activity because banks act as dealers in the majority of transactions. All of the major CDS dealers are commercial banks or subsidiaries of bank or financial holding companies that are subject to regulation by U.S. or foreign holding company regulators. Also, bank regulators have some authority to review the effect of a bank's relations with an affiliate on the health of the bank. However, bank regulators do not regulate the CDS markets. Moreover, bank regulators generally do not differentiate CDS from other types of credit derivatives in their supervision of institutions, because most credit derivatives volume consists of CDS. Regulators focus their oversight on institutions' derivatives portfolios regardless of their structure. Banking regulators' oversight of CDS activity is largely limited to activity that is deemed to pose risks to the safety and soundness of the institutions they regulate. Accordingly, federal banking regulators oversee dealer banks in the United States mainly as part of their ongoing examination programs. However, as we reported in 2008, some regulators continued to be concerned about the counterparty credit risk created when regulated financial institutions transacted with entities that were less regulated, such as hedge funds, because these activities could be a primary channel for potential systemic risk. FRS officials explained that when examiners identified an increasing use of credit derivatives at certain regulated banks, they expanded the scope of their examinations to include a review of risks arising from the banks' trading of these products. These exams generally were broad in scope, although occasionally they focused on CDS, and assessed the products' financial risk and the way banks monitored and managed that risk. According to officials, some of the examination findings included concerns related to the management of counterparty credit risk, including collateral practices, risk management systems, models for risk identification, and governance issues. OCC officials explained that, as the prudential regulator of the large dealer banks, OCC's on-site examiners conducted ongoing risk-focused examinations of the more complex banking activities, which could include CDS transactions. OCC targets its risk-focused examinations using risks or trends that it notices across banks. According to OCC officials, its on-site examiners monitor derivatives activity daily in the large dealer banks and look for trends and exceptions in the banks' information to gauge risk. For example, they may examine new counterparties that have not gone through an internal counterparty review process. OCC also conducts a quarterly analysis of the derivatives market using call report data submitted by all insured U.S. commercial banks to evaluate risks from trading activities, including CDS, in the national banking system.
However, this oversight does not provide a clear snapshot of potential concentrations of risk in participants outside of national banks. Similarly, FRBNY collects data from OTC derivatives dealers that participate in an FRBNY-led initiative to improve the operational infrastructure for CDS, including information on operational metrics such as confirmation backlogs and transaction volumes, but not on CDS exposures. Under consolidated supervision, some subsidiaries of holding companies that engage in CDS activities may not receive the same degree of monitoring as regulated entities receive from their prudential supervisors. OCC officials explained that, while most CDS activity is conducted in banking entities because CDS trading is a permissible bank activity, some derivatives activity is conducted in nonbank subsidiaries of holding companies. OCC, like other federal bank regulators, has authority to review how a bank's relations with an affiliate (specifically, an affiliate that is not a subsidiary of the bank) affect the health of the bank. However, OCC supervises the bank, not the affiliate. In such cases, OCC officials said that they would collaborate with FRS to examine activity in the other nonbank subsidiaries if they deemed it necessary. Similarly, even though SEC oversees broker-dealers, the agency does not regulate the CDS markets in which they deal. Until September 2008, SEC provided oversight of major investment bank conglomerates at the consolidated level through its Consolidated Supervised Entity (CSE) program. According to SEC officials, investment banks generally conducted CDS transactions in subsidiaries not registered as U.S. broker-dealers, and therefore SEC did not have an ongoing on-site examination program for these entities. Rather, the CSE program monitored information aggregated at the holding company level that included the activities of these affiliates, including their CDS transactions. According to SEC, a significant part of the CSE supervision program was dedicated to monitoring and assessing market and credit risk exposures arising from trading and dealing activities. The CSE program conducted targeted exams related to three specific projects—reviews of liquidity pools, price verification of commercial real estate, and management of counterparty exposures—which SEC officials explained could include CDS activities but did not have CDS as a specific focus. Similarly, OTS is responsible for overseeing thrift holding companies through its consolidated supervision program. These entities include AIG, GE Capital Services, Morgan Stanley, and American Express Company, which are large global conglomerates with many subsidiaries. OTS does not conduct ongoing on-site examinations of all unregulated subsidiaries. OTS officials explained that the agency monitored the holding companies' enterprisewide risk-management practices to determine how the companies identified and managed risk and supplemented this monitoring with limited on-site visits of unregulated subsidiaries as it deemed necessary. For example, when AIG's external auditor identified internal control problems with AIG Financial Products, a nonthrift subsidiary that was active in the CDS market and ultimately identified as posing a systemic risk to the financial system because of its role in the market, OTS examined its operations. However, OTS officials told us that thrifts generally have engaged in limited CDS activities.
Federal financial regulators generally supplement data from their supervised entities or other information they collect with data from sources such as the International Swaps and Derivatives Association, Inc. (ISDA), the Bank for International Settlements, the British Bankers Association, and the rating agency Fitch to compare their banks to the larger universe of market participants. More recently, information has been available to regulators from the industry's central trade repository, the Trade Information Warehouse (TIW). Federal market regulators—SEC and CFTC—do not have authority to regulate the CDS markets directly. With respect to CDS trading, their authorities are limited or restricted. In 1999, the PWG unanimously urged Congress to adopt recommendations aimed at mitigating certain legal uncertainties related to OTC derivatives. One recommendation was to exclude from oversight certain bilateral transactions between sophisticated counterparties and to eliminate impediments to clearing OTC derivatives. A CDS is this type of transaction. Congress largely adopted the PWG recommendations when it passed the Commodity Futures Modernization Act of 2000 (CFMA). As a result, the Commodity Exchange Act (CEA) was amended to exclude the OTC CDS market from the regulatory and enforcement jurisdiction of CFTC. Federal securities laws also exclude CDS from SEC oversight, although SEC retains antifraud enforcement authority. SEC's authority over CDS activity conducted outside of a registered broker-dealer is generally limited to enforcing antifraud provisions, including prohibitions against insider trading. These provisions apply because CDS generally are considered security-based swap agreements under CFMA. However, because SEC is generally statutorily prohibited under current law from promulgating record-keeping or reporting rules regarding CDS trading in the OTC market outside of a registered broker-dealer, its ability to enforce its authority is limited. Nevertheless, in the past 3 years SEC has initiated a number of CDS-related enforcement cases for alleged violations of its antifraud prohibitions, including cases involving market manipulation, insider trading, fraudulent valuation, and financial reporting. More recently, in September 2008 SEC initiated an investigation into possible market manipulation involving CDS. In connection with the investigation, SEC announced that it would require certain hedge fund managers and other entities with CDS positions to disclose those positions to SEC and provide other information under oath. According to SEC, depending on the results, the investigation may lead to more specific policy recommendations regarding CDS. SEC officials indicated that investigations of OTC CDS transactions have been far more difficult and time-consuming than those involving exchange-traded equities and options because of the prohibition on requiring record keeping and reporting for CDS. The lack of clear and sufficient record-keeping and reporting requirements for CDS transactions has resulted in incomplete and inconsistent information being provided when requested, according to SEC officials. The officials said that this restriction had made it more difficult to investigate and take effective action against fraud and manipulation in the CDS market than in other markets SEC oversaw. In October 2008, the SEC Chairman requested that Congress remove the CFMA restrictions on SEC's rulemaking authority with respect to CDS.
The current Chairwoman has indicated that she supports removal of these restrictions as well. Federal financial regulators have sought to address potential systemic threats arising from CDS activities mainly through collaborative efforts with other supervisors and market participants. According to federal financial regulators, they address potential systemic risks by working closely with each other and international regulators to exchange information and coordinate the supervision of regulated market participants that could pose systemic risks to the financial system. Some of these collaborative forums include the PWG, the Senior Supervisors Group, the Basel Committee on Banking Supervision, the Financial Stability Forum, and the Joint Forum. However, it is unclear to what extent the activities of unregulated subsidiaries or other unregulated market participants were also being reviewed as part of these initiatives. FRS officials indicated that, in carrying out its responsibilities for conducting monetary policy and maintaining the stability of the financial system, the Federal Reserve monitored markets and concentrations of risk through data analysis and direct contact with market participants. According to FRS officials, in supervising banks and bank holding companies they focused on CDS activity as it pertained to institutional stability. FRS ensures that the appropriate infrastructure is in place so that the system can absorb "shocks." FRS officials explained that, by ensuring that important market participants could avoid the most adverse impacts from these shocks—such as through counterparty credit risk management—systemic risk could be mitigated. Over the last several years, FRS has identified opportunities to increase the market's resiliency to systemic shocks related to CDS—for example, by implementing a market process for settling CDS contracts, reducing the notional amounts of outstanding contracts, and improving the operational infrastructure of the CDS market in collaboration with other supervisors. For example, since September 2005 financial regulators in the United States and Europe have collaborated with the industry to improve the operational infrastructure of the CDS market and to improve counterparty risk management practices. However, some market participants and observers noted that the current regulatory structure did not enable any one regulator to monitor all market participants and assess potential systemic risks from CDS and other types of complex products. While U.S. regulators do not have authority over CDS as a product, in the United Kingdom, where available evidence suggests CDS volume is comparable to that in the United States, FSA has authority over most CDS products. FSA officials explained that most CDS-related regulatory efforts have been pursued in collaboration with U.S. regulators, such as the effort to improve the operational infrastructure for CDS that was led by FRBNY and the Senior Supervisors Group's effort to enhance risk management practices. FSA officials also explained that, more recently, the agency had been monitoring all aspects of OTC infrastructure and industry commitments, including central clearing for CDS, credit event settlement, collateral management processes, trade compression, and position transparency. Much of this monitoring is conducted through data collected directly from regulated firms. The New York State insurance supervisor also has authority to oversee certain aspects of insurers' OTC derivatives activities, including CDS transactions.
According to the New York State Insurance Department, it has regulated the use of derivatives by insurance companies, including CDS, since the late 1990s. The department is the primary regulator for most U.S. financial guaranty insurers (FGIs), which are also known as bond insurers. According to the department, aside from FGIs few insurance companies buy or sell CDS because New York state law generally prohibits insurers from significantly leveraging their portfolios. Insurance companies generally use CDS for hedging credit risk and for investment purposes. According to department officials, in its role as regulator for FGIs, the department ensures that insurance companies maintain consistent underwriting criteria and adequate reserves for these activities. Under New York law, insurers must file detailed disclosures about their derivatives transactions in their quarterly and annual statements. Also, prior to engaging in any derivatives activity insurers must file a derivatives use plan that documents their ability to manage derivatives transactions. According to department officials, the department has requested detailed information from FGIs and engages in ongoing dialogue with them concerning insurance contracts referencing CDS. However, if a company conducts its derivatives activities through subsidiaries that are not affiliated with its regulated insurance companies, the department's oversight may be limited. For example, the superintendent of the New York State Insurance Department testified that the department did not oversee the activities of AIG Financial Products because AIG Financial Products was not affiliated with the insurance companies the department regulates. Risks to financial institutions and markets from CDS include counterparty credit risk, operational risk, concentration risk, and jump-to-default risk. However, market participants suggested that the degree of risk associated with CDS varied depending on (1) the type of CDS, (2) the reference entity for the CDS, and (3) how the CDS was used. More specifically, CDS referencing ABS and CDOs, particularly those related to mortgages, were identified as posing greater risks to institutions and markets than other types of CDS. Other risks and challenges include the lack of transparency in CDS markets, the potential for manipulation related to the use of CDS as a mechanism for price discovery, and the use of CDS for speculative purposes. Regulators and market participants noted that some OTC derivatives may share similar risks. However, the degree of risk can vary substantially by product type. Equity derivatives specifically were identified as the OTC derivatives that were most similar to CDS in terms of the risks and challenges that they presented. The main risks from CDS include counterparty credit risk, operational risk, concentration risk, and jump-to-default risk. In simple terms, counterparty credit risk is the risk to each party in an OTC derivatives contract that the other party will not fulfill the obligations of the contract. In addition to potentially not receiving contractual payments, a purchaser of CDS whose counterparty fails would suddenly be left without protection and could either have to replace the CDS contract at current, higher market values or go without protection. Banks and other financial institutions that have large derivatives exposures use a variety of techniques to limit, forecast, and manage their counterparty risk, including margin and collateral posting requirements.
However, regulators, market participants, and observers identified several challenges in managing CDS counterparty credit risk. First, although margin and collateral posting serve as a primary means of mitigating the risk of loss if a counterparty does not perform on its contractual obligations, calculating margin and collateral amounts can be difficult because of the challenges associated with determining the actual amount of counterparty exposure and the value of the reference asset. Specifically, it may be difficult for market participants to agree on the valuation of CDS contracts on ABS and CDOs. Second, margining practices are not standardized and vary depending on the counterparty. For example, market participants and observers suggested that institutions with high credit ratings, for which exposures were considered to pose little credit risk, were not initially required to post collateral. These firms included bond insurers and AIG Financial Products, a noninsurance subsidiary of AIG. However, when some of these institutions’ ratings were downgraded, the institutions had difficulty meeting collateral calls. Third, the CDS market lacks comprehensive requirements for managing counterparty credit risk. More specifically, the bilateral collateral and margin requirements for OTC derivatives do not take into account the counterparty credit risk that each trade imposes on the rest of the system, allowing systemically important exposures to build up without sufficient capital to mitigate associated risks. The second type of risk that I would like to discuss is operational risk. This is the risk that losses could occur from human errors or failures of systems or controls. With CDS, there are several operational steps that are required to process trades, such as trade confirmation, which were not automated until recently and thus created backlogs in the system. In a report issued in 2007, we reported that these backlogs were largely due to a decentralized paper-based system and the assignment of trades to new parties without notifying the original dealer—a process known as novation. For instance, in September 2005, some 63 percent of trade confirmations (or 97,650) of the 14 largest credit derivatives dealers had been outstanding for more than 30 days. These large backlogs of unconfirmed trades increased dealers’ operational risk, because having unconfirmed trades could allow errors to go undetected that might subsequently lead to losses and other problems. Potential problems also existed in the operational infrastructure surrounding physical settlement, novation, and valuation of CDS. The third type of risk, concentration risk, refers to the potential for loss when a financial institution establishes a large net exposure in similar types of CDS. For example, AIG presented concentration risk because it sold a significant amount of CDS protection on related reference entities without also holding offsetting positions and did not sufficiently manage this risk. This risk tends to be greater for dealers that sell CDS protection because no margin and collateral requirements exist to ensure that the selling firm will be able to meet its potential obligations. Also, the potential exposures are greater and more uncertain than the fixed premium payments of a purchaser of CDS protection. Additionally, if a market participant decides to hold a large concentrated position, it could experience significant losses if a credit event occurred for one or more reference entities. 
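To make the margin and collateral challenges described above more concrete, the following minimal sketch shows how a bilateral collateral call might be computed from a portfolio's mark-to-market value, an agreed exposure threshold, and collateral already held. The names, amounts, and terms are hypothetical assumptions, and the calculation is only illustrative; actual credit support arrangements vary by counterparty. When two parties value the same CDS on an ABS or CDO differently, they arrive at different call amounts, which is one source of the valuation and collateral disputes discussed above.

```python
# Minimal sketch of a bilateral collateral call calculation.
# All names, thresholds, and valuations are hypothetical illustrations,
# not any firm's actual credit support terms.

def collateral_call(mark_to_market, threshold, minimum_transfer, collateral_held):
    """Return the additional collateral one party may call for.

    mark_to_market: current replacement value of the CDS portfolio to the caller
    threshold: uncollateralized exposure the caller is willing to tolerate
    minimum_transfer: calls below this amount are not made
    collateral_held: collateral already posted by the counterparty
    """
    exposure = max(mark_to_market - threshold, 0.0)
    call_amount = exposure - collateral_held
    return call_amount if call_amount >= minimum_transfer else 0.0

# The same portfolio valued with each party's own marks produces different calls,
# which the two parties must then reconcile.
call_using_buyer_marks = collateral_call(mark_to_market=25_000_000, threshold=5_000_000,
                                         minimum_transfer=250_000, collateral_held=12_000_000)
call_using_seller_marks = collateral_call(mark_to_market=18_000_000, threshold=5_000_000,
                                          minimum_transfer=250_000, collateral_held=12_000_000)
print(call_using_buyer_marks, call_using_seller_marks)  # 8000000.0 1000000.0
```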
But concentration risk can create problems for market participants even without a credit event involving the reference entity. For example, a market participant may face obligations to post collateral on a large net exposure of CDS if its financial condition changes, potentially resulting in financial distress for the firm. AIG is the most recent example of this problem. When its credit rating was downgraded, the contracts required that it post collateral, contributing to the company's liquidity crisis. Market participants suggested that the degree of risk from concentrated net exposures was tied to the nature of the reference entity or obligation. For example, a concentrated position in CDS on mortgage-related CDOs may present more risk than CDS on a highly rated corporation or U.S. government bonds. Further, concentration risks at one firm may also present challenges to other market participants and the financial system. According to a regulator and an observer, the lack of clear information on the net CDS exposures of market participants makes informed decisions about risk management difficult, a situation that becomes increasingly problematic when a credit event occurs. A regulator also testified that because the CDS market was interconnected, the default of one major participant increased the market and operational risks faced by more distant financial market participants and impacted their financial health. The near-collapse of AIG illustrates the risk from large exposures to CDS. Finally, jump-to-default risk, as it relates to the CDS market, is the risk that the sudden onset of a credit event for the reference entity can create an abrupt change in a firm's CDS exposure. Such a credit event can result in large swings in the value of the CDS and the need to post large and increasing amounts of collateral and ultimately fund the settlement payment on the contract. The default of a reference entity could put capital strain on the CDS seller from increased collateral and payment obligations to settle the contract. For example, because CDS generally are not funded at initiation, a CDS seller may not have provided sufficient collateral to cover the settlement obligations. Other risks and challenges from CDS identified by market participants, observers, and regulators include a lack of transparency in the CDS market, the potential for manipulation related to the use of CDS as a price discovery mechanism, and the use of CDS for speculative purposes. According to some regulators, market participants, and observers, limited transparency or disclosure of CDS market activity may have resulted in the overestimation of risk in the market. Such a lack of transparency may have compounded market uncertainty about participants' overall risk exposures, the concentration of exposures, and the market value of contracts. For example, as mentioned previously, at least one regulator and an observer suggested that it was unclear how the bankruptcy of Lehman Brothers would affect market participants, and this uncertainty contributed to a deterioration of market confidence. More specifically, it was reported that up to $400 billion of CDS could be affected, but the Depository Trust and Clearing Corporation (DTCC) later stated that its trade registry contained $72 billion of CDS on Lehman, and this amount was reduced to about $21 billion in payments after bilateral netting.
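The effect of bilateral netting on the payments actually exchanged after a credit event, as in the Lehman Brothers figures cited above, can be sketched with a few invented contracts. The counterparties, notional amounts, and auction-determined recovery price below are assumptions for illustration and are not drawn from DTCC data.

```python
# Hypothetical illustration of how bilateral netting reduces the payments
# exchanged when a reference entity defaults.

from collections import defaultdict

recovery_price = 0.10   # assumed price set in the credit event auction

# (protection_seller, protection_buyer, notional) on the defaulted reference entity
contracts = [
    ("DealerA", "FundX", 40_000_000),
    ("FundX", "DealerA", 25_000_000),
    ("DealerB", "FundX", 10_000_000),
]

gross_notional = sum(notional for _, _, notional in contracts)

# Net offsetting protection bought and sold between each pair of counterparties.
pair_net = defaultdict(float)
for seller, buyer, notional in contracts:
    a, b = sorted((seller, buyer))
    pair_net[(a, b)] += notional if seller == a else -notional  # positive: a sold protection to b

payments_after_netting = sum(abs(net) for net in pair_net.values()) * (1 - recovery_price)

print(f"Gross notional on the defaulted name: {gross_notional:,.0f}")            # 75,000,000
print(f"Payments owed after bilateral netting: {payments_after_netting:,.0f}")   # 22,500,000
```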
Some market participants suggested that concerns about transparency were even more prevalent with customized CDS products because the contracts were not standardized and their prices were determined using estimates rather than prices from actual transactions. Some regulators and an industry observer suggested the potential existed for market participants to manipulate these prices to profit in other markets that CDS prices might influence, such as the equity market, and that the lack of transparency could contribute to this risk. CDS price information is used by some market participants as an indicator of a company’s financial health. Market participants use spreads on CDS contracts to gauge the financial health and creditworthiness of a firm. However, two regulators and an industry observer suggested that it was unclear whether CDS prices accurately reflected creditworthiness because the market was largely unregulated and the quality of data is questionable in an opaque market. According to testimony by an SEC official in October and November 2008, the lack of transparency in the CDS market also created the potential for fraud, in part because the reporting and disclosure of trade information to the SEC was limited. More specifically, the official testified that a few CDS trades in a relatively low-volume or thin market could increase the price of the CDS, suggesting that an entity’s debt was viewed by the market as weak. Because market participants may use CDS as one of the factors in valuing equities, this type of pricing could adversely impact a reference entity’s share price. One market observer we spoke with offered the following hypothetical example: if the CDS price moves up and the equity price moves down, an investor could profit from holding a short position in the equity by buying protection in the CDS market. The SEC official testified that a mandatory system of record keeping and reporting of all CDS trades to SEC should be used to guard against the threat of misinformation and fraud by making it easier to investigate these types of allegations. However, another regulator suggested that the price discovery role was not a unique role to CDS and that exchange-traded derivatives such as foreign exchange and interest rate derivatives also served a price discovery function. Another challenge identified by regulators and market participants was the frequent use of CDS for speculative purposes, an issue that has raised some concerns among some regulators and industry observers. Some have suggested that the practice should be banned or in some way restricted. However, other regulators and market participants disagree and note that speculators in the CDS market provide liquidity to the market and facilitate hedging. Many of the concerns stem from uncovered or “naked” CDS positions, or the use of CDS for speculative purposes when a party to a CDS contract does not own the underlying reference entity or obligation. Because uncovered CDS can be used to profit from price changes, some observers view their function as speculation rather than risk transfer or risk reduction. For example, one regulatory official stated that these transactions might create risks, because speculative users of CDS have different incentives than other market participants. In addition, one regulator stated that when participants used CDS for speculative purposes, there was no direct transfer or swap of risk. Instead, the transaction creates risk from which the participant aims to profit. 
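A rough set of invented numbers may help illustrate the hypothetical example the market observer described. The share prices, CDS spread, position sizes, and holding period below are assumptions, and the sketch ignores transaction costs and any price impact of the trades themselves.

```python
# Hypothetical numbers illustrating the observer's example: an investor shorts a
# stock and buys CDS protection in a thin market; if the wider CDS spread is read
# as deteriorating credit and the share price falls, the short position profits.

shares_short = 100_000
entry_price = 40.00
price_after_spread_widening = 36.00      # assumed equity reaction to the wider CDS spread

cds_notional = 5_000_000
spread_paid_bps = 300                    # annual premium paid for protection, in basis points
holding_period_years = 0.25

equity_gain = shares_short * (entry_price - price_after_spread_widening)
cds_premium_cost = cds_notional * (spread_paid_bps / 10_000) * holding_period_years

print(f"Gain on equity short: {equity_gain:,.0f}")                              # 400,000
print(f"Premium paid on CDS protection: {cds_premium_cost:,.0f}")               # 37,500
print(f"Net profit in this hypothetical: {equity_gain - cds_premium_cost:,.0f}")
```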
Market participants also noted that the risks associated with CDS did not stem from their use for speculation but from a failure to manage the risks, particularly CDS of ABS. Market participants and an observer also explained that a restriction on uncovered CDS would create a market bias in favor of protection buyers, because it is easier for them to hold a covered position. This bias could impact the liquidity of the market, because trading would be confined to those with an exposure to the referenced entity. Finally, market participants noted that firms used CDS to manage risks from many economic exposures in addition to risks such as counterparty credit exposures that arise from holding the underlying reference obligation. In addition to CDS, we also explored whether other products posed similar risks and challenges. Regulators and market participants identified a number of other OTC derivatives that presented similar risks and challenges, such as counterparty credit risk and operational risk. These OTC derivative products include interest rate, foreign exchange, and commodity derivatives. While the types of risk may be similar, the degree of risk can vary. However, equity derivatives specifically were identified as the OTC derivatives that are most similar to CDS in terms of the risks and challenges that they presented. OTC equity derivatives, such as equity swaps and options, were said to be similar to CDS because of the potential for abrupt shifts in exposure, a lack of transparency, and the ability to customize the product. Nevertheless, according to regulators and industry observers, the CDS market differs from other OTC derivatives markets because it poses greater risks due to the potential for greater increases in payment obligations and larger impacts from life-cycle events such as those associated with jump-to-default risk. Financial regulators and the industry have initiated several efforts to begin addressing some of the most important risks posed by CDS and similar products, particularly operational and counterparty credit risks. These efforts include improving the operational infrastructure of CDS markets, implementing a clearinghouse or central counterparty to clear CDS trades, and establishing a central trade registry for CDS. If implemented effectively and sustained, the recent initiatives could begin to address some of the risks related to the use of CDS. However, their effectiveness will likely be constrained by two factors. First, participation in a clearinghouse and central trade registry is generally voluntary. And second, the efforts would not include the more customized and highly structured CDS that can include CDS on complex reference entities that may pose significant risks to institutions and financial markets. A number of other reforms to the CDS market have surfaced but face challenges. These include mandatory clearing or restricting CDS trades. Finally, OTC derivatives that share some of the risks related to CDS could benefit from similar efforts to mitigate their impact. Financial regulators and market participants have recently taken steps to try to address risks posed by CDS. The efforts have focused on three main areas: (1) operational and infrastructure improvements, (2) creation of a central trade repository, and (3) development of clearinghouses to clear CDS contracts. Regulators and industry members have cooperated since 2005 on four projects to identify and address operational risks posed by CDS. 
In addition to managing operational risks from CDS, several of these efforts should assist participants in managing counterparty credit risks in general. First, the industry has worked to reduce the backlog of CDS processing events, including unconfirmed trades. In 2005, a joint regulatory initiative involving U.S. and foreign regulators directed major CDS dealers to reduce the backlog of unconfirmed trades and address the underlying causes of these backlogs. In response, market participants increased the use of electronic confirmation platforms. Since November 2006, most CDS trades have been confirmed electronically through an automated confirmation system known as Deriv/Serv. By increasing automation and requiring end users to obtain counterparty consent before assigning trades, dealers were able to significantly reduce the number of total confirmations outstanding. As a result of these efforts to improve trade processing, many participants view the CDS market as the most automated among OTC derivatives. Second, the industry has sought to improve novation, the process whereby a party to a CDS trade transfers, or assigns, an existing CDS obligation to a new entity. In 2005, the joint regulatory initiative suggested that the novation process had contributed to the large backlog of unconfirmed trades, because the assignment of trades to new parties often occurred without the consent of the original counterparty. In such cases, a party to a CDS contract might not be aware of the identity of its new counterparty, possibly increasing operational and counterparty credit risks. To streamline the novation process, ISDA introduced a novation protocol in 2005 that required counterparty consent before assigning a trade. However, until recently, parties to the novation communicated using phone and e-mail, both of which can be inaccurate and inefficient. More recently, the industry has committed to processing all novation consents for eligible trades through electronic platforms. Third, the industry has attempted to reduce the amount of outstanding trades via "portfolio compression." In 2008, a Federal Reserve initiative resulted in a working group of dealers and investors that collaborated with the industry trade group ISDA to pursue portfolio compression of CDS trades. The process involves terminating an existing group of similar trades and replacing them with fewer "replacement trades" that have the same risk profiles and cash flows as the initial portfolio, thus eliminating economically redundant trades. According to FRBNY, the compression of CDS trades results in lower outstanding notional amounts and helps to reduce counterparty credit exposures and operational risk. By the end of October 2008, FRBNY reported that trade compression efforts had reduced the notional amount of outstanding CDS by more than one-third. Finally, the industry has taken steps to implement a cash settlement protocol for CDS contracts. CDS contracts traditionally used physical settlement that required a protection buyer to deliver the reference obligation in order to receive payment. Because many CDS are uncovered, the protection buyer would have to buy the underlying reference obligation in order to deliver it, potentially causing buyers to bid up prices and limiting the profits from protection and speculation. To address this concern, ISDA developed protocols to facilitate cash settlement of CDS contracts.
The cash settlement protocols rely on auctions to determine a single price for defaulted reference obligations that is then used to calculate the amounts to be paid at settlement. This process has been used to settle CDS contracts involved in recent credit events, including those involving Lehman Brothers, Washington Mutual, Fannie Mae, and Freddie Mac. In November 2006, DTCC created the TIW to serve as the industry's central registry for CDS. TIW contains an electronic record of most CDS trades, and DTCC and market participants plan to increase its coverage. In addition to placing most new trades in TIW, CDS dealers and other market participants also plan to submit existing and eligible CDS trades to TIW. TIW helps to address operational risks and transparency concerns related to the CDS market. For example, according to DTCC, it helps mitigate operational risk by reducing errors in reporting, increases transparency by maintaining up-to-date contract information, promotes the accuracy of CDS-related information, and simplifies the management of credit events. TIW also facilitates operational improvements such as automated life-cycle processing by interacting with electronic platforms for derivatives trades such as Deriv/Serv. Additionally, TIW should assist regulators in monitoring and managing concentration risk from CDS. Although regulators can receive CDS-related information from their regulated entities, no regulator has the ability to receive this information from all market participants, and no single comprehensive source of data on the CDS market exists. However, a central trade repository that contains information on all CDS trades will allow regulators to monitor large positions of market participants and identify large and concentrated positions that may warrant additional attention. TIW also has helped to address some concerns about CDS market transparency by providing aggregate information on CDS trades. The information includes gross and net notional values for contracts on the top 1,000 underlying CDS single-name reference entities and all indexes and is updated weekly. Despite the important benefits provided by TIW, several factors limit its usefulness as a tool to monitor the overall market. First, TIW does not include all CDS trades, particularly those that cannot be confirmed electronically. For example, TIW cannot fully capture all customized trades, such as CDS referencing ABS and CDOs, including those related to mortgages. While DTCC officials believed that TIW includes a large portion of CDS trades, they noted that they could not be certain because the size and composition of the entire market remain unknown. Second, TIW currently has no regulatory oversight to ensure the quality of the data, and regulators lack the authority to require that all trades be included in TIW, particularly those of nonbanks. A clearinghouse can reduce risks associated with CDS, including counterparty credit risks, operational risks, and concentration risks, while also improving transparency. A clearinghouse acts as an intermediary to ensure the performance of the contracts that it clears. For CDS, market participants would continue to execute trades as bilateral OTC contracts. However, once registered with the clearinghouse, the CDS trade would be separated into two contracts, with the clearinghouse serving as the counterparty in each trade.
That is, the clearinghouse would have a separate contractual arrangement with both counterparties of the original CDS contract and serve as the seller to the initial buyer and the buyer to the initial seller. In this way, a clearinghouse would assume the counterparty credit risk for all of the contracts that it cleared. If a clearinghouse is well-designed and its risks are prudently managed, it can limit counterparty credit risk by absorbing counterparty defaults and preventing transmission of their impacts to other market participants. Clearinghouses are designed with various risk controls and financial resources to help ensure that they can absorb counterparty failures and other financial losses. For example, clearinghouses impose standard margin requirements and mark positions to market on a daily basis. They also have other financial safeguards that typically include capital requirements, guaranty funds, backup credit lines, and the ability to call on capital from member firms, which often are large financial institutions. A clearinghouse also can help to standardize margin and collateral requirements. It can impose more robust risk controls on market participants and assist in the reduction of CDS exposures through multilateral netting of trades. In doing so, it would facilitate the compression of market participants’ exposures across positions and similar CDS products, thereby reducing the capital needed to post margin and collateral. A clearinghouse also can help to address operational and concentration risks and improve CDS transparency. Market participants suggested that a clearinghouse would help to centralize market information and could facilitate the processing of CDS trades on electronic platforms. It can also help limit concentration risk through standardized requirements for margin collateral that may help reduce the leverage imbedded in CDS contracts and thus place limits on a firm’s ability to amass a large net exposure selling CDS. Finally, according to some regulators and prospective clearinghouses, a clearinghouse could improve CDS transparency by releasing information on open interest, end-of-day prices, and trade volumes. However, like the other options for improving the CDS market, only certain standardized trades would be cleared by a clearinghouse, and market participants would decide which trades to submit for clearing. A clearinghouse can only clear trades with a sufficient level of standardization because the more customized the contract, the greater the risk management and operational challenges associated with clearing it. Initially, the proposed clearinghouses will clear standard-index CDS and some highly traded single-name corporate CDS. Regulators and market participants suggested that risks from more complex and structured CDS would have to be addressed outside of clearinghouses. One market participant volunteered that it would not be opposed to collateral requirements for CDS that were not cleared through a clearinghouse. Further, because clearing is voluntary, it is unclear what portion of CDS will be cleared and whether this volume will be sufficient to support the clearinghouses. Regulators and market participants suggested that robust risk management practices were critical for clearinghouses because clearinghouses concentrated counterparty credit and operational risk and CDS presented unique risks. Failure to sufficiently manage these risks could threaten the stability of financial markets and major institutions if a clearinghouse were to fail. 
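The novation structure described above, in which one bilateral contract is replaced by two contracts facing the clearinghouse, can be sketched as follows. The contract fields, names, and flat margin rate are simplifying assumptions; actual central counterparties use risk-based margin models rather than a fixed percentage of notional.

```python
# Minimal sketch of central-counterparty novation: one bilateral CDS contract is
# replaced by two contracts facing the clearinghouse, which collects initial
# margin from both sides. Structure and margin rate are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CDSContract:
    protection_buyer: str
    protection_seller: str
    reference_entity: str
    notional: float

def novate_to_ccp(trade: CDSContract, ccp: str = "Clearinghouse"):
    """Split a bilateral trade into two contracts that face the clearinghouse."""
    buyer_leg = CDSContract(trade.protection_buyer, ccp, trade.reference_entity, trade.notional)
    seller_leg = CDSContract(ccp, trade.protection_seller, trade.reference_entity, trade.notional)
    return buyer_leg, seller_leg

def initial_margin(trade: CDSContract, margin_rate: float = 0.05):
    """Illustrative flat-rate initial margin; real clearinghouses use risk-based models."""
    return trade.notional * margin_rate

original = CDSContract("DealerA", "DealerB", "XYZ Corp", 25_000_000)
buyer_leg, seller_leg = novate_to_ccp(original)

print(buyer_leg)    # clearinghouse sells protection to the original buyer
print(seller_leg)   # clearinghouse buys protection from the original seller
print(f"Margin collected from each member: {initial_margin(original):,.0f}")  # 1,250,000
```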
In addition, if jump-to-default risk is not sufficiently managed through margin requirements and other methods, it has the potential to create significant losses for the clearinghouses. According to market participants, the jump-to-default risk posed by CDS makes determining sufficient margin requirements difficult. If a required level of margin is considered too high, whether justified or not, market participants may be less likely to use the clearinghouse. Although several groups have announced plans to create clearinghouses for CDS, none of the groups currently are clearing trades. First, as part of their efforts over the past year to improve the CDS market, FRBNY and several other regulators encouraged the industry to introduce central clearing of CDS contracts. The industry previously had begun moving toward the creation of a clearinghouse, and in July 2008, after FRBNY encouraged firms to develop clearinghouse proposals, several major dealers committed to launching a clearinghouse by December 2008. None are currently operational, however. At least four groups have developed clearinghouse options for CDS, two in the United States (IntercontinentalExchange and CME Group) and two in Europe (LIFFE and Eurex Clearing). LIFFE opened for clearing in December 2008 but has had virtually no business as of February 2009. Market participants and regulators identified advantages and disadvantages associated with having multiple clearinghouses clear CDS contracts. Some regulators noted that there could be advantages to having multiple clearinghouses at the early stages of development, particularly related to competition in designing and developing them. In addition, one market participant noted that with multiple clearinghouses the concentration of risk could be spread across multiple platforms. However, market participants suggested that having multiple clearinghouses raised concerns about regulatory consistency in setting standards and in ongoing monitoring, especially across clearinghouses in the United States and abroad. Market participants also indicated that multiple clearinghouses would create inefficiencies and remove some of the advantages gained from multilateral netting, because no single clearinghouse would enjoy the benefit of a complete portfolio of CDS. Moreover, participants would have to post collateral in multiple venues. Under current law, a clearing organization for CDS—or other OTC derivatives—must be regulated, but any of several regulators may provide that oversight. FRS, CFTC, and SEC all have played a role in establishing a clearinghouse, including reviewing proposals seeking regulatory approval. CME is registered as a derivatives clearing organization with CFTC. ICE has established its clearinghouse in a subsidiary that is an FRS member bank—ICE Trust. LIFFE is regulated by FSA, and Eurex is overseen by the German Federal Financial Supervisory Authority. SEC has determined that the act of clearing CDS through a clearinghouse may result in the contracts being considered securities subject to the securities laws. To facilitate the clearing and settlement of CDS by clearinghouses, SEC issued an interim final rule on temporary and conditional exemptions in January 2009. SEC stated that the conditions of these exemptions would allow the agency to oversee the development of the centrally cleared CDS market and CDS exchanges and to take additional action as necessary. SEC has determined that LIFFE has met the conditions for the temporary exemptions from registration under the securities laws.
The exemption expires in September 2009, at which time SEC officials believe they will be better situated to evaluate how these exemptions apply to the cleared CDS market. Given the overlapping jurisdiction and lack of regulatory clarity, FRS, CFTC, and SEC have signed a memorandum of understanding to ensure that each regulator applies similar standards across the different clearinghouse efforts. According to the regulators, the purpose of the memorandum is to foster cooperation and coordination of their respective approvals, ongoing supervision, and oversight of clearinghouses for CDS. Moreover, some said that the memorandum would help to prevent an individual regulator from taking a softer approach in its monitoring and oversight of required standards for clearinghouses, which could encourage more participants to use the less rigorously regulated clearinghouse. However, another regulator suggested that the memorandum still might not guarantee consistent application of clearinghouse standards and requirements, because each regulator had a different mission and approach to regulation. Market participants identified several disadvantages related to the current state of oversight for clearinghouses. Some market participants suggested that there had been a lack of clarity and certainty regarding oversight of clearinghouses because of the involvement of multiple regulators. As noted, some market participants questioned whether consistent standards and oversight would be applied across clearinghouses. Market participants and one regulator noted the importance of coordinating oversight internationally to ensure consistent global standards and mitigate the potential for regulatory arbitrage. Finally, some market participants suggested that having multiple regulators for a clearinghouse created the potential for regulatory overlap and related inefficiencies. Market observers and others have proposed other ideas to address concerns related to CDS, including (1) mandatory clearing, (2) mandatory exchange trading, (3) a ban on uncovered CDS, and (4) mandatory reporting of CDS trades. While these proposals would address some perceived problems with CDS markets, sources we interviewed identified important limitations and challenges for each of them. Mandatory clearing would ensure that CDS contracts benefited from the advantages of a clearinghouse, but regulators, market participants, and market observers explained that highly customized CDS would be impossible to clear because they lack the needed standardization. Mandatory exchange trading could offer improved price transparency and the benefits of clearing. But some market observers indicated that some CDS that were illiquid could not support an exchange and that the standardization of contracts would limit CDS’ risk management benefits. Banning or otherwise restricting uncovered CDS could limit activity that some observers believe contributed to the recent distress of financial institutions, yet proponents of uncovered CDS argue that banning these contracts would severely limit market liquidity and eliminate a valuable tool for hedging credit risk. Finally, some regulators and market observers believe that mandatory reporting of CDS trades to a central registry would increase transparency and provide greater certainty that information on all CDS was being captured in one place. 
However, some market participants suggested that detailed reporting of CDS trades should be limited to regulators so that positions were not exposed publicly, and some participants explained that a similar reporting system for bond markets had had adverse consequences that stifled that market. Regulators and the industry have initiated efforts to improve the operational infrastructure of OTC derivatives in general. However, each product has unique challenges because of differences in market maturity, volumes, and users, among other things. Despite these unique challenges, regulators, market participants, and observers told us that OTC derivatives generally shared similar risks, such as operational and counterparty credit risks, and would benefit from initiatives to address those risks. As part of their efforts to improve the operational infrastructure of OTC derivatives markets, market participants have identified seven high-level goals: global use of clearinghouse processing and clearing; continuing portfolio compression efforts; electronic processing of eligible trades (targets of the effort include equity, interest rate, and foreign exchange derivatives); elimination of material confirmation backlogs; risk mitigation for paper trades that are not electronically confirmed; streamlined trade life-cycle management; and central settlement for eligible transactions. Some other OTC derivatives may also benefit from reductions in the amount of outstanding trades through portfolio compression efforts. FRBNY officials stated that they are looking at other OTC derivatives that have a critical mass of outstanding trades to determine whether they would benefit from compression. To the extent that further regulatory actions are explored for other OTC derivatives, regulators must consider the risks and characteristics of each class of OTC derivatives before taking additional actions. In closing, I would like to provide some final thoughts. While CDS have received much attention recently, the rapid growth in this type of OTC derivative more generally illustrates the emergence of increasingly complex products that have raised regulatory concerns about systemic risk. Bank regulators may have some insights into the activities of their supervised banks that act as derivatives dealers, but CDS, like OTC derivatives in general, are not regulated products, and the transactions are generally not subject to regulation by SEC, CFTC, or any other U.S. financial regulator. Thus, CDS and other OTC derivatives are not subject to the disclosure and other requirements that are in place for most securities and exchange-traded futures products. Although recent initiatives by regulators and industry have the potential to address some of the risks from CDS, these efforts are largely voluntary and do not include all CDS contracts. In addition, the lack of consistent and standardized margin and collateral practices continues to make managing counterparty credit risk and concentration risk difficult and may allow systemically important exposures to accumulate without adequate collateral to mitigate associated risks. This area is a critical one and must be addressed going forward. The gaps in the regulatory oversight structure of, and regulations governing, financial products such as CDS allowed these derivatives to grow unconstrained, and little analysis was done on the potential systemic risk created by their use.
Regulators of major CDS dealers may have had some insights into the CDS market based on their oversight of these entities, but they had limited oversight of nonbank market participants, such as hedge funds, or subsidiaries of others like AIG, whose CDS activities partly caused its financial difficulties. This fact clearly demonstrates that risks to the financial system and even the broader economy can result from institutions that exist within the spectrum of supervised entities. Further, the use of CDS creates interconnections among these entities, such that the failure of any one counterparty can have widespread implications regardless of its size. AIG Financial Products, which had not been closely regulated, was a relatively small subsidiary of a large global insurance company. Yet the volume and nature of its CDS business made it such a large counterparty that its difficulty in meeting its CDS obligations not only threatened the stability of AIG but of the entire financial system as well. Finally, I would briefly like to mention what the current issues involving CDS have taught us about systemic risk and our current regulatory system. The current system of regulation lacks broad authority to monitor, oversee, and reduce risks to the financial system that are posed by entities and products that are not fully regulated, such as hedge funds, unregulated subsidiaries of regulated institutions, and other non-bank financial institutions. The absence of such authority may be a limitation in identifying, monitoring, and managing potential risks related to concentrated CDS exposures taken by any market participant. Regardless of the ultimate structure of the financial regulatory system, a systemwide focus is vitally important. The inability of the regulators to monitor activities across the market and take appropriate action to mitigate them has contributed to the current crisis and the regulators’ inability to effectively address its fallout. Any regulator tasked with a systemwide focus would need broad authority to gather and disclose appropriate information, collaborate with other regulators on rule making, and take corrective action as necessary in the interest of overall financial market stability, regardless of the type of financial product or market participant. For further information about this testimony, please contact Orice M. Williams on (202) 512-8678 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Karen Tremba, Assistant Director; Kevin Averyt, Nadine Garrick, Akiko Ohnuma, Paul Thompson, and Robert Pollard. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. financial system is more prone to systemic risk today because (1) the current U.S. financial regulatory system is not designed to adequately oversee today's large and interconnected financial institutions, (2) not all financial activities and institutions fall under the direct purview of financial regulators, and (3) market innovations have led to the creation of new and sometimes complex products that were not envisioned as the current regulatory system developed. Credit default swaps (CDS) are one of the products that have assumed a key role in financial markets. My statement will discuss (1) the extent to which U.S. financial regulators and the U.K. regulator oversee CDS, (2) risks and challenges that CDS present to the stability of financial markets and institutions and similar concerns that other products may pose, and (3) the recent steps that financial regulators and the industry have taken to address risks posed by CDS and similar efforts that may be warranted for other financial products. GAO reviewed research studies and congressional testimonies. We interviewed financial regulators and a variety of financial market participants. In January 2009, GAO designated the financial regulatory system as a high-risk area in need of congressional attention. Issues involving systemic risk regulation in general and CDS in particular should be considered as part of that effort. The current regulatory structure for CDS does not provide any one regulator with authority over all participants in the CDS market, making it difficult to monitor and manage potential systemic risk. Federal oversight of CDS trading and monitoring of the CDS market are largely conducted through the banking regulators' safety and soundness oversight of supervised banks that act as CDS dealers. The Securities and Exchange Commission and the Commodity Futures Trading Commission lack the authority to regulate CDS broadly as financial products. Regulators have sought to address potential systemic risks arising from CDS activities mainly through collaborative efforts with other supervisors and key market participants. However, the extent to which regulators routinely monitor the CDS activity of unregulated market participants is unclear. The Financial Services Authority in the United Kingdom has authority over most CDS products and can collect information about the CDS market, but it has pursued most of its regulatory efforts in collaboration with U.S. regulators. CDS pose a number of risks to institutions and markets, many of which are not unique. These include counterparty credit, operational, concentration, and jump-to-default risks. Market participants and observers noted that CDS referencing asset-backed securities (ABS) and collateralized debt obligations (CDOs), particularly those related to mortgages, currently pose greater risks to institutions and markets than other types of CDS. Other risks and challenges from CDS relate to the lack of transparency in CDS markets, the potential for manipulation related to the use of CDS as a price discovery mechanism, and the use of CDS for speculative purposes. Regulators and market participants noted that over-the-counter (OTC) derivatives, to varying degrees, may pose some similar risks, and a few identified equity derivatives as the OTC derivatives that were most similar to CDS. Financial regulators and market participants have initiated several efforts to mitigate these risks.
These efforts target primarily operational and counterparty credit risks and include improving the operational infrastructure of CDS markets, creating a clearinghouse or central counterparty process to clear CDS trades, and establishing a central trade registry for CDS. If effectively implemented and sustained, these initiatives could begin to address some of the risks noted. But the effectiveness of these recent initiatives could be limited because participation is voluntary and regulators lack the authority to require all market participants to report their trades to a repository. Moreover, customized and highly structured CDS, which can include CDS with complex reference entities that may present additional risks, generally lack the standardization necessary for centralized clearing. Other ideas to reform CDS markets, such as mandatory clearing or limiting some types of trades, have important limitations that would need to be addressed. Finally, many participants and observers agreed that OTC derivatives other than CDS generally share some of the same risks and could benefit from similar efforts to mitigate their impact.
In March 2003 DHS assumed operational control of about 209,000 civilian and military positions from 22 agencies and offices. Not since the creation of the Department of Defense in 1947 has the federal government undertaken a transformation of this magnitude. As we have previously reported, such a transformation poses significant management and leadership challenges, including those associated with coordinating and facilitating the sharing of information, both among its component agencies and with other entities, and integrating numerous mission support, administrative, and infrastructure IT systems. Critical to DHS’s ability to meet this challenge is the establishment of an effective IT governance mechanism, including IT plans, processes, and people. The Homeland Security Act of 2002 created DHS by merging agencies that specialize in one or more interrelated and interdependent aspects of homeland security, such as intelligence analysis, law enforcement, border security, transportation security, biological research, critical infrastructure protection, and disaster recovery. DHS is in the early stages of transforming and integrating this disparate group of agencies with multiple missions, values, and cultures into a strong and effective cabinet department. The effective interaction, integration, and synergy of these agencies are critical to homeland security mission performance. DHS’s mission is to lead the unified national effort to secure America by preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the nation. DHS also is to ensure safe and secure borders, welcome lawful immigrants and visitors, and promote the free flow of commerce. To accomplish this, the Homeland Security Act established five under secretaries with responsibilities over directorates for management, science and technology, information analysis and infrastructure protection, border and transportation security, and emergency preparedness and response (see fig. 1). In addition to these directorates, the U.S. Secret Service and the U.S. Coast Guard continue as distinct entities within DHS. Each DHS directorate is responsible for its specific homeland security mission area and for coordinating related efforts with its sibling components, as well as other external entities. Within the Management directorate is the Office of the CIO, which is expected to enhance mission success by leveraging best available information technologies and technology-management practices, provide shared services and coordinate acquisition strategies to minimize cost and improve consistency, support executive leadership in performance-based management by maintaining an enterprise architecture that is fully integrated with other management processes, and advocate and enable business transformation in support of enhanced homeland security. Other DHS entities also are responsible, or share responsibility, for critical information and technology management activities. For example, within DHS’s major organizational offices (e.g., the directorates) are CIOs and IT organizations. Control over the department’s IT budget is vested primarily with the CIO organizations within each of its component organizations, and the component CIO organizations are accountable to the heads of DHS’s respective organizational components. 
Moreover, we have previously reported on the responsibilities held by various DHS directorates to ensure successful information sharing within the department and between federal agencies, state and local governments, and the private sector. The DHS CIO established a CIO Council, chaired by the CIO and composed of component-level CIOs, that serves as a focal point for coordinating challenges that cross agency boundaries. According to its charter, the specific functions of the DHS CIO Council include establishing a strategic plan and setting priorities for departmentwide IT; defining and continuously improving DHS IT governance structures; advancing DHS IT priorities through well-defined road maps; identifying opportunities for sharing resources, coordinating multibureau projects and programs, and consolidating activities; and developing and executing formal communication programs for internal and external constituencies. As we have previously reported, information and technology management is a key element of management reform efforts that can help dramatically reshape government to improve performance and reduce costs. Accordingly, it is critical that agencies manage their information resources effectively, taking into account the need to address planning, processes, and people. Key components of an effective information and technology management structure include (1) IT strategic planning, (2) enterprise architecture, (3) IT investment management, (4) systems development and acquisition management, (5) information security management, (6) information management, and (7) IT human capital management (see fig. 2). Moreover, effective implementation of information and technology management recognizes the interdependencies among these processes. Illustrations of some of these relationships are as follows: IT strategic planning defines what an agency seeks to accomplish and identifies the strategies that it will use to achieve desired results. The IT strategic plan, which is the outcome of this effort, is executed using the processes established through the other components of the information and technology structure, such as IT investment management. An organization's IT human capital approach must be aligned to support the mission, vision for the future, core values, goals and objectives, and strategies, which may be found in the IT strategic plan and the enterprise architecture. IT human capital management, in turn, ensures that the right people are in place with the right skills to perform critical system acquisition functions. The enterprise architecture is an integral component of the IT investment management process because an organization should approve only those investments that move the organization toward the target architecture. A critical aspect of systems acquisition and development management is ensuring that robust information security is built into the projects early and is periodically revisited. Privacy—a component of information management—should be a consideration when acquiring and developing systems. For example, the E-Government Act of 2002 requires agencies to conduct privacy impact assessments before developing or acquiring IT systems that collect, maintain, or disseminate information that is personally identifiable to an individual. Such assessments would, in part, include what information is being collected, why it is being collected, and its intended use.
In addition, ensuring that such personally identifiable data is secured against risks such as loss or unauthorized access, destruction, use, modification, or disclosure is an internationally recognized privacy principle. DHS has recognized the importance of information and technology management to achieving its mission. In February of this year, it issued its first strategic plan, which outlines seven strategic goals. One of these goals is organizational excellence, which includes information and technology management objectives related to privacy and security and electronic government modernization and interoperability initiatives. In addition, at its various components, DHS has numerous ongoing major systems development and acquisition initiatives related to meeting mission needs, such as the following:
Border and Transportation Security Directorate. The Automated Commercial Environment (ACE) project is to be a new trade processing system.
Border and Transportation Security Directorate. CAPPS II is to identify airline passengers who pose a security risk before they reach the passenger screening checkpoint.
Border and Transportation Security Directorate. The Student Exchange Visitor Information System (SEVIS) is expected to manage information about nonimmigrant foreign students and exchange visitors from schools and exchange programs.
Border and Transportation Security Directorate. The United States Visitor and Immigrant Status Indicator Technology (US-VISIT) is a governmentwide program intended to improve the nation's capacity for collecting information on foreign nationals who travel to the United States, as well as control the pre-entry, entry, status, and exit of these travelers.
Coast Guard. Rescue 21 is to replace the Coast Guard's 30-year-old search and rescue communication system.
Science and Technology Directorate. Project SAFECOM has the overall objective of achieving national wireless communications interoperability among first responders and public safety systems at all levels of government.
In the 18 months that it has been in operation, DHS has taken steps to institute key elements of an effective information and technology management structure. However, DHS's progress has been mixed in that some elements are further advanced than others and there is still considerable work remaining to institutionalize each of the areas across the department. An example of the former is that DHS established several key practices related to building an effective IT investment management process, whereas fundamental activities in the IT human capital area have not been started. IT strategic planning can serve as an example of the considerable amount of work remaining within individual elements of the information and technology management structure. Specifically, although DHS issued a draft IRM strategic plan this past March, it and other strategic planning documents do not contain sufficient information regarding the department's IT goals, how it will achieve them, and when it expects that significant activities will be completed. DHS's mixed progress is not unexpected given the diversity of the inherited agencies and the size and complexity of the department and the daunting hurdles that it faces in integrating the systems and IT management approaches of its many organizational components.
Nevertheless, new and existing IT investments continue to be pursued without a fully defined and implemented departmentwide governance structure, which increases the risk that they will not completely or optimally support the department's mission and objectives. To address the risks associated with DHS's departmental structures and specific IT investments, we have made recommendations to the DHS CIO and other responsible entities—such as the Coast Guard and TSA—to help the department successfully overcome its information and technology management challenge. In some cases, the department has implemented or begun to implement these recommendations. Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. In addition, the Paperwork Reduction Act requires that agencies indicate in strategic IRM plans how they are applying information resources to improve the productivity, efficiency, and effectiveness of government programs. Further, Office of Management and Budget (OMB) Circular A-130 states that strategic IRM plans should support agency strategic plans and provide a description of how IRM helps accomplish agency missions. This plan serves as a vision or road map for implementing effective management controls and marshalling resources in a manner that will facilitate leveraging of IT to support mission goals and outcomes. It should be tied to and support the agency strategic plan and provide for establishing and implementing IT management processes. DHS's draft IRM strategic plan, dated March 2004, provides a high-level description of how IT supports the goals of the agency's strategic plan. According to the draft plan, although the department's component agencies have advanced their separate uses of information technology and services, serious gaps exist between the current and target environment necessary to support effective integration of information and collaboration of actions. The plan goes on to discuss steps taken in the investment management, enterprise architecture, and security disciplines. The draft IRM plan also cites eight DHS CIO Council priorities for 2004; namely, (1) information sharing, (2) mission rationalization, (3) IT security, (4) one IT infrastructure, (5) enterprise architecture, (6) portfolio management, (7) governance, and (8) IT human capital. DHS is in the process of developing road maps for each of the CIO Council's priorities. These road maps are currently in draft and generally include a description of the current condition of the area, the need for a change, the planned future state, initiatives, and barriers. Currently, neither the draft IRM strategic plan nor the draft priority area road maps contain sufficient information regarding the department's IT goals and performance measures, when the department expects that significant activities will be completed, and the staff resources necessary to implement these activities. For example: Neither the draft IRM strategic plan nor the draft road maps include fully defined goals and performance measures. Leading organizations define specific goals, objectives, and measures, use a diversity of measurement types, and describe how IT outputs and outcomes affect organizational customer and agency program delivery requirements.
In addition, the Paperwork Reduction Act and the Clinger-Cohen Act of 1996 require agencies to establish goals and performance measures on how information and technology management contributes to program productivity, the efficiency and effectiveness of agency operations, and service to the public. The draft IRM plan does not include milestones for when major information and technology management activities will be initiated or completed. In addition, the milestones in the draft road maps are generally vague (e.g., using terms like short term and long term without defining them, or citing specific months without a year). Without milestone information, meaningful measurement of progress is not possible. This is particularly important since DHS did not always meet the target dates laid out by the CIO in February 2003. For example, the CIO planned to introduce a balanced scorecard for the DHS IT community in the department's first year. Although the draft IRM strategic plan states that the DHS CIO Council has endorsed the use of a balanced scorecard approach, as of mid-July, this scorecard had not been developed. The plan does not address whether, or to what extent, DHS has staff with the relevant skills to achieve its target environment and, if it does, whether they are allocated appropriately. This is particularly important since the DHS CIO Council has targeted IT human capital as a priority area and, according to the draft road map document associated with this priority, DHS is facing such issues as an aging IT workforce and too little investment in continuous learning. The DHS CIO noted that the draft IRM strategic plan, the department's initial attempt at IT strategic planning, was primarily intended to meet OMB's requirements that a plan be developed. He further stated that through the development of the road maps, DHS is defining the operational details for its IT priority areas, which, in turn, will be used to update and improve the next version of the IRM plan. In responding to a draft of this report, DHS stated that the CIO intends to issue an IT strategic plan before the end of the calendar year and that, over the next few months, each priority area will develop goals, performance measures, and time lines for implementation. A key emphasis of version 1.0 of the DHS draft IRM plan is its recognition of the importance of the department's integration efforts and its description of its plan to implement a single IT infrastructure. In particular, to maximize its mission performance, DHS faces the enormous task of integrating and consolidating a multitude of systems. This includes exploiting opportunities to eliminate and consolidate systems in order to improve mission support and reduce system costs. We recently reported that DHS is in the process of developing its systems integration strategy and that, in the interim, the department has taken steps to address the integration of ongoing and planned component IT investments and their alignment with its evolving strategic IT management framework. However, we concluded that while these steps have merit, they do not provide adequate assurance of strategic alignment across the department. For example, one step simply continued the various approaches that produced the diverse systems that the department inherited, while another relied too heavily on oral communication about complex IT strategic issues that are not yet fully defined.
Thus, DHS has an increased risk that its component agencies’ ongoing investments, collectively costing billions of dollars in fiscal year 2004, will need to be reworked at some future point to be effectively integrated and to maximize departmentwide value. Moreover, we reported that the DHS CIO does not have authority and control over departmentwide IT spending, even though such control is important for effective systems integration. According to our research on leading private and public sector organizations and experience at federal agencies, leading organizations adopt and use an enterprisewide approach under the leadership of a CIO or comparable senior executive who has the responsibility and authority, including budgetary and spending control, for IT across the entity. To help DHS better manage the risks that it faces, we made several recommendations, including that the Secretary examine the sufficiency of IT spending authority vested in the CIO and take appropriate steps to correct any limitations in authority that constrain the CIO’s ability to effectively integrate IT investments in support of departmentwide mission goals. In commenting on a draft of this report, DHS did not address whether it would implement these recommendations. Effective use of enterprise architectures is a trademark of successful public and private organizations. For a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to a challenging goal: establishing agency operational structures that are optimally defined in both business and technological environments. The Congress, OMB, and the federal CIO Council have also recognized the importance of an architecture-centric approach to modernization. The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of IT architectures. This should provide a means of managing the integration of business processes and supporting systems. Further, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. Generally speaking, an enterprise architecture connects an organization’s strategic plan with program and system solution implementations by providing the fundamental information details needed to guide and constrain implementable investments in a consistent, coordinated, and integrated fashion. An enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., federal department) or a functional or mission area that cuts across more than one organization (e.g., homeland security). This picture consists of snapshots of both the enterprise’s current or “As Is” operational and technological environment and its target or “To Be” environment, as well as a capital investment road map for transitioning from the current to the target environment. These snapshots further consist of “views,” which are basically one or more architecture products that provide conceptual or logical representations of the enterprise. For the last 2 years, we have promoted the development and use of a homeland security enterprise architecture. For example, in June 2002 we testified on the need to define the homeland security mission and the information, technologies, and approaches necessary to perform this mission in a way that is divorced from organizational parochialism and cultural differences. 
We also stressed that a particularly critical function of a homeland security architecture would be to establish processes and information/data protocols and standards that could facilitate information collection and permit sharing. Recognizing the pivotal role that an architecture will play in successfully merging the diverse operating and systems environments that the department inherited, DHS issued an initial version in September 2003. Our recent report on this initial enterprise architecture found that it provides a partial basis upon which to build future versions. However, the September 2003 version of the enterprise architecture is missing most of the content necessary to be considered a well-defined architecture. Moreover, the content in this version was not systematically derived from a DHS or national corporate business strategy, but rather was more the result of an amalgamation of the existing architectures that several of DHS’s predecessor agencies already had, along with their respective portfolios of system investment projects. Such a development approach is not consistent with recognized architecture development best practices. DHS officials agreed with our content assessment of their initial architecture, stating that it is largely a reflection of what could be done without a departmental strategic plan to drive architectural content and with limited resources and time. They also stated that the primary purposes in developing this version were to meet an OMB deadline for submitting the department’s fiscal year 2004 IT budget request and for the department to develop a more mature understanding of enterprise architecture and its ability to execute an approach and methodology for developing and using the next version of the architecture. Nevertheless, we concluded that DHS does not yet have the architectural content that it needs to effectively guide and constrain its business transformation efforts and the hundreds of millions of dollars it is investing in supporting systems. For example, the architecture does not (1) include a description of the information flows and relationships among organizational units, business operations, and system elements; (2) provide a description of the business and operational rules for data standardization to ensure data consistency, integrity, and accuracy; or (3) include an analysis of the gaps between the baseline and target architecture for business processes, information/data, and services/application systems to define missing and needed capabilities. Moreover, the architecture does not adequately recognize the interdependencies with other critical IT management processes since it does not include (1) a description of the policies, procedures, processes, and tools for selecting, controlling, and evaluating application systems to enable effective IT investment management and (2) a description of the system development lifecycle process for application development or acquisition and the integration of the process with the architecture. In addition, although the architecture recognizes the need for a governance structure and contains a high-level discussion of same, it does not include an architecture governance and control structure and the integrated procedures, processes, and criteria (e.g., investment management and security) to be followed. 
Without such content, DHS runs the risk that its investments will not be well integrated, will be duplicative, will be unnecessarily costly to maintain and interface, and will not effectively optimize mission performance. To assist DHS in developing a well-defined enterprise architecture, our August report contained numerous recommendations directed to the architecture executive steering committee—composed of senior executives from technical and business organizations across the department—in collaboration with the CIO, aimed at ensuring that the needed content is added and that the approach followed adheres to best practices. Given the intended purpose of DHS's enterprise architecture, which is to serve as the basis for departmentwide (and national) operational transformation and to support systems modernization and evolution, it is important that individual IT investments be aligned with the architecture. Moreover, according to the CIO, DHS is developing a process to align its systems modernization activities with its enterprise architecture. However, earlier this year, we reported that this alignment had not been determined for two of the department's major investments—ACE and US-VISIT—but the CIO and program officials stated that they planned to address this issue. Investments in IT can have a dramatic impact on an organization's performance. If managed effectively, these investments can vastly improve government performance and accountability. If not, they can result in wasteful spending and lost opportunities for improving delivery of services to the public. An IT investment management process provides a systematic method for agencies to minimize risks while maximizing return on investment. A central tenet of the federal approach to IT investment management has been the select/control/evaluate model. During the select phase, the organization (1) identifies and analyzes each project's risks and returns before committing significant funds and (2) selects those projects that will best support its mission needs. In the control phase, the organization ensures that the project continues to meet mission needs at the expected levels of cost and risks. If the project is not meeting expectations or if problems have arisen, steps are quickly taken to address the deficiencies. During the evaluate phase, actual versus expected results are compared after a project has been fully implemented. DHS has developed and begun implementing a departmental IT investment management process. In May 2003 DHS issued an investment review management directive and IT capital planning and investment control guide, which provide the department's component organizations with requirements and guidance on documentation and review of IT investments. In February 2004, we reported that DHS's investment management process was evolving. Since that time, DHS has changed its process to reflect lessons learned during the department's first year of operation and continuous improvement of the process. Moreover, DHS issued a new interim IT capital planning and investment control guide in May 2004 and is in the process of revising the investment review management directive to reflect the changes that have been made. Among the changes are a shifting of responsibilities of some of its investment management boards and increases to the thresholds that determine which board approves an investment. Figure 3 illustrates the governance boards DHS uses to execute its investment review process.
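As a purely illustrative aid, and not a depiction of DHS's actual review tooling, the select/control/evaluate cycle described above can be sketched as a simple investment lifecycle in which a control-phase check flags a project that is no longer meeting cost expectations; the class, field names, and tolerance below are assumptions made for the example. DHS's own process, executed through the governance boards shown in figure 3 and described next, builds departmental oversight on top of this basic cycle.

    from enum import Enum

    class Phase(Enum):
        SELECT = "select"      # analyze risks and returns, then commit funds
        CONTROL = "control"    # monitor cost, schedule, and risk against expectations
        EVALUATE = "evaluate"  # compare actual with expected results after implementation

    class Investment:
        def __init__(self, name, expected_cost):
            self.name = name
            self.expected_cost = expected_cost
            self.actual_cost = 0.0
            self.phase = Phase.SELECT

        def select(self):
            """Select phase: the project has been analyzed and chosen; begin oversight."""
            self.phase = Phase.CONTROL

        def control_review(self, actual_cost, tolerance=0.10):
            """Control phase: return False when cost exceeds expectations by more than
            the tolerance, signaling that corrective steps should be taken quickly."""
            self.actual_cost = actual_cost
            return actual_cost <= self.expected_cost * (1 + tolerance)

        def evaluate(self):
            """Evaluate phase: compare actual versus expected results."""
            self.phase = Phase.EVALUATE
            return {"name": self.name, "expected": self.expected_cost, "actual": self.actual_cost}

    # Example: a control review flags a 25 percent cost overrun.
    investment = Investment("Hypothetical case-management system", expected_cost=10_000_000)
    investment.select()
    print(investment.control_review(actual_cost=12_500_000))  # False, so deficiencies need attention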
Under this process, DHS has four levels of investments, the top three of which are subject to review by department-level boards—the Investment Review Board, Joint Requirements Council, and Enterprise Architecture Board. (App. I provides more specific information on the boards and their responsibilities.) In addition, DHS has established a five-phase review process that calls for these investments to be reviewed at key decision points, such as program authorization (see fig. 4). With the establishment of the governance boards and the investment review process, DHS has established several key practices associated with building the investment foundation as described by our IT investment management framework. In addition, as part of the selection phase of its capital planning and investment control process, DHS reviewed component agency IT investments for its fiscal year 2005 budget submission. Specifically, according to DHS IT officials, (1) the CIO approved the department's IT portfolio and (2) all of the major IT systems submitted to OMB for the fiscal year 2005 budget were assessed and scored by an investment review team. In addition, earlier this year, as we reported, with the establishment of the department's top investment management board, the ACE and CAPPS II investments met legislative conditions contained in the Department of Homeland Security Appropriations Act, 2004 (P.L. 108-90). For example, in February 2004 we reported that the creation of the Investment Review Board satisfied a CAPPS II legislative requirement associated with the establishment of an oversight board, with the caveat that the board oversee the program on a regular and thorough basis. In addition, in May 2004 we reported that DHS satisfied a prior recommendation of ours to establish and charter an executive body to guide and direct the US-VISIT program by establishing a three-entity governance structure, which includes the department's Investment Review Board. Although DHS has made noticeable progress, it still has much work remaining to fully implement its IT investment management process, particularly as it relates to carrying out effective departmental control over IT investments. For example: Many of DHS's IT investments have not undergone control reviews. As of early July, one or more of DHS's investment management boards had reviewed less than a quarter of the major IT investments subject to departmental review (level 1, 2, and 3 investments). According to the coordinator of this process, the investments that have undergone control reviews were considered DHS's highest priority IT investments based on criteria such as cost, visibility, or whether a key decision point was forthcoming. In addition, DHS stated that its ability to complete control reviews in a timely manner is affected by the amount of resources (people, time, and funding) allocated to the department. Nevertheless, our reviews of several DHS level 1 investments indicate the importance of such reviews, since we have found cost, schedule, and performance problems as well as significant management activities that have not been completed. DHS has not established a process to ensure that control reviews of IT investments are performed in a timely manner. Our February 2004 report recommended that the DHS CIO develop a control review schedule for IT investments, subject to departmental oversight. DHS concurred with this recommendation, but has not yet implemented it.
However, for the fiscal year 2006 budget cycle, which is being formulated now, DHS entities were asked to provide the dates of prior and future key decision points for each major IT investment. According to Office of the CIO capital planning and investment control officials, this is their first step toward building a control review schedule. Officials from DHS’s offices of the CIO and chief financial officer characterized the department’s investment management process as still maturing. For example, Office of the CIO capital planning and investment control officials stated that the department will be concentrating on developing and building a disciplined and structured control process in fiscal year 2005. Officials from the offices of the CIO and chief financial officer also described various initiatives that are being undertaken to improve this process. For example, portfolio management is a CIO Council priority and, according to the draft road map for this priority, the planned future environment will have IT investments aligned and optimized against mission requirements at the DHS level. DHS has procured an automated portfolio management system to help in this endeavor. According to Office of the CIO capital planning and investment control officials, DHS has inserted its fiscal year 2005 business cases for major investments (also known as budget exhibit 300s) into this system and plans to add the fiscal year 2006 business cases later this year. In addition, according to these officials, the department’s Investment Review Team plans to use this system to perform portfolio analysis to provide additional insight to DHS investment management boards as they make their investment selections for fiscal year 2006. Our work and other best-practice research have shown that applying rigorous management practices to the development and acquisition of IT systems and the acquisition of IT services improves the likelihood of delivering expected capabilities on time and within budget. In other words, the quality of IT systems and services is largely governed by the quality of the management processes involved in developing and acquiring them. DHS has numerous ongoing major systems development and acquisition initiatives that are critical to meeting its mission needs. Our reviews of several major DHS systems development and acquisition efforts have found that these rigorous processes are not always employed. We have made numerous recommendations that address a variety of system development and acquisition issues. DHS has generally agreed with these recommendations and, in some cases, has implemented, or begun to implement, them. For example: Process controls for acquiring software-intensive systems. Disciplined processes for acquiring software are essential to software-intensive system acquisitions. The Software Engineering Institute at Carnegie Mellon University has defined the tenets of effective software acquisition, which identify, among other things, a number of key process areas that are necessary to effectively manage software-intensive system acquisitions. In the past, we have reported that such key processes had not been fully implemented for ACE and US-VISIT. Consequently, we made recommendations for both of these programs related to instituting acquisition process controls called for in the Software Engineering Institute’s SA-CMM® model. 
As of May of this year, the acquisition control recommendation had been implemented by the ACE program in that the Software Engineering Institute had assigned the program a level 2 rating, meaning that it had established basic acquisition management processes. Also in May of this year we reported that US-VISIT was planning to implement our recommendation on instituting acquisition process controls. Managing and conducting testing. Complete and thorough testing is essential to providing reasonable assurance that new or modified systems process information correctly and will meet an organization's business needs. According to leading IT organizations, to be effective, software testing practices should be planned and conducted in a structured and disciplined fashion. We have expressed concerns about testing and issued related recommendations for three DHS IT investments—Rescue 21, CAPPS II, and US-VISIT. For example, in September 2003 we reported that the Coast Guard planned to compress and overlap the testing schedules for Rescue 21, which increased the risk that, for instance, not all requirements would be tested during formal qualification testing, system integration testing, and operational testing and evaluation. To mitigate Rescue 21 risks, we made recommendations to the Coast Guard related to establishing a new testing schedule and ensuring that milestones are established for completing test plans and that these plans address all requirements of the system. The Coast Guard agreed with these recommendations, which the agency has begun to implement. In the case of CAPPS II, we recommended that TSA address system and database testing; in the case of US-VISIT, we recommended that the Border and Transportation Security Directorate develop and approve complete test plans before testing begins. DHS generally concurred with these recommendations. Measuring the performance of a system. By using comprehensive performance information, more informed decisions can be made about IT investments. An effective performance measurement system produces information that (1) provides an early warning indicator of problems and the effectiveness of corrective actions, (2) provides input to resource allocation and planning, and (3) provides periodic feedback about the quality, quantity, cost, and timeliness of products and services. We have reported on a variety of performance measure concerns associated with five DHS IT investments and have made relevant recommendations. For example, in February 2004, we reported that TSA had established preliminary goals and measures for CAPPS II but that they could be strengthened. We also noted that TSA had not fully established policies and procedures to monitor and evaluate the use and operation of the system. Similarly, our review of SEVIS, which is operational, found that several key system performance requirements were not being formally measured. This is problematic because without formally monitoring and documenting key system performance requirements, DHS cannot adequately ensure that potential system problems are identified and addressed early, before they have a chance to become larger and affect the DHS mission objectives supported by SEVIS. In addition to our recommendations related to specific DHS IT investments, we have also issued guidance to assist agencies in improving their systems development and acquisitions.
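The three characteristics of an effective performance measurement system listed above can be illustrated with a small, hypothetical sketch; the metric, target, and values below are assumptions for the example and are not actual SEVIS or CAPPS II measures. Formally recording such measurements against a documented requirement is what allows problems to be identified and addressed early, which is the gap noted above for SEVIS.

    from statistics import mean

    # Hypothetical weekly measurements for a fielded system; the metric name,
    # target, and values are illustrative assumptions, not actual DHS measures.
    weekly_response_times = [1.8, 2.1, 2.4, 3.0, 3.6]  # seconds, most recent last
    target_response_time = 2.5                          # documented performance requirement

    def early_warning(samples, target):
        """Early-warning indicator: flag a problem as soon as the latest value crosses the target."""
        return samples[-1] > target

    def periodic_feedback(samples, target):
        """Periodic feedback on timeliness: average performance versus the requirement."""
        return {
            "average_seconds": round(mean(samples), 2),
            "target_seconds": target,
            "within_requirement": mean(samples) <= target,
        }

    print(early_warning(weekly_response_times, target_response_time))    # True, so corrective action is warranted
    print(periodic_feedback(weekly_response_times, target_response_time))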
Since 1997 we have designated information security as a governmentwide high-risk issue because of continuing evidence indicating significant, pervasive weaknesses in the controls over computerized federal operations. Moreover, related risks continue to escalate, in part due to the government’s increasing reliance on the Internet and on commercially available information technology. Government officials are increasingly concerned about attacks launched by individuals and groups with malicious intent, such as crime, terrorism, foreign intelligence gathering, and acts of war. In addition, the disgruntled organization insider is a significant threat, since such individuals often have knowledge that allows them to gain unrestricted access and inflict damage or steal assets without possessing a great deal of knowledge about computer intrusions. Based on its annual evaluation required by the Federal Information Security Management Act of 2002, in September 2003 the DHS Office of Inspector General reported that DHS had made progress in establishing a framework for an IT systems security program. For example, DHS has (1) appointed a chief information security officer, (2) developed and disseminated information system security policies and procedures, (3) implemented an incident response and reporting process, (4) initiated a security awareness training program, and (5) established a critical infrastructure protection working group. However, the inspector general report concluded that still more needs to be done to ensure the security of DHS’s IT infrastructure and prevent disruptions to mission operations. For example, DHS did not have a process to ensure that all plans of action and milestones for identified weaknesses were developed, implemented, and managed. In responding to a draft of this report, DHS stated that it has instituted a tool to monitor each organizational element’s progress in developing and achieving the milestones identified in the plans of action and milestones. In addition, the Office of Inspector General stated that none of the DHS components had a fully functioning IT security program and a number of key security areas needed attention. For example, less than half of DHS’s systems had a security plan and been assessed for risk. Among the Office of Inspector General’s recommendations were that the CIO (1) develop and implement a process to identify information security-related material weaknesses in mission-critical programs and systems, (2) implement an oversight and reporting function to track the progress of remediation of material weaknesses, and (3) require DHS information officers to assign information systems security officers to oversee the security controls of each major application and general support system. More recently, the DHS Office of Inspector General reported that DHS cannot ensure that the sensitive information processed by its wireless systems is effectively protected from unauthorized access and potential misuse. In particular, the Inspector General reported that DHS had not (1) provided sufficient guidance on wireless implementation to its components, (2) established adequate security controls to protect its wireless networks against commonly known security vulnerabilities, and (3) certified or accredited its wireless networks. The Inspector General made several recommendations to address the deficiencies cited in the report, which the DHS CIO agreed to and has taken steps to implement. 
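The plans of action and milestones discussed above lend themselves to a simple oversight check of the kind such reporting relies on; the sketch below is hypothetical, is not the monitoring tool DHS describes in its comments, and the weakness descriptions and dates are invented for the example.

    from datetime import date

    # Hypothetical plan-of-action-and-milestones (POA&M) entries; the weakness
    # descriptions and dates are made up for the example, not drawn from DHS records.
    poams = [
        {"weakness": "Incomplete security plan", "due": date(2004, 6, 30), "closed": False},
        {"weakness": "Risk assessment not performed", "due": date(2004, 12, 31), "closed": False},
        {"weakness": "Contingency plan untested", "due": date(2004, 5, 15), "closed": True},
    ]

    def overdue(entries, as_of):
        """Oversight check: list open weaknesses whose remediation milestones have passed."""
        return [entry["weakness"] for entry in entries
                if not entry["closed"] and entry["due"] < as_of]

    print(overdue(poams, as_of=date(2004, 9, 30)))  # ['Incomplete security plan']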
In addition, we have long held that it is important that security be addressed in the early planning stages of the development of IT systems, and have reported on security planning in the US-VISIT and CAPPS II programs. For example, in June 2003 we recommended that the US-VISIT program manager develop a system security plan, and in May 2004 we reported that this recommendation had been partially implemented. Specifically, DHS provided a draft security plan, but this plan did not include (1) specific controls for meeting the security requirements, (2) a risk assessment methodology, or (3) the roles and responsibilities of individuals with system access. DHS reported four departmentwide information security-related material weaknesses in its fiscal year 2003 Performance and Accountability Report. For example, DHS reported that it had (1) limited tracking, evaluation, and reporting tools necessary to provide oversight of its information security efforts and (2) insufficient resources, processes, policies, and guidelines in place to ensure the identification, protection, and continuity of services to reduce the department's vulnerabilities and risks and to sustain mission-critical functions in the event of a man-made or natural disaster. According to the DHS report, the department plans to take corrective actions related to these material weaknesses by September 30, 2004. The DHS CIO Council has also pronounced information security a priority area. The draft road map associated with this area includes various short-, mid-, and long-term initiatives. Moreover, to lay a foundation for departmental improvements in information security management, DHS has developed an information security program strategic plan, which identifies major program areas, goals, and objectives. According to this April 2004 plan, these major security program areas allow DHS to implement and maintain information security as part of its capital investment control process, systems development life cycle, and enterprise architecture, and are essential to providing security services that protect the confidentiality, integrity, and availability of information and to providing accountability for activities on DHS networks and computing platforms. As agencies increasingly move to an operational environment in which electronic—rather than paper—records provide comprehensive documentation of their activities and business processes, a variety of information collection, use, and dissemination issues have emerged. Such issues are particularly relevant to DHS because the Homeland Security Act of 2002 and federal policy assign responsibilities to the department for the coordination and sharing of information related to threats of domestic terrorism—within the department and with and among other federal agencies, state and local governments, the private sector, and other entities. Among the information management issues facing DHS are information sharing, privacy, and compliance with the information collection requirements of the Paperwork Reduction Act. Namely: Information sharing. As we have reported, information sharing is critical to successfully addressing increasing threats and fulfilling the missions of DHS. For example, to accomplish its missions, the department must (1) access, receive, and analyze law enforcement information, intelligence information, and other threat, incident, and vulnerability information from federal and nonfederal sources, and (2) analyze such information to identify and assess the nature and scope of terrorist threats.
Further, DHS must share information both internally and externally with agencies and law enforcement on such matters as goods and passengers inbound to the United States and individuals who are known or suspected terrorists and criminals. It also must share information among emergency responders in preparing for and responding to terrorist attacks and other emergencies. We have made numerous recommendations over the last several years related to information-sharing functions that have been transferred to DHS, which are focused on sharing information on incidents, threats, and vulnerabilities and providing warnings related to critical infrastructures, both within the federal government and between the federal government and state and local governments and the private sector. In September 2003 we testified that although progress has been made in addressing our recommendations, further efforts were needed, such as (1) improving the federal government's capabilities to analyze incident, threat, and vulnerability information obtained from numerous sources and share appropriate, timely, and useful warnings and other information concerning both cyber and physical threats to federal entities, state and local governments, and the private sector, and (2) developing a comprehensive and coordinated national plan to facilitate information sharing on critical infrastructures. More recently, in July 2004 we reported that DHS's ability to gather, analyze, and disseminate information could be improved by developing information sharing-related policies and procedures for its components. In commenting on a draft of that report, DHS provided planned actions in response to its recommendations. The DHS Secretary has recognized the criticality of information sharing in the department's strategic plan. In addition, information sharing is one of the DHS CIO Council's priorities in 2004. In the draft road map associated with this priority area, DHS described a future state in which information is accessed and disseminated seamlessly in real time or near real time, shared with all constituents at all levels of government and with the private sector, and governed by agreed-upon data standardization rules. We have issued guidance on information-sharing practices of organizations that successfully share sensitive or time-critical information, which could aid DHS in its efforts. Privacy. With the emphasis on information sharing, privacy issues have emerged as a major, and contentious, concern. Since the terrorist attacks of September 11, 2001, data mining has been seen increasingly as a useful tool to help detect terrorist threats by improving the collection and analysis of public and private-sector data. Our May 2004 governmentwide report on data mining described 14 data mining efforts reported by DHS. Mining government and private databases containing personal information creates a range of privacy concerns because agencies can quickly and efficiently obtain information on individuals or groups by exploiting large volumes of personal information aggregated from public and private records.
Concerns have also been raised about the quality and accuracy of the mined data; the use of the data for purposes other than those for which they were originally collected, without the consent of the individual; the protection of the data against unauthorized access, modification, or disclosure; and the right of individuals to know about the collection of personal information, how to access that information, and how to request a correction of inaccurate information. In April 2003, DHS appointed its first chief privacy officer. According to this officer, among other things, the DHS privacy office promotes best practices with respect to privacy, guides DHS agencies in developing appropriate privacy policies, and serves as a resource for questions related to privacy and information collection and disclosure. Privacy concerns have also been a critical factor in the development and acquisition of US-VISIT and CAPPS II. With respect to CAPPS II, the 2004 DHS appropriations act designated privacy as one of eight key issues that TSA must address before CAPPS II is deployed or implemented. In our February 2004 report on whether TSA had fulfilled these legislative requirements, we stated that the agency's plans appear to address many of the requirements of the Privacy Act, the primary legislation that regulates the government's use of personal information. However, while TSA had taken initial steps, it had not finalized its plans for complying with the Privacy Act. We also looked at TSA's plans in the larger context of eight Fair Information Practices, which are internationally recognized privacy principles that include practices such as data quality and security safeguards. TSA's plans reflect some actions to address each of these practices. However, to meet its evolving mission goals, the agency also appears to limit the application of some of these practices. This reflects TSA's efforts to balance privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Compliance with the information collection requirements of the Paperwork Reduction Act. The Paperwork Reduction Act prohibits an agency from conducting or sponsoring the collection of information unless (1) the agency has submitted the proposed collection and other documents to OMB, (2) OMB has approved the proposed collection, and (3) the agency displays an OMB control number on the collection. We testified in April 2004 that DHS had 18 reported violations of the Paperwork Reduction Act in fiscal year 2003, all related to OMB approvals that had expired and had not been reauthorized. Our work with leading organizations shows that they develop human capital strategies to assess their skill bases and recruit and retain staff who can effectively implement technology to meet business needs. They assess their IT skills on an ongoing basis to determine what expertise is needed to meet current responsibilities and support future initiatives, and they evaluate the skills of their current employees, which are then compared against the organization's needed skills to determine gaps in the IT skills base. The challenges the federal government faces in maintaining a high-quality IT workforce are long-standing and widely recognized. The success of the transformation and implementation of DHS is based largely on the degree to which human capital management issues are addressed. We have issued several reports examining how DHS plans to implement its new human capital system.
For example, in June 2004 we reported that DHS had begun strategic human capital planning efforts at the headquarters level since the release of the department’s overall strategic plan and the publication of proposed regulations for its new human capital management system. However, DHS had not yet systematically gathered relevant human capital data at the headquarters level, although efforts were under way to collect detailed human capital information and design a centralized information system so that such data could be gathered and reported departmentwide. These strategic human capital planning efforts can enable DHS to remain aware of and be prepared for current and future needs as an organization. It is important that DHS address its IT human capital challenges expeditiously since, according to the DHS CIO, the biggest obstacle to the implementation of a departmentwide systems integration strategy has been insufficient staffing. More specifically, the CIO said that his office received substantially fewer staff than he requested when the department was originally established in 2003. To illustrate his statement, the CIO said that after studying other comparably sized federal department CIO organizations, he requested approximately 163 positions. However, he said that his office received about 65 positions. In addition, CIO officials told the Office of Inspector General that, given the relatively small staff resources provided, they have been “busy putting out fires” and, as a result, have been hindered in carrying out some critical IT management responsibilities, including instituting central guidance and standards in areas such as information security and network management. Lastly, the DHS CIO also noted the lack of properly skilled IT staff within the component agencies. Challenges facing DHS in this area, he stated, include overcoming political and cultural barriers, leveraging cultural beliefs and diversity to achieve collaborative change, and recruiting and retaining skilled IT workers. In addition, we have expressed concerns about human capital issues related to two of DHS’s major IT investments, ACE and US-VISIT. In May 2002 we reported that the program office managing ACE did not have the people in place to perform critical system acquisition functions, which increased the risk that promised system capabilities would not be delivered on time or within budget. Accordingly, we recommended that a human capital management strategy be immediately implemented for this office. Two years later we reported that U.S. Customs and Border Protection is in the process of implementing this recommendation. In particular, the program office had developed and begun implementing a human capital management plan, but the office has continued to experience difficulty in filling key positions. The ACE program office has begun implementing a new staffing plan intended to address DHS’s concern that the program office has insufficient government program management staff. We have reported on similar IT human capital problems associated with US-VISIT and recommended that it develop and implement a human capital strategy, which the department is in the process of doing. As mentioned, the DHS CIO Council established IT human capital as one of its eight priority areas. As with the other priority areas, a component agency sponsor has been named for human capital. However, unlike the other priority areas, as of mid-July 2004, an Office of the CIO official had not been assigned to work in this area. 
An Office of the CIO official explained that the person originally assigned this task is no longer with the department and that the office was determining who would take over this role. Moreover, in February 2003, the DHS CIO set July 2003 as a milestone for developing a current inventory of IT skills, resources, and positions, and September 2003 as the target date for developing an action plan. In mid-July 2004, the CIO stated that these milestones were not met and acknowledged that progress in IT human capital has been slow. He stated that he still plans to complete an inventory and action plan but could not provide an estimated completion date. We have issued a large body of human capital work that could assist in this undertaking. For example, while agencies' approaches to workforce planning will vary, our guide on strategic workforce planning lays out five key principles that such a process should address irrespective of the context in which planning is done. These are as follows:
Involve top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan.
Determine the critical skills and competencies that will be needed to achieve current and future programmatic results.
Develop strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies.
Build the capability needed to address administrative, educational, and other requirements important to support workforce strategies.
Monitor and evaluate the agency's progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic goals.
DHS faces the formidable challenge of defining and implementing an effective information and technology management structure at the same time that it is developing and acquiring major IT systems that are critical to meeting its mission needs. Although DHS has made progress in addressing this challenge, it does not yet have a fully institutionalized structure in place, which puts its pursuit of new and enhanced IT investments at risk of not optimally supporting corporate mission needs and not meeting cost, schedule, capability, and benefit commitments. In particular, still lacking in the department's IT strategic planning process—which is critical because it defines what an agency seeks to accomplish and how that will be achieved—are goals, performance measures, and milestones for significant activities, as well as an analysis of whether DHS has appropriately skilled and deployed IT staff. The department's CIO and DHS CIO Council—which is responsible for establishing a strategic plan and setting priorities for departmentwide IT—are organizationally placed to improve this planning process and to consider the needs of DHS as a whole. With regard to the other six elements of an effective information and technology management structure, DHS can be guided by the many recommendations that we and the Office of Inspector General have already made to the CIO and other responsible entities, along with our best practices guidance, as it uses technology to help better secure the homeland.
To strengthen DHS's IT strategic planning process, we recommend that the Secretary of Homeland Security direct the CIO, in conjunction with the DHS CIO Council, to take the following three actions:
Establish IT goals and performance measures that, at a minimum, address how information and technology management contributes to program productivity, the efficiency and effectiveness of agency operations, and service to the public.
Establish milestones for the initiation and completion of major information and technology management activities.
Analyze whether DHS has IT staff with the relevant skills to achieve its target IT environment and, if it does, whether they are allocated appropriately.
In written comments on a draft of our report, signed by the Director, Departmental GAO/OIG Liaison within the Office of the Chief Financial Officer, DHS generally concurred with our recommendations. DHS also offered specific comments related to these recommendations, including the following:
Regarding our recommendation that DHS establish IT goals and performance measures, the department emphasized that it is developing road maps for its eight priority areas that, over the next few months, will include developing goals, performance measures, and time lines for implementation. We believe that DHS's plans are consistent with our recommendation.
On our recommendation to establish milestones for the initiation and completion of major information and technology management activities, DHS stated that its interpretation was that the recommendation pertained to having an established IT investment management structure and centered its comments on its plans related to two of its priorities—enterprise architecture and portfolio management. We agree that these two areas are covered by our recommendation. However, our recommendation is broader than just these two areas, instead covering any information and technology management activity identified as significant through DHS's IT strategic planning processes (e.g., the development of milestones related to activities associated with each of DHS's IT priorities).
With respect to our recommendation on IT staffing, DHS stated that on July 30, 2004, the CIO approved funding for an IT human capital center of excellence. This center is tasked with delivering plans, processes, and procedures to execute an IT human capital strategy and to conduct an analysis of the skill sets of DHS IT professionals. DHS's stated action represents a first step toward accomplishing these activities.
DHS also provided specific comments on our characterization of the department's progress related to its IT investment management process. The department described its IT investment governance boards and processes and stated that it believed that its IT investment management process has matured and that IT investments are subject to a rigorous corporate review. While our report acknowledges that DHS had changed its IT investment management process to reflect lessons learned and continuous improvement of the process, we believe that our characterization of this process as still maturing is appropriate. For example, the directive that instructs DHS component entities on which investments need to be approved and by what governance board does not reflect the current process.
Regarding DHS's comment that its IT investments are subject to a rigorous corporate review, as we reported, DHS has not established a process to ensure that control reviews of IT investments are performed in a timely manner, and many of DHS's IT investments have not undergone such reviews. Lastly, DHS provided technical comments, which we addressed in the report as appropriate. DHS's written comments, along with our responses, are reproduced in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and the Director, Office of Management and Budget. Copies will also be available at no charge on GAO's Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact Randy Hite at (202) 512-3439 or via e-mail at hiter@gao.gov. Other key contributors to this report were Season Dietrich, Tamra Goldstein, and Linda Lambert.
(Appendix I board information: chaired by the Deputy Secretary; members include under secretaries and other department executives, including the Chief Information Officer (CIO).)
The following are GAO's comments on the Department of Homeland Security's (DHS) letter dated August 12, 2004. 1. Although the IRM strategic plan is not labeled draft, we changed our characterization of the plan in the report based on the DHS comments. 2. As discussed in the report, these road maps are draft and incomplete (e.g., they do not include fully defined goals and performance measures). 3. The Joint Requirements Council's charter does not list the CIO as a member of this council; instead the chief technology officer is the Office of the CIO's representative on the council, which is reflected in our report. 4. We believe that our characterization of DHS's IT investment management process as still maturing is appropriate. For example, the May 2003 directive that instructs DHS component entities on which investments need to be approved and by what governance board does not reflect the current process, and more recent DHS documentation related to the process provides inconsistent information. 5. We disagree because, as we stated in the report, DHS has not established a process to ensure that control reviews of IT investments are performed in a timely manner, and many of DHS's IT investments have not undergone such reviews. 6. We added information about the DHS tool to the report. 7. The DHS quote does not include our attribution in the report that the assessment of the information security program areas is the department's own representation. We did not evaluate the information security program strategic plan. 8. We do not agree that these statements are conflicting. The management of the department's plans of action and milestones is just one of many planned actions discussed in the information security program strategic plan. 9. As stated in the report, we agree that human capital management is a key to the success of the department and that the challenges that the federal government faces in maintaining a high-quality IT workforce are long-standing and widely recognized. It is because of these views that we are concerned that the department did not meet the CIO's goal of having a current inventory of IT skills by July 2003 and an action plan by September 2003. Nevertheless, DHS's stated action represents a first step toward accomplishing these activities. 10.
10. Our report dealt with enterprise-level performance measures, not project-specific measures as required by the exhibit 300s. With respect to DHS's plans for each of the priority areas, we believe this is consistent with our recommendation.
11. We agree that the two priority areas discussed in the DHS letter are covered by our recommendation. However, our recommendation is broader than just these two areas. Specifically, our recommendation covers any information and technology management activity identified as significant through DHS's IT strategic planning processes (e.g., the development of milestones related to activities associated with each of DHS's IT priorities).

Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains. GAO-04-777. Washington, D.C.: August 6, 2004.
Homeland Security: Performance of Information System to Monitor Foreign Students and Exchange Visitors Has Improved, but Issues Remain. GAO-04-690. Washington, D.C.: June 18, 2004.
Human Capital: DHS Faces Challenges In Implementing Its New Personnel System. GAO-04-790. Washington, D.C.: June 18, 2004.
Information Technology: Homeland Security Should Better Balance Need for System Integration Strategy with Spending for New and Enhanced Systems. GAO-04-509. Washington, D.C.: May 21, 2004.
Information Technology: Early Releases of Customs Trade System Operating, but Pattern of Cost and Schedule Problems Needs to Be Addressed. GAO-04-719. Washington, D.C.: May 14, 2004.
Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004.
Additional Posthearing Questions Related to Proposed Department of Homeland Security (DHS) Human Capital Regulations. GAO-04-617R. Washington, D.C.: April 30, 2004.
Project SAFECOM: Key Cross-Agency Emergency Communications Effort Requires Stronger Collaboration. GAO-04-494. Washington, D.C.: April 16, 2004.
Posthearing Questions Related to Proposed Department of Homeland Security (DHS) Human Capital Regulations. GAO-04-570R. Washington, D.C.: March 22, 2004.
Human Capital: Preliminary Observations on Proposed DHS Human Capital Regulations. GAO-04-479T. Washington, D.C.: February 25, 2004.
Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 12, 2004.
Information Technology: OMB and Department of Homeland Security Investment Reviews. GAO-04-323. Washington, D.C.: February 10, 2004.
Coast Guard: New Communication System to Support Search and Rescue Faces Challenges. GAO-03-1111. Washington, D.C.: September 30, 2003.
Human Capital: DHS Personnel System Design Effort Provides for Collaboration and Employee Participation. GAO-03-1099. Washington, D.C.: September 30, 2003.
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003.
Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.
Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-715T. Washington, D.C.: May 8, 2003.
Customs Service Modernization: Automated Commercial Environment Progressing, but Further Acquisition Management Improvements Needed. GAO-03-406. Washington, D.C.: February 28, 2003.
Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 2003.
Homeland Security: Information Technology Funding and Associated Management Issues. GAO-03-250. Washington, D.C.: December 13, 2002.
National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002.
Customs Service Modernization: Management Improvements Needed on High-Risk Automated Commercial Environment Project. GAO-02-545. Washington, D.C.: May 13, 2002.

The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability.

The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's Web site (www.gao.gov). Each weekday, GAO posts newly released reports, testimony, and correspondence on its Web site. To have GAO e-mail you a list of newly posted products every afternoon, go to www.gao.gov and select "Subscribe to Updates."
In 2003 GAO designated the merger of 22 separate federal entities into the Department of Homeland Security (DHS) as a high-risk area because of the criticality of the department's mission and the enormous transformation challenges that the department faced. Given that the effective use of information technology (IT) is a critical enabler of this merger, GAO has previously reported on a number of DHS efforts aimed at institutionalizing an effective information and technology governance structure and investing in new IT systems that are intended to better support mission operations. Now that DHS has been operating for over a year, GAO was asked, based largely on its prior work, to describe DHS's progress in meeting its information and technology management challenge.

DHS's overall IT challenge is to standardize and integrate the legacy system environments and management approaches that it inherited from its predecessor agencies, while concurrently attempting to ensure that present levels of IT support for critical homeland security operations are not only maintained but improved in the near term. To accomplish this, the department is in the process of instituting seven information and technology management disciplines that are key elements of an effective information and technology management structure.

DHS's progress in institutionalizing these key information and technology management elements has been mixed, and overall remains a work in progress. Such progress is not unexpected, given the diversity of the inherited agencies and the size and complexity of the department's mission operations. Nevertheless, because DHS has not yet fully institutionalized these governance elements, its new and enhanced IT investments are at risk of not optimally supporting corporate mission needs and not meeting cost, schedule, capability, and benefit commitments. Accordingly, GAO has previously made recommendations relative to most of these areas to the department's chief information officer and other responsible DHS entities. Lastly, DHS has developed a draft IT strategic plan, but the plan lacks explicit goals, performance measures, and milestones, and the department does not yet know whether it has properly positioned IT staff with the right skills to accomplish them.
Over time, the Congress has established about 80 separate programs to provide cash and noncash assistance to low-income individuals and families. Means-tested programs are restricted to families or individuals who meet specified financial requirements and certain other eligibility criteria established for each program. The financial requirements restrict eligibility to families and individuals whose income falls below defined levels, and in some cases, whose assets—such as bank accounts and the value of automobiles—also fall below defined levels. Nonfinancial requirements restrict eligibility to specified categories of beneficiaries, such as pregnant women, children, or individuals with disabilities.

Federal, state, and local governments expended a combined total of nearly $400 billion on the approximately 80 means-tested programs in fiscal year 1998. Medicaid accounted for 45 percent of the expenditures. Twenty-seven of the 80 programs, representing 97 percent of the total expenditures, had expenditures of over a billion dollars each (see table 1). Means-tested programs provide assistance in eight areas of need: (1) cash assistance; (2) medical benefits; (3) food and nutrition; (4) housing; (5) education; (6) other services, such as child care; (7) jobs and training; and (8) energy aid. Ten of the 11 programs on which our review focuses accounted for 74 percent of the total expenditures for means-tested federal programs in fiscal year 1998 (see table 2). Table 3 provides an overview of the populations targeted by these programs and the types of assistance that they provide.

The 11 means-tested programs that we included in our review were enacted over time to serve various populations and achieve various objectives. For example, in 1937, the Public Housing program was created to provide adequate temporary shelter to families who could not afford housing; Medicaid, in 1965, to provide medical assistance for low-income families with children and aged, blind, and disabled individuals; and SCHIP, in 1997, to provide health insurance coverage to uninsured low-income children from families who do not qualify for Medicaid. In some cases, the unique financial rules that apply to a particular program may be related to the purpose of that program and reflect its goals or objectives. For other programs, this may not be the case and the differences in eligibility standards across programs may stem from decisions made at different times by different congressional committees or federal agencies.

In addition to offering a wide variety of benefits and services, means-tested programs vary in the extent to which they guarantee that funds for services will be available. For some programs such as Food Stamps and SSI, federal funds are available to provide benefits to all eligible applicants. Other programs such as TANF and SCHIP have a fixed amount of federal funds available. Moreover, some of the programs require state and other nonfederal matching money (e.g., Medicaid and SCHIP), while others are fully funded with federal dollars (e.g., LIHEAP and WIC).

An individual low-income family is likely to be eligible for and participate in several means-tested programs. For example, as shown in figure 1, families receiving TANF generally also receive Medicaid, food stamps, and school meals. Smaller percentages of these families receive assisted housing, WIC, and LIHEAP. The need for welfare simplification has been voiced recurrently over a period of many years.
While this concept covers a broad range of potential objectives, a key aspect has been the need to simplify financial eligibility rules. Means-tested programs have been established over time to meet the needs of various target populations. However, policy experts and researchers have concluded that the complexity and variations in programs' financial eligibility rules have had unanticipated but detrimental consequences for both program administration and family access to assistance. On the administrative side, they have argued that the financial eligibility rules have increased substantially the staff resources needed to determine eligibility and benefit levels, and thereby increased the costs of administering programs. With regard to families' access to programs, they maintained that the rules have often resulted in confusing families about their eligibility for programs and contributed to the creation of a service delivery system with many separate entry points that is often difficult and burdensome for families to navigate.

Numerous studies and reports since the late 1960s have called for the overhaul or repair of the nation's assistance programs that serve low-income families and individuals. For example, a Presidential committee recommended in 1977 that a total effort to reform welfare was needed because of the inequities and administrative "chaos" created by a plethora of inconsistent and confusing programs. During the 1980s, we issued several reports on welfare simplification. One of these reports surveyed the states to identify what they viewed as the major obstacles to their efforts to achieve service integration. Of the 25 obstacles identified, the one cited most frequently (42 states) was that different programs use different financial eligibility requirements. In 1991, the National Commission for Employment Policy recommended that agencies administering public assistance programs should develop a common framework for streamlining eligibility requirements, formulating standard definitions, and easing administrative and documentation requirements.

In 1990, the Congress authorized the creation of the Welfare Simplification and Coordination Advisory Committee to examine four major assistance programs: Food Stamps, Aid to Families with Dependent Children, Medicaid, and housing assistance programs. The Congress mandated the committee to identify barriers to participation in assistance programs and the reasons for those barriers. In June 1993, the committee recommended that the numerous programs that currently serve needy families be replaced with a single family-focused, client-oriented, comprehensive program. Recognizing that it would take time to implement its primary recommendation, the Commission made 14 interim recommendations to the Congress, including the following: (1) form a work group of the chairs of the relevant congressional committees to ensure that all legislative and oversight activities involving public assistance programs are coordinated; (2) establish uniform rules and definitions to be used by all needs-based programs in making their eligibility determinations; (3) streamline the verification process; and (4) permit the sharing of client information among agencies to streamline eligibility determination processes and reduce duplication of related activities.

In 1995, the Institute for Educational Leadership, based on its examination of the executive and legislative structures that federal means-tested programs are built upon, urged the administration to create a Family Council.
One of the stated goals of such a council was to propose changes to eligibility requirements, definitions, financing and administrative requirements, data collection and reporting requirements, and performance standards that were inconsistent, incoherent, and confusing. Moreover, in a 1995 report to the Congress, we concluded, in part, that the inefficient welfare system is increasingly cumbersome for program administrators to manage and difficult for eligible clients to access.

Just as the need for simplification of financial eligibility rules has been acknowledged, there has also been a general recognition that achieving substantial improvements in this area is exceptionally difficult. For example, implementing systematic changes to the federal rules for human service programs can be challenging because jurisdiction for these programs is spread among numerous congressional committees and federal agencies.

Substantial variations exist in the financial eligibility rules across selected means-tested federal programs. The primary sources of these variations are generally at the federal level, although for several programs such as TANF and Medicaid, states and localities have some flexibility in setting financial eligibility rules. Variations exist among the programs in the financial rules regarding the types and amounts of income limits. Differences also exist among these programs with regard to whose income is counted, what income is counted or excluded, and whether certain expenses—such as child care costs—are deducted in calculating income. In addition to income tests, programs impose different limits on the assets that an individual or family may hold in order to receive benefits. Asset tests are further complicated because of the differences in how the equity in vehicles is treated when determining assets.

The first and most basic difference among programs is the variation in type of income limits used for determining program eligibility. Income limits for most of the 11 programs reviewed use a percentage of the federal poverty guideline or an area's median income. For example, the School Meals program uses a percentage of the poverty guideline to set benefit eligibility while the housing programs use a percentage of area median income to determine eligibility. The programs not only differ in the type of income limit but also in the actual level of income. For example, the maximum allowable gross monthly income for food stamps for a family of three is $1,585 nationwide, whereas the maximum allowable gross monthly income for subsidized child care—which is based on state median income—is $4,494 in the state of Connecticut (the state with the highest median income).

For all 11 programs except TANF, federal laws and regulations have set some income limit. The most common type of income limit used among these programs is some percentage multiple of the federal poverty guideline, updated annually in the Federal Register by the Department of Health and Human Services (HHS). However, the percentage of the guideline used varies among programs. (See table 4 for a comparison of the type of limits used among the 11 programs.) Programs also vary in setting the income limits that are used to determine eligibility. While some of the programs provide states with options in setting income limits, others do not. For example, LIHEAP and WIC provide states the option of choosing between two types of income limits.
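To illustrate the two types of income limits just described, the following is a minimal sketch, not drawn from any actual program or state system: it compares a household's gross monthly income against a limit expressed as a percentage of the federal poverty guideline and against a limit expressed as a percentage of an area's median income. The function names, the guideline and median-income figures, and the percentages are hypothetical placeholders.

```python
# Illustrative sketch only: the guideline, median-income, and percentage values
# below are hypothetical placeholders, not the actual figures for any program.

def meets_poverty_guideline_limit(gross_monthly_income: float,
                                  monthly_poverty_guideline: float,
                                  percent_of_guideline: float) -> bool:
    """Income limit expressed as a percentage of the federal poverty guideline."""
    return gross_monthly_income <= monthly_poverty_guideline * (percent_of_guideline / 100)

def meets_median_income_limit(gross_monthly_income: float,
                              area_median_monthly_income: float,
                              percent_of_median: float) -> bool:
    """Income limit expressed as a percentage of an area's (or state's) median income."""
    return gross_monthly_income <= area_median_monthly_income * (percent_of_median / 100)

# A hypothetical family of three with $1,500 in gross monthly income:
print(meets_poverty_guideline_limit(1500, monthly_poverty_guideline=1220, percent_of_guideline=130))
print(meets_median_income_limit(1500, area_median_monthly_income=4200, percent_of_median=50))
```

The same household can pass one program's test and fail another's, which is the basic source of the variation the programs' rules exhibit.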
In the case of TANF, states are given full discretion in how they establish eligibility, including choosing both the type and level for their income limit. For Medicaid, while the federal government requires that states provide Medicaid to individuals who fall into certain categories and whose income and resources fall below certain limits, states may, in some circumstances, set more generous income limits and create different categories so that additional individuals may receive coverage. In addition, in some instances, federal statute or regulation gives states options in setting income limits. For example, while the law sets the maximum income limit for child care funds at 85 percent of a state's median income, several states have set their limits far below the allowable federal limit.

Whose income is counted and whether any exclusions or deductions are made can affect a family's income eligibility for the different programs. In general, the programs vary in whose income is counted in determining eligibility. There is no single definition of "family" or "household" used by means-tested federal programs. Federal rules generally govern whose income should be used to determine eligibility. In some programs, the definition of the household unit reflects the program's service focus, and in these instances the income of people with whom the applicant shares certain expenses is included in the calculation. The LIHEAP program, for example, defines household as members purchasing energy together. Similarly, the Food Stamp statute identifies the household as the income unit and defines a household as people who purchase food and prepare meals together. Certain programs provide states with some discretion in defining a family. For example, the SCHIP regulation identifies the family as the income unit but allows the states to decide how that should be defined. Regulations for the Low-rent Public Housing and Housing Choice Voucher programs set forth some examples of families but allow public housing agencies to determine if any other group of persons qualifies as an eligible family. Table 5 summarizes household unit definitions for each of the 11 programs.

Programs also differ in how they treat earned income for the purposes of eligibility determination. Those programs that emphasize a transition to economic self-sufficiency sometimes treat earned income favorably for program eligibility purposes to provide an incentive for clients to continue to work. In TANF, for example, almost all states disregard some income; that is, they disregard a given amount of recipients' earned income, either a percentage of earnings (between 20 and 50 percent), a set dollar amount (between $90 and $250), or both, so that those earnings do not reduce their benefits. In Medicaid, while some states have the same disregards used in TANF, other states have more generous disregards. See table 6 for the earned income disregards used by various programs.

In calculating applicants' income levels to determine eligibility, some programs also have provisions to deduct certain types of expenses. These deductions include allowances for certain medical, shelter, or child care expenses of applicants. In other programs, no deductions or exclusions may apply. Some states have the same child care deductions in their TANF and Medicaid programs. Housing Choice Voucher and Low-rent Public Housing programs share many but not all of the same rules and regulations.
Both programs have a child care deduction for children under 13 and an adult dependent care deduction for expenses over 3 percent of a family's income. Table 7 illustrates programs' different handling of payments for child care as a deduction from income.

While several programs have specific rules regarding assets and set limits on the amount of certain assets that clients can hold, most programs have no restrictions on assets at all. Assets are generally defined to include cash held in checking and savings accounts, individual retirement accounts, 401(k)s, and other accounts that can be readily transferred into cash. Federal rules and regulations set asset limits for several programs, but states do have discretion in certain cases.

Vehicle asset rules exist in some of the 11 programs and these rules vary, not only across programs, but across states as well. In some programs, a vehicle used to access work may be disregarded; in other programs, a certain portion of the value of the vehicle may be disregarded. For example, in the SSI program, the first $4,500 in current market value is excluded. If it is used for employment or daily activities, used to obtain medical treatment, or has been modified for use by or for transportation of a handicapped person, the vehicle's value is completely excluded. The vehicle asset test for food stamps is set at $4,650. However, a recent change allows states to apply their TANF vehicle asset test for food stamp eligibility and benefit determination, as long as it is at least as generous as the Food Stamp rule. For TANF, many states exclude the entire value of one vehicle; one state excludes the value of all vehicles, and one state has no asset test at all. Among the states that impose a vehicle asset test for TANF, three (Louisiana, Oregon, and Wisconsin) allow up to $10,000 in equity value and one (Wyoming) disregards up to $12,000 in trade-in value. Table 8 displays the general asset limits as well as the vehicle asset rules, if any.

Variations in financial eligibility rules and the multiplicity of agencies that administer programs at the state and local level have contributed to the formation of administrative processes that involve substantial complexity and duplication of staff efforts. In spite of the variations in financial eligibility rules, the states we reviewed have established joint eligibility determination processes for certain programs. While the processes for determining eligibility were coordinated for selected programs, state and local staff reported that the variations and complexities of certain financial rules in these programs created considerable difficulties in determining eligibility and calculating benefit levels. With regard to the other programs in these states, eligibility is determined separately for each program. As a result, applicants must visit multiple offices and repeatedly provide much of the same information to apply for assistance from these other programs. While data generally are not available on the specific costs of determining eligibility and calculating benefit levels for the 11 programs we reviewed, evidence suggests that these costs are substantial.

In all five states we visited, joint application processes have been established for some programs, ranging from three programs in Kentucky to six programs in Nebraska. These processes enable an applicant to complete a single application for multiple programs.
A single caseworker can determine for which programs the client is eligible and then calculate benefit amounts. The caseworker uses one or more automated systems to perform these tasks and generally needs to input application information only once into the automated systems. As shown in table 9, all five states have joint eligibility determination processes for TANF, Food Stamps, and Medicaid. In Nebraska, applicants can complete a joint application for these three programs and Child Care, SCHIP, and LIHEAP. (How these states have used computer systems to establish joint application processes is discussed later in the report.)

Even though the determination of eligibility in these programs has been coordinated, state and local officials told us that variations in these programs' financial eligibility rules, as well as the sheer complexity of the rules in certain programs, create substantial difficulties or added work for caseworkers in determining eligibility and benefit levels. With regard to variations in rules, the aspects most commonly cited as troublesome for caseworkers include differences in rules about household units, income limits, countable and excludable income, and asset limits. For example, differences in the definition of a household unit affect eligibility decisions because family members are treated differently across programs. In the Food Stamp program, a household generally consists of all the persons who purchase food and prepare meals together. In TANF, the family is the household unit (which states define) but generally includes only dependent children, their siblings, and the parents or other caretaker relatives. Consequently, a family member may be a part of a household in one program, treated as a separate family in another program, and ineligible for benefits in another program. If caseworkers do not establish the correct household for a program, errors in eligibility or benefit levels can result. State and local officials believed that establishing a uniform definition of household unit would reduce both the work required of caseworkers and the possibility of errors.

The problems encountered by caseworkers were attributed primarily to the complexity of the financial eligibility rules for certain programs, especially Food Stamps and Medicaid. State and local officials identified the following areas as especially difficult and error-prone in the Food Stamp program: (1) determining household composition, (2) determining whether the value of a household's assets is less than the maximum allowable, and (3) calculating the amount of a household's earned and unearned income and deductible expenses. For example, with regard to the last of these areas, Food Stamp rules require that net monthly income be calculated by allowing up to six possible deductions from gross monthly income. The six allowable deductions are a standard deduction, an earned income deduction, a dependent care deduction, a medical deduction, a child support deduction, and an excess shelter cost deduction. Errors in calculating any one of these complicated deductions have resulted in inaccurate eligibility determinations or food stamp benefit levels. Such errors can lead to overpayments or underpayments to clients and to delays in the processing of applications and disbursement of benefits. Moreover, states with high error rates can receive federal sanctions or be required to take steps to improve program administration.
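The following is a simplified sketch of the kind of net-income calculation described above. It is not the actual Food Stamp methodology: the deduction amounts, caps, percentages, and ordering shown are hypothetical placeholders, and the real rules set in federal law and regulation contain additional conditions that are omitted here. The sketch is intended only to show why a calculation that combines six interacting deductions is easy to get wrong.

```python
# Simplified sketch of a six-deduction net-income calculation. The six deduction
# categories mirror the text; the dollar amounts, rates, and the cap below are
# placeholders, not the actual Food Stamp rules.

def net_monthly_income(gross_earned: float,
                       gross_unearned: float,
                       dependent_care: float,
                       medical: float,
                       child_support: float,
                       shelter: float,
                       standard_deduction: float = 134.0,    # placeholder value
                       earned_income_rate: float = 0.20,     # placeholder: 20% of earnings
                       shelter_cap: float = 300.0) -> float:  # placeholder cap
    income = gross_earned + gross_unearned
    income -= standard_deduction                  # 1. standard deduction
    income -= gross_earned * earned_income_rate   # 2. earned income deduction
    income -= dependent_care                      # 3. dependent care deduction
    income -= medical                             # 4. medical deduction
    income -= child_support                       # 5. child support deduction
    # 6. excess shelter deduction: shelter costs above half of remaining income, capped
    excess_shelter = max(0.0, shelter - income / 2)
    income -= min(excess_shelter, shelter_cap)
    return max(0.0, income)

# A hypothetical household with $1,200 in earnings, $100 in unearned income,
# $150 in dependent care costs, and $600 in shelter costs:
print(round(net_monthly_income(gross_earned=1200, gross_unearned=100,
                               dependent_care=150, medical=0,
                               child_support=0, shelter=600), 2))
```

Each deduction depends on documentation the caseworker must verify, and an error in any one of them changes the final figure, which is why the text above describes this area as especially error-prone.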
Our prior work identified the complexity of Food Stamp eligibility rules as a problem and recommended that USDA develop and analyze options for simplifying the rules for determining eligibility and benefit levels.

State officials also pointed to various complexities associated with determining eligibility for Medicaid. Unlike the TANF and Food Stamp programs, Medicaid eligibility encompasses many categories of individuals. Among the states we visited, the number of eligibility categories varied from approximately 30 in Nebraska to about 100 in California. The rules and methodologies used to determine eligibility vary for many of these categories. Medicaid eligibility rules often include different income thresholds for children of different ages in the same family, and different rules for determining the eligibility of parents. Consequently, multiple tests may be used in determining eligibility for each member of a family, resulting in different outcomes for members of the same family. State and local officials told us that because of the complex financial rules in Medicaid, caseworkers are often frustrated; it is also more difficult for caseworkers to learn their jobs and perform them well.

While joint eligibility processes have been established for some programs in the states we reviewed, eligibility for other programs is generally determined separately. For example, as shown in table 10, public housing agencies administer housing programs and SSA administers SSI in each state. In addition, in general, health departments determine eligibility for WIC and SCHIP; school districts administer School Meals; and community-based organizations administer LIHEAP. In some instances, caseworkers from different programs have been co-located at one location such as a one-stop center, but eligibility for these programs continues to be determined separately. For example, in San Mateo County, California, caseworkers for the Human Services Agency determine eligibility for the Food Stamp, Medicaid, TANF, Child Care, Low-rent Public Housing, and the Housing Choice Voucher programs. While one caseworker can assist clients in applying for TANF, Medicaid, and Food Stamps, these clients must meet separately with different caseworkers to apply for any of the other programs.

The separate eligibility processes in the states we reviewed involve a substantial duplication of administrative functions and impose demands on the time and resources of applicants. For example, a family in these states that wanted to apply for all 11 programs would need to complete anywhere from six to eight applications and visit up to six offices. These applications require applicants to repeatedly provide much of the same information. Our analysis of the application forms in Utah showed that at least 90 percent of the information collected by the applications for each of the following programs—SCHIP, LIHEAP, WIC, and School Meals—was collected on the joint application for TANF, Food Stamps, Medicaid, and Child Care. In fact, no new information was obtained on the SCHIP and LIHEAP applications. These separate applications generally ask for similar information collected on the joint application, such as household composition, employment status, and earned and unearned income.

The annual costs to the federal government for administering means-tested programs are significant and eligibility determination activities make up a substantial portion of these costs.
The federal government provides funds to states and localities for administering most of the means-tested programs and the percentage of the administrative costs borne by the federal government varies by program. The programs vary in the types of activities included in the administrative cost category. For example, in some cases these activities include outreach to potential program participants and service providers, preparation of program plans and budgets, travel, and quality assurance. As shown in table 11, in fiscal year 1998, the estimated federal costs for program administration in the 11 programs totaled over $12.4 billion. This constitutes about 4 percent of total expenditures for benefits in these programs. Federal agencies generally do not require states to report the costs for specific activities related to eligibility determinations. While data are not generally available on the specific costs of determining eligibility and calculating benefit levels for all of the 11 programs we reviewed, evidence suggests that these costs are substantial. In the Food Stamp program, for example, federal costs for eligibility determinations are in excess of $1 billion annually and account for over half of overall administrative costs. Moreover, while the states we visited did not routinely collect data on the costs associated with determining eligibility, we obtained some information on these costs for certain programs in California. For one calendar quarter—the fourth-quarter of 2000—California was able to provide data on expenditures for eligibility determination activities: $183 million in staff costs for Medicaid eligibility determinations, $106 million for food stamps, and $71 million for TANF, according to state officials. These figures include both federal and state costs. Overall, federal, state, and local entities have made limited progress in simplifying or coordinating eligibility determination processes. Several of the states we visited realigned some of the financial rules, yet this approach has been used to a limited extent. Another approach is to take advantage of the capabilities of computer systems. The state and localities we reviewed used computer systems both to establish joint eligibility determination processes for some programs and in a few cases to share data across agencies to coordinate eligibility determination processes. However, state and local officials in all five states said that much more should be done to simplify the financial eligibility rules and eligibility determination processes across programs but cited various obstacles to achieving further progress. In some cases, states have used the flexibility allowed under federal law to simplify or realign their financial eligibility rules. This has occurred in at least three ways. First, some states have used options established in federal law to extend eligibility automatically for one program based on an applicant’s participation in another means-tested program—a provision referred to as “categorical eligibility.” Second, at least one state has attempted to use a federally established option to create a Simplified Food Stamp program that aligns the financial eligibility rules for Food Stamps and TANF. Third, the states we visited have used the flexibility allowed under TANF to change provisions of their TANF financial eligibility rules to realign them with those of other programs. Provisions allowing categorical eligibility have been implemented by states in several programs. 
For example, the 1972 amendments to the Social Security Act gave states the authority to make SSI recipients automatically eligible for Medicaid. States that use this authority pay SSA to incorporate Medicaid-required questions in the SSI application process and establish an automated linkage between the SSI and Medicaid programs. As a result, clients who are approved for SSI are automatically enrolled in Medicaid and are not required to apply for Medicaid benefits. As of February 2001, 32 states—including three states we visited (California, Delaware, and Kentucky)—and the District of Columbia have linked their Medicaid programs with SSI. Federal law also gives states the option of establishing categorical eligibility to LIHEAP applicants who are receiving SSI, TANF, or Food Stamps. However, according to one agency official, while one of the states we visited (Nebraska) uses this option, most states do not. Many of the potential beneficiaries of the LIHEAP program are elderly or others who are not using public assistance programs. To avoid the perception that LIHEAP is a public assistance program, states are required to offer LIHEAP services through an alternative approach; most of the states we visited used community-based organizations to administer the program. School districts may also use direct certification to enroll school-aged children into the School Meals program. Direct certification is a method of eligibility determination that does not require families to complete school meals applications. Instead, school officials use documentation obtained directly from the local or state human services agency that indicates that a household participates in TANF or Food Stamps as the basis for certifying students for free school meals. While all of the states we visited used direct certification as a means to identify and enroll children in the School Meals program, not all school districts or schools within the states used the process. According to a recent USDA study, approximately 35 percent of students approved for free meals are certified through direct certification. The Simplified Food Stamp Program, an option created by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), was another effort to streamline program administration. The simplified program option was to be a vehicle for creating conformity between TANF and the Food Stamp program by merging the programs’ rules into a single set of requirements for individuals receiving both types of assistance. Specifically, the program allows states to establish eligibility and benefit levels on the basis of household size and income, work requirements, and other criteria established under TANF, food stamps, or a combination of both programs—as long as federal costs are not increased in doing so. As of February 2001, while several states had used some features of the Simplified Food Stamp program, only one state had attempted to implement a more extensive version of the program. In our January 1999 report, we found that the two most frequent reasons given by states for not implementing the simplified program were as follows: (1) it would result in increased caseworker burden and (2) the cost neutrality provision restricted the states’ options for simplifying the program. States have also sought to realign their financial eligibility rules by taking advantage of their flexibility under TANF. 
For example, Nebraska changed its TANF (1) asset limits to mirror those for Medicaid, (2) earned income disregards to mirror those for Food Stamps, and (3) client reporting requirements to mirror those for Food Stamps. A Nebraska state official told us that these changes resulted in simplifying the financial rules to ease eligibility determination processes for caseworkers and reduce complexities for clients being served. Delaware broadened eligibility for food stamps by creating categorical eligibility for food stamps through the TANF program. During the application process, clients are asked if they are interested in two specific TANF program components, pregnancy prevention and family planning services. Some clients who may have been determined financially ineligible for food stamps, but indicated an interest in either TANF service, received categorical eligibility for food stamps. However, in the near future, states will not have the authority to more broadly confer categorical eligibility to TANF clients. With recent changes in Food Stamp regulations, effective September 30, 2001, states will be restricted to conferring categorical eligibility to TANF clients with incomes at 200 percent of the federal poverty level or below.

States have considerable flexibility to streamline eligibility processes in their Medicaid for children and SCHIP programs. According to a recent survey, many states have taken steps to streamline and simplify their child health coverage programs. These activities have been driven, to a large extent, by the emphasis on designing easy, family-friendly application systems for new SCHIP programs, coupled with the federal requirement to coordinate these new programs with Medicaid. The survey found that most states have taken steps to simplify the application process for child health coverage. For example, of the 32 states that implemented separate SCHIP programs, 28 states use joint applications for Medicaid and SCHIP. Moreover, 39 states and the District of Columbia have eliminated face-to-face interviews and 10 states allow self-declaration of income in both their Medicaid for children and SCHIP programs. In addition, most states have made efforts to expand income eligibility for children and simplify eligibility rules. For example, between November 1998 and July 2000, the number of states that covered children under age 19 in families with income at or below 200 percent of the federal poverty level increased from 22 to 36. Finally, 41 states and the District of Columbia have dropped the asset test in both their Medicaid for children and SCHIP programs.

States are increasingly relying on computer systems to establish joint processes for determining eligibility or to share data across agencies to facilitate the verification of data needed to determine client eligibility. However, in some cases states have encountered difficulties in expanding joint eligibility processes due to factors such as limitations in the abilities of caseworkers to master the eligibility rules for so many programs. The federal government has played a key role in facilitating the automation of means-tested programs. Three of the federal government's major programs for needy families—TANF, Food Stamps, and Medicaid—have historically relied on state-run automated systems to help determine applicants' eligibility and the amount of assistance each client should receive.
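As context for the automated systems discussed below (for example, N-FOCUS and DCIS II), the following is a minimal sketch of the basic idea behind joint eligibility determination: one application record, entered once, is evaluated against several programs' financial rules in a single pass. The program names, thresholds, and rule structure here are hypothetical placeholders, not the rules of any actual state system.

```python
# Minimal sketch of a joint eligibility check. The rules and thresholds are
# illustrative placeholders; real systems encode far more categories and tests.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Application:
    household_size: int
    gross_monthly_income: float
    countable_assets: float

# Each entry is a simplified financial test for one hypothetical program.
PROGRAM_RULES: Dict[str, Callable[[Application], bool]] = {
    "cash_assistance": lambda a: a.gross_monthly_income <= 700 + 200 * a.household_size
                                 and a.countable_assets <= 2000,
    "food_assistance": lambda a: a.gross_monthly_income <= 900 + 300 * a.household_size,
    "medical_assistance": lambda a: a.gross_monthly_income <= 1100 + 350 * a.household_size,
}

def joint_determination(app: Application) -> Dict[str, bool]:
    """Apply every program's rule to the same application data, entered once."""
    return {program: rule(app) for program, rule in PROGRAM_RULES.items()}

print(joint_determination(Application(household_size=3, gross_monthly_income=1500,
                                      countable_assets=500)))
```

The appeal of this design is that the applicant's data are keyed once; the difficulty, as the text goes on to describe, is that the more programs' rules a single system must carry, the more complex and slower it can become.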
In the past, the Congress authorized several agencies to reimburse states for a significant proportion of their total costs to develop and operate automated eligibility determination systems for these programs. For example, in 1980, the Congress authorized USDA’s Food and Nutrition Service, which oversees the Food Stamp program, to reimburse states for 75 percent of their costs for planning, designing, developing, and installing automated eligibility systems and 50 percent of the costs to operate these systems. To obtain enhanced funding for AFDC automated systems, states had to meet the requirements for a Family Assistance Management Information System (FAMIS), a general system design developed by HHS to improve state administration of the AFDC program. Because eligibility for Medicaid and Food Stamps was linked to eligibility for AFDC, most of the AFDC systems also covered Medicaid and Food Stamps. While the federal government generally no longer provides for enhanced levels of matching funds for systems development, the federal government continues to be a major funder of new computer systems for human services. For example, Texas has budgeted more than $289 million over a 6-year period to develop a new automated system for its human services department that would support the determination of eligibility for approximately 50 programs. The federal share (obtained from HHS and USDA) is projected to be about 51 percent of the total amount. Some of the states we reviewed have developed computer systems that have enabled them to expand the number of programs for which eligibility can be jointly determined. For example, Nebraska developed the Nebraska-Family On-Line Client User System (N-FOCUS), which contains the eligibility rules for TANF, Food Stamps, Medicaid, SCHIP, and Child Care. A separate computer system is used to determine eligibility for LIHEAP. These computer systems enable a single worker to jointly determine eligibility and calculate benefit levels for all of these programs. However, since these computer systems are not completely interfaced, caseworkers must sometimes enter client information more than once. In Delaware, caseworkers use the Delaware Client Information System II (DCIS II) to determine eligibility and benefit levels for TANF, Food Stamps, Medicaid, and SCHIP. Caseworkers use a separate computer system to determine eligibility and benefit levels for Child Care. These computer systems enable a single caseworker to determine eligibility jointly for five programs. In contrast to Nebraska, the different computer systems in Delaware are interfaced, which allows caseworkers to switch between systems and transfer data from one system to another, thereby eliminating the need to re-enter the same information in multiple systems. While their computer systems have resulted in streamlining the eligibility determination processes for clients, no data were available to determine whether these initiatives had generated any administrative cost savings. In addition to supporting joint eligibility determination processes, computer systems are being used to share client data across certain agencies to obtain information needed for determining eligibility. For example, when families in Delaware apply for TANF cash assistance, they are informed on their applications that the state department of health and social services may contact other persons or organizations to obtain the proof necessary in determining eligibility and benefit levels. 
The department of health and social services has automated links to share client information with other state agencies, including the Department of Labor, the Divisions of Public Health and Motor Vehicles, and the child support enforcement agency.

While computer systems can facilitate efforts to coordinate eligibility determination processes, states encountered limitations in system capabilities. For example, Nebraska officials told us that because of the variations in programs and financial rules, "workarounds" had been developed to help caseworkers overcome some systems-related problems. Workarounds are instructions to staff for specific situations in which a worker has to intervene manually in the eligibility determination process. Because Nebraska's N-FOCUS system provided automated support for 26 programs, with the policies and rules for all of these programs built into the system, slow processing times had resulted. In addition, caseworkers were frustrated because the system was inflexible and did not cover all possible client household situations, which sometimes resulted in inaccurate eligibility determinations. Later, when the N-FOCUS automated system was modified by reducing technical complexities, it resulted in quicker processing of client data, more flexibility for caseworkers in using the automated system, and greater responsibilities for caseworkers to know their programs. Caseworkers told us that the changes were helpful improvements. Nonetheless, some caseworkers expressed concern that program complexities, high caseloads, and time constraints made it difficult to learn the programs' varying eligibility criteria and financial rules.

Through discussions with federal, state, and local officials, and a review of literature in the area, we identified a number of obstacles that hinder efforts to make further progress in streamlining or coordinating processes for determining eligibility. In general, these have been longstanding obstacles. Key obstacles to efforts to simplify or realign financial eligibility rules include program cost implications, restrictive federal laws and regulations, the need for collaboration of multiple executive branch agencies and legislative committees, and differences in goals and purposes of some federal programs.

Program cost implications are a major obstacle to efforts to simplify or realign financial eligibility rules. Financial eligibility rules serve to target and limit benefits to those considered in need and also to ration federal and nonfederal dollars. Yet, modifying financial eligibility rules for purposes of simplifying them or making them more consistent across programs can result in changes to the number of people who are eligible for assistance or the benefit levels they receive. For example, if such rule changes have the effect of raising income eligibility limits, more people will be eligible for assistance and program costs will tend to increase. On the other hand, if such rule changes have the effect of lowering income eligibility levels, some people will no longer be eligible for assistance from certain programs. Among means-tested programs, pressures in recent years have generally been to increase coverage, such as by loosening financial eligibility standards.

As we have seen, much of the variation in financial rules derives from federal statutes and regulations. For the 11 programs we reviewed, most program requirements were set in statute.
Agency regulations also provide annual guidance such as income thresholds used to establish eligibility and benefit amounts. State officials believe that because of federal statutes and regulations they had very little flexibility in aligning financial eligibility rules across programs. Such alignment can involve standardizing various types of rules, including those pertaining to income limits, whose income is counted, what income is counted, and deductions from income. While states have aligned some financial rules to simplify their TANF, Food Stamp, and Medicaid rules, most of these changes were modest and officials were frustrated by federal barriers that prevented better aligning the financial rules across programs. For example, officials in two states told us that they believed the federally established income limits in the Food Stamp program (130 percent of federal poverty guidelines) were set too low. They explained that although their states had the flexibility to lower their TANF and Medicaid income limits to match the limit for food stamps, this option was not appealing because it would result in decreased participation in TANF and Medicaid. The division of legislative and executive responsibility, while allowing multiple points of access for members of Congress, interest groups, and the affected public, can be an obstacle to states’ ability to pursue system integration. Making systematic changes to programs’ financial eligibility rules can be very difficult, because it would generally require the collaborative efforts of multiple congressional committees (in the case of laws) or multiple federal agencies (in the case of regulations). Several reviews of the legislative and executive governance mechanisms that affect program direction at the federal level have been conducted in recent years. One study found that primary responsibility for most of the approximately 80 major programs that assist low-income families and individuals resides in 19 congressional committees and 33 subcommittees. For the 11 programs in our review, we identified 9 committees and 6 appropriations subcommittees with legislative responsibility for the programs. In addition, the 11 programs spanned 3 executive branch departments and 1 independent agency. The different purposes of the various means-tested programs and the lack of overarching goals also create a barrier to administrative streamlining. For example, state and local officials frequently cited the Food Stamp program rules as overly complex and rigid, with too much emphasis on quality control. The officials were concerned that quality control in the program focused, to a great extent, on detailed financial matters such as small amounts of overpayments and underpayments, timeliness of changes in income, and recalculation of benefit levels. The officials believe that while a focus on financial integrity through process and payment accuracy was important, too much attention on quality control has contributed to increased program complexities, decreased program participation, and high administrative costs. In comparison, the states receive block grants from the federal government to operate TANF programs and have significant autonomy in these programs. In the states we visited, officials told us that the flexibility in TANF provided them the opportunity to develop more effective cash assistance programs than existed prior to welfare reform. 
The officials believed that having greater flexibility in other means-tested programs such as Food Stamps would further their efforts to streamline eligibility determination processes. Over a period of more than 60 years, a large number of means-tested programs have been established to meet diverse goals and serve the needs of different populations of low-income families and individuals. However, when viewed from a service provider’s or client’s perspective, the existing processes for determining eligibility and calculating benefit levels in the 11 means-tested programs we reviewed are often cumbersome to administer and burdensome for families who apply for assistance. The variations and complexity of these programs’ financial eligibility rules, as well as the fact that numerous agencies administer the programs, have contributed to the formation of these cumbersome processes. There has been a long history of calls for the need to simplify eligibility rules and processes for means-tested programs. While there have been some efforts to make such improvements, little progress has been achieved overall. This limited progress reflects the broad scope and complex intricacy of the obstacles that confront any efforts to make large-scale improvements in this area, including the difficulty of grappling with the cost implications of changing financial eligibility rules. For example, the Simplified Food Stamp program was designed to allow states to align the TANF and Food Stamp programs’ rules but few states have implemented this option. Most states have not used the Simplified Food Stamp program, in large part, because they viewed the program’s requirement for cost neutrality within any fiscal year as being too restrictive. Many federal, state, and local officials recognize that additional efforts to simplify or coordinate eligibility determination processes are needed. However, a lack of information on the likely consequences of such efforts hinders further steps to improve the administration of means-tested federal programs. While many of these officials believe that administrative cost savings could be achieved from improved coordination or simplification, data are not available to evaluate the potential savings from such actions. Given the paucity of data on the costs of determining eligibility and calculating benefit levels in the existing system, it is difficult to quantify the costs of the variations and complexity of financial eligibility rules. Yet these costs appear to be substantial and even increases in efficiencies of the processes of 10 to 20 percent could potentially save billions of dollars. Moreover, the simplification of eligibility rules and processes offers the prospect of reducing burdens on caseworkers and applicants. On the other hand, simplifying financial eligibility rules could potentially result in increased program costs. To facilitate further progress in this area, information is needed about the effects of changes in financial eligibility rules and procedures on program and administrative costs, and access to programs by families and individuals. This information could be instrumental in designing a system for administering means-tested programs that is less costly to taxpayers, less onerous for workers, less frustrating for applicants, and that potentially reduces improper payments in federal programs. 
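To give a rough sense of the order of magnitude behind the statement that efficiency gains of 10 to 20 percent could save billions of dollars, the arithmetic below applies those percentages to the roughly $12.4 billion in estimated federal administrative costs for the 11 programs in fiscal year 1998 (see table 11). This is an illustrative upper-bound calculation, not an estimate of achievable savings, since eligibility determination accounts for only a portion of administrative costs.

```latex
0.10 \times \$12.4\text{ billion} \approx \$1.2\text{ billion},
\qquad
0.20 \times \$12.4\text{ billion} \approx \$2.5\text{ billion}.
```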
The Congress should consider authorizing state and local demonstration projects designed to simplify and coordinate eligibility determination processes for means-tested federal programs. Such projects would provide states and localities with opportunities to test changes designed to simplify or align the financial eligibility rules for programs, increase the number of programs for which eligibility can be determined jointly, and expand data sharing across agencies to facilitate eligibility determinations. Once authorized, states and/or localities could submit proposals for demonstration projects, and relevant federal agencies, working in a coordinated manner, could review them, suggest modifications as needed, and make final approval decisions. Demonstration projects would include waivers of federal statutes and regulations as needed and deemed appropriate.

While our review covered 11 means-tested federal programs, we are not suggesting that the demonstration projects must include all of these programs or exclude others. Consistent with a focus on citizen-centered government, states should be given the opportunity to try various approaches aimed at streamlining or simplifying eligibility determination processes that consider all feasible programs. Projects must be given sufficient time to be fully implemented and must include an evaluation component. Cost neutrality would be most desirable for federal approval of these projects. However, projects should not be rejected solely because they are unable to guarantee cost neutrality over the short run. It would be expected that, over a period of time, state and federal efforts to streamline eligibility determination processes would create administrative cost savings that could help offset any increased program costs.

The Office of Management and Budget and the Departments of Agriculture, Health and Human Services, and Housing and Urban Development provided written comments on a draft of this report. These comments are presented and evaluated below and are reprinted in appendix II through appendix V. The agencies generally agreed with the report's findings.

The draft version of the report contained a recommendation to the Director of OMB to develop legislative proposals that would authorize state and local demonstration projects designed to simplify and coordinate eligibility determination processes for means-tested federal programs. In its comments, OMB indicated its support for program simplification but did not indicate that it would implement the recommendation. OMB agreed with our assessment of the longstanding obstacles to program simplification. However, OMB said that legislative authority for demonstration projects may not be necessary for states to pursue many simplification strategies because many programs, such as Food Stamps, already have significant waiver authority, and many states have not fully utilized the flexibility they have in programs such as TANF, Medicaid, and SCHIP. We agree that states have substantial flexibility in some programs; our report provides examples of how some states have used this flexibility to coordinate financial rules or processes. Our proposal for the authorization of demonstration projects is motivated primarily by the need to obtain more detailed and systematic information about the effects of various simplification strategies on key factors such as program and administrative costs. These demonstration projects would provide states with whatever additional waiver authority is needed and appropriate.
OMB acknowledged that demonstration projects could be helpful in achieving sweeping standardization across programs, particularly where current waiver authority in certain programs, such as HUD's rental assistance programs, is not designed to achieve such standardization. OMB added that program reauthorization also presents an opportunity to propose changes to program rules that may more immediately and effectively address simplification. We agree that program reauthorization presents a good opportunity to address simplification, especially on a program-specific basis. However, demonstration projects would provide the ability to make comprehensive changes across multiple programs to coordinate eligibility rules and processes and to obtain information about the effects of these changes. OMB also expressed concern about the implications of program simplification for program costs and argued that simplification should not be a license to expand eligibility and increase spending beyond current levels. OMB questioned whether we overestimate the administrative cost savings that would result from program simplification and thereby underestimate the significance of the program cost implications. We agree that there is considerable uncertainty about the cost implications of program simplification. We believe that demonstration projects could provide useful empirical evidence about the potential for administrative cost savings and the ability to limit program cost increases. Finally, OMB maintained that if demonstration projects are authorized, the review of state proposals for such projects would most appropriately be led by a federal agency such as HHS, in collaboration with other federal agencies, rather than by OMB as we had originally recommended. We believe that whichever federal agency or agencies are designated as the lead, the critical factor would be to establish a coordinated federal review process that facilitates efficient state and local interactions with the federal government.

USDA commented that the report has made a noteworthy effort to compare the key variations in financial eligibility rules among the 11 federal programs reviewed. With regard to food stamps, USDA stated that making legislative changes during reauthorization would be a better approach to streamlining and simplifying Food Stamp program rules than mounting a series of demonstration projects. We agree that reauthorization presents an opportunity for simplifying Food Stamp rules and have recommended this in an earlier GAO report. USDA also provided additional information about the use of direct certification in the School Meals program and categorical eligibility for WIC, which we added to the report.

HHS said in its comments that this is a very important report that verifies the lack of standardization and the complexity of applying for means-tested programs. However, HHS added that, in recommending demonstration projects, the report does not offer any suggestions on how to build upon past efforts or to make this new initiative more productive than those efforts. We agree that the report does not address in detail how such demonstration projects should be designed and implemented. We believe that these issues would be best addressed with input from diverse stakeholders, especially the various federal and state agencies that have longstanding experience administering and overseeing these means-tested programs. 
HHS noted that while considerable progress has been made in developing joint application processes, there has recently been a recognition that this model has limitations. HHS explained that increasing numbers of Medicaid-eligible persons come from working families not eligible for other programs. HHS added that it is important to strive to effectively reach and serve both this population and the population eligible for multiple programs, so it continues to work on both joint and single-purpose application processes. We agree with HHS that both types of application processes have appropriate uses. HHS also said that the report did not acknowledge sufficiently the progress in simplifying eligibility determination that has been made in SCHIP. In response, we added a section to provide information on state efforts to streamline and simplify administrative processes for SCHIP and Medicaid programs for children. In addition, HHS questioned whether our review of Medicaid, which focused on TANF-related Medicaid groups and policies, should also have included SSI-related groups and policies. Because the primary focus of our review was on means-tested programs commonly used by low-income families and children, the report does not include a discussion of SSI-related groups and policies. Finally, HHS commented that states have significant flexibility to expand and simplify eligibility for Medicaid to coordinate with other programs that serve low-income families.

In its comments, HUD agreed that simplification of the financial eligibility and benefit rules for means-tested federal programs is needed and said that the department is interested in exploring participation in a demonstration program in this area. HUD also noted that it has an effort underway—the Rental Housing Income Integrity Initiative—that has a major goal of simplifying cumbersome income and rent policies in public and assisted housing programs. HUD also provided estimates of administrative costs for housing assistance programs and the percentage of TANF recipients receiving housing assistance; we revised the report to incorporate these estimates.

We also received technical comments on a draft of this report from the Departments of Agriculture, Health and Human Services, and Housing and Urban Development, the Social Security Administration, and three of the five states discussed in the report—Delaware, Nebraska, and Utah—and we incorporated these comments where appropriate.

As agreed to with your staff, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Subcommittee on Human Resources, House Committee on Ways and Means; the Director of the Office of Management and Budget; the Secretary of Health and Human Services; the Secretary of Agriculture; the Secretary of Housing and Urban Development; the Acting Commissioner of Social Security; other interested congressional committees; and interested parties. Copies will be made available to others upon request. The report is also available on GAO's home page at http://www.gao.gov. Please contact me on (202) 512-7215 if you have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix VI.

In conducting our review, we obtained and analyzed information from a variety of federal, state, and local sources. 
At the federal level, we interviewed officials at three departments (Agriculture, Health and Human Services, and Housing and Urban Development) and two agencies (Centers for Medicare and Medicaid Services and the Social Security Administration). We visited five states and generally met with officials of state, local, and community-based organizations in two cities in each state—one urban location and one rural community. Our fieldwork was performed in three counties (Contra Costa, San Mateo, and Placer) in California; Georgetown and Wilmington, Delaware; Louisville and Barren County, Kentucky; Omaha and Crete, Nebraska; and Salt Lake City and Logan, Utah. In selecting the states for our fieldwork, we sought to include states (1) that had undertaken welfare simplification or service integration initiatives, (2) with combined welfare and workforce agencies, (3) that had enhanced automated systems for eligibility determinations and benefit level calculations, (4) with state-supervised and county-administered welfare systems, and (5) that were geographically diverse.

To obtain data on the extent to which Temporary Assistance for Needy Families (TANF) families participate in multiple means-tested federal programs, we reviewed and analyzed the results of two national Bureau of the Census surveys:

The March 2000 supplement of the Current Population Survey (CPS)—This monthly survey of about 47,000 households is designed to be a nationally representative sample of the country and provides information on TANF families' participation in multiple federal programs. The total response rate for the March 2000 CPS supplement was about 86 percent.

The Survey of Income and Program Participation (SIPP)—A nationally representative sample of approximately 20,000 households, SIPP consists of information on social and demographic characteristics for each person in the household. SIPP contains other household data in areas such as labor force activity, income, assets and liabilities, postsecondary education, private health insurance coverage, pension plan coverage, and participation in selected means-tested federal programs.

To determine the extent and sources of variation in financial eligibility rules among the 11 programs, we reviewed relevant federal statutes and regulations, as well as the 2000 Green Book (Committee on Ways and Means, U.S. House of Representatives) and the 2000 Catalog of Federal Domestic Assistance (published by the Office of Management and Budget and the General Services Administration). We also reviewed information contained in CRS' December 1999 report, Cash and Noncash Benefits for Persons With Limited Income: Eligibility Rules, Recipient and Expenditure Data, FY 1996-FY 1998. We discussed the financial eligibility rules with federal program officials and reviewed relevant documents such as program handbooks and policy guidance. In addition, during our site visits we met with state officials, local office managers, and eligibility workers to obtain their views on variations in financial eligibility rules.

To obtain information about how the variation in financial eligibility rules and other factors affects the administrative processes for determining eligibility, we discussed these issues with state and local eligibility workers and supervisors. During these meetings, staff assisted us in identifying rule differences and the extent to which these variations affected the eligibility determination processes. 
We also reviewed state-prepared documents such as memorandums, discussion papers, and reports. We met with experts in the areas of means-tested federal programs and eligibility simplification and with advocacy groups to obtain their views on how the variations in financial rules affected clients and their efforts to access benefits and services. We also conducted a content analysis of the multiple applications used by different programs in Utah to determine the amount of overlap in questions.

To determine how federal, state, and local agencies have sought to streamline or coordinate eligibility determination processes, we met with federal program officials to discuss their efforts to simplify eligibility and work more closely with other departments and agencies. In addition, we reviewed statutes, program guidance, and other documents that identified actions to streamline and coordinate at the federal level. As part of our fieldwork, we met with state and local officials to discuss their efforts to simplify eligibility determination processes. We discussed some of these streamlining efforts with frontline workers, including eligibility workers and supervisors. We also reviewed documents obtained at these meetings, such as reorganization strategies and other state and local planning documents.

To obtain estimates of federal costs for program administration, we used administrative cost data from federal agency sources for programs where such data were available: TANF, Food Stamps, Medicaid, School Meals, Housing Choice Voucher, Low-rent Public Housing, and SSI. For the other programs, we developed estimates of federal administrative costs as follows. For the WIC program, overall administrative cost data available from the agency include nutrition education and assessment costs as part of the administrative cost category. To develop our estimate, we computed and removed the amount (two-thirds of the costs) associated with nutrition assessment activities and attributed the remainder to general administration. For the Child Care program, eligibility determination data are gathered separately from administrative cost data by the states. To make a fiscal year 1998 estimate, we developed separate estimates for eligibility determination costs and other administrative costs and added the components together. For the LIHEAP and SCHIP programs, we applied the maximum allowable administrative cost percentage (10 percent) to the separate appropriations for 1998 to which administrative costs could be applied.
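The three estimation approaches just described reduce to simple arithmetic. The short sketch below restates them in Python for clarity; the function names and all dollar amounts are illustrative placeholders rather than actual fiscal year 1998 figures.

```python
# Minimal sketch of the three estimation approaches described above.
# All dollar figures in the example calls are hypothetical placeholders,
# not actual fiscal year 1998 program amounts.

def wic_admin_estimate(reported_admin_costs: float) -> float:
    """WIC: remove the two-thirds of reported administrative costs
    attributed to nutrition assessment activities and keep the
    remaining one-third as general administration."""
    return reported_admin_costs / 3

def child_care_admin_estimate(eligibility_costs: float,
                              other_admin_costs: float) -> float:
    """Child Care: add separately estimated eligibility determination
    and other administrative cost components."""
    return eligibility_costs + other_admin_costs

def capped_admin_estimate(appropriation: float, cap: float = 0.10) -> float:
    """LIHEAP and SCHIP: apply the maximum allowable administrative
    cost percentage (10 percent) to the relevant appropriation."""
    return appropriation * cap

if __name__ == "__main__":
    print(wic_admin_estimate(900.0))               # one-third retained: 300.0
    print(child_care_admin_estimate(120.0, 80.0))  # components summed: 200.0
    print(capped_admin_estimate(1_000.0))          # 10 percent of the appropriation
```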
Our work was done between September 2000 and August 2001 in accordance with generally accepted government auditing standards.

The following people also made important contributions to this report: George Erhart; Sheila Nicholson; Mikki Holmes; Daniel Schwimer; and Barbara Alsip.

Child Care: States Increased Spending on Low-Income Families (GAO-01-293, Feb. 2, 2001).
Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity (GAO-01-272, Jan. 19, 2001).
Food Assistance: Activities and Use of Nonprogram Resources at Six WIC Agencies (GAO/RCED-00-202, Sept. 29, 2000).
Benefit and Loan Programs: Improved Data Sharing Could Enhance Program Integrity (GAO/HEHS-00-119, Sept. 13, 2000).
Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort (GAO/HEHS-00-48, Apr. 27, 2000).
Welfare Reform: States' Experiences in Providing Employment Assistance to TANF Clients (GAO/HEHS-99-22, Feb. 26, 1999).
Welfare Reform: Few States Are Likely to Use the Simplified Food Stamp Program (GAO/RCED-99-43, Jan. 29, 1999).
Medicaid: Early Implications of Welfare Reform for Beneficiaries and States (GAO/HEHS-98-62, Feb. 24, 1998).
Welfare Programs: Opportunities to Consolidate and Increase Program Efficiencies (GAO/HEHS-95-139, May 31, 1995).
Means-Tested Programs: An Overview, Problems, and Issues (GAO/T-HEHS-95-76, Feb. 7, 1995).
Welfare Simplification: States' Views on Coordinating Services for Low-Income Families (GAO/HRD-87-110FS, Jul. 29, 1987).
Welfare Simplification: Thirty-Two States' Views on Coordinating Services for Low-Income Families (GAO/HRD-87-6FS, Oct. 30, 1986).
Welfare Simplification: Projects to Coordinate Services for Low-Income Families (GAO/HRD-86-124FS, Aug. 29, 1986).
Needs-Based Programs: Eligibility and Benefit Factors (GAO/HRD-86-107FS, Jul. 9, 1986).
About 80 means-tested federal programs assisted low-income people in 1998. GAO reviewed 11 programs that assisted families and individuals with income, food, medical assistance, and housing. Despite substantial overlap in the populations they serve, the 11 programs varied significantly in their financial eligibility rules. At the most basic level, the dollar levels of the income limits--the maximum amounts of income an applicant can have and still be eligible for a program--vary across programs. Beyond this, differences exist in the income rules, such as whose income and what types of income are counted. The variations and complexity of the federal financial eligibility rules, along with other factors, have led to processes that are often duplicative and cumbersome for both caseworkers and applicants. Overall, federal, state, and local entities have made little progress in simplifying or coordinating eligibility determination processes. States realigned some of the financial rules, but only to a limited extent. Another approach uses computer systems to establish joint eligibility determination processes that a single caseworker can administer. Efforts to simplify or better coordinate eligibility determination processes confront many obstacles, including restrictive federal program statutes and regulations. In addition, program costs may rise if financial eligibility rules are changed.
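To illustrate why such differences complicate joint eligibility determination, the sketch below applies two invented sets of income-counting rules to the same family. The programs, rules, income limits, and dollar amounts are hypothetical and are not drawn from any actual program.

```python
# Hypothetical example: two invented programs count the same family's
# income differently, so a caseworker must compute countable income
# separately for each program even when the underlying facts are identical.

FAMILY = {
    "earned_income": 1100,   # monthly wages
    "child_support": 300,    # monthly child support received
    "household_size": 3,
}

def countable_income_program_a(family: dict) -> float:
    """Program A (hypothetical): counts wages after a 20 percent
    earned-income deduction and disregards child support."""
    return family["earned_income"] * 0.80

def countable_income_program_b(family: dict) -> float:
    """Program B (hypothetical): counts all wages plus child support."""
    return family["earned_income"] + family["child_support"]

INCOME_LIMITS = {"A": 1_000, "B": 1_300}  # hypothetical monthly limits

if __name__ == "__main__":
    a = countable_income_program_a(FAMILY)  # 880 -> within Program A's limit
    b = countable_income_program_b(FAMILY)  # 1400 -> exceeds Program B's limit
    print("Eligible for Program A:", a <= INCOME_LIMITS["A"])
    print("Eligible for Program B:", b <= INCOME_LIMITS["B"])
```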
Response to Recommendations

Based on information provided by USAID and USDA and our own analysis, we determined that recommendation 1 has not been implemented. Although the Interagency Policy Committee has met regularly to develop a governmentwide food security strategy, the group has yet to publish its strategy. However, the Interagency Policy Committee has established an objective to help rural farmers feed themselves and to help countries establish sustainable agriculture systems by (1) investing in country-led food security plans, (2) coordinating stakeholders strategically, (3) supporting multilateral mechanisms, (4) ensuring a sustained commitment, and (5) focusing on a comprehensive approach to agriculture productivity. The Interagency Policy Committee has also identified seven principles for its food security strategy, including the following: stimulate postharvest and private sector growth, support women and families, maintain the natural resource base, expand knowledge and training, increase trade flows, and support an enabling policy environment.

Based on information provided by USAID and USDA and our own analysis, we determined that recommendation 2 has not been implemented. USAID officials stated that they plan to update Congress on progress toward implementation of a governmentwide food security strategy as part of the agency's 2008 Initiative to End Hunger in Africa report; the full version of this report was not publicly available as of September 2009. A summary report provided by USAID identifies three food security pillars—(1) immediate humanitarian response, (2) urgent measures to address causes of the food crisis, and (3) related international policies and opportunities—used to respond to the 2007 and 2008 global food crisis. However, the governmentwide strategy has not yet been finalized, and it is premature to report on its implementation.

Oversight Questions

1. What coordination and integration mechanisms has the U.S. government established to enhance the efficiency and effectiveness of U.S. international food assistance?
2. What is the nature and scope of current U.S. global food security activities? What agencies, programs, and funding levels are involved? How are NGOs, international organizations, foreign governments, and host governments involved in these efforts?
3. What progress have U.S. agencies made in developing an integrated governmentwide global food security strategy? What are the goals and timeframe for the implementation of the strategy?
4. What key criteria has the U.S. government developed to assess the implementation of the global food security strategy? Does the U.S. government plan to report annually to Congress on the results of the strategy?
The number of individuals experiencing hunger has grown to more than 1 billion worldwide in 2009, up from a record 963 million in 2008, according to the United Nations (UN) Food and Agriculture Organization (FAO). FAO attributes this upsurge in hunger to the global economic crisis, which followed rising food and fuel prices from 2006 to 2008. However, even before these crises, the number of undernourished people had been increasing annually in sub-Saharan Africa--where some of the world's food needs are greatest--underscoring the need to improve international food assistance. International food assistance includes both emergency food aid and long-term food security programs. Due to rising food prices, increasing conflicts, poverty, and natural disasters, in 2007, a record 47 countries--27 located in Africa--faced food crises requiring emergency assistance, according to FAO. To address these emergencies, countries provide food aid as part of a humanitarian response to address acute hunger through either in-kind donations of food or cash donations. In-kind food aid is food procured and delivered to vulnerable populations, while cash donations are given to implementing organizations, such as the UN World Food Program (WFP), to procure food in local and regional markets, also referred to as local and regional procurement (LRP). International food assistance also includes a development-focused response to address long-term chronic hunger through food security programs. While food aid has helped to address the immediate nutritional requirements of some vulnerable people in the short term, it has not addressed the underlying causes of persistent food insecurity. Our objectives were to (1) update U.S. agencies' responses to GAO's previous international food assistance recommendations and (2) identify potential oversight questions for congressional consideration. Since 1996, we have published 18 products that provided insight, many with recommendations, on international food assistance. Specifically, in the past 3 years, we issued four reports with 16 recommendations to improve the efficiency of U.S. food aid and food security programs. Over the course of our work, we also identified improvements that were needed, as well as obstacles that affect the success of program planning and implementation. As a result, we have identified five issues for Congressional consideration to ensure more efficient and effective international food assistance: (1) coordination and integration, (2) needs assessments and market information, (3) transportation and logistics, (4) nutrition and food quality control, and (5) monitoring and evaluation.
To ensure that its diplomatic corps can communicate in the languages of host countries, State requires that FSOs assigned to LDPs at overseas posts meet minimum specified competency levels for both speaking and reading. As of September 30, 2016, State had 4,461 overseas positions worldwide that required language proficiency and 872 overseas positions where proficiency was preferred but not required, known as language-preferred positions. State categorizes foreign languages according to the time required for a native English speaker to learn them:

Category I—World languages (e.g., French and Spanish)
Category II—Difficult world languages (e.g., German)
Category III—Hard languages (e.g., Russian and Urdu)
Category IV—Super-hard languages (e.g., Arabic and Chinese)

According to State documents, the time it takes to achieve general proficiency depends on the difficulty of the language. World languages require 24 to 30 weeks, difficult world languages require 36 weeks, hard languages require 44 weeks, and super-hard languages require 88 weeks to achieve general proficiency.

State groups countries of the world into areas of responsibility under six geographic bureaus:

Bureau of African Affairs (AF)
Bureau of East Asian and Pacific Affairs (EAP)
Bureau of European and Eurasian Affairs (EUR)
Bureau of Near Eastern Affairs (NEA)
Bureau of South and Central Asian Affairs (SCA)
Bureau of Western Hemisphere Affairs (WHA)

The number of overseas LDPs varies significantly by bureau, with the highest number (1,491) at WHA posts and the lowest (233) at SCA posts. Most LDPs requiring category I and II languages are at AF, EUR, and WHA posts, while most LDPs requiring category III and IV languages are in EAP, EUR, NEA, and SCA. Three of the four super-hard languages (Chinese, Korean, and Japanese) are spoken in the countries in EAP's area of responsibility; the remaining super-hard language (Arabic) is widely spoken throughout the NEA area. The percentages of Foreign Service positions that are LDPs also vary by geographic bureau, with the highest percentage under WHA. Figure 1 shows the geographic bureaus' areas of responsibility and numbers of LDPs relative to the numbers of Foreign Service positions.

State uses the foreign language proficiency scale established by the federal Interagency Language Roundtable to rank an individual's language skills. The scale has six levels, from 0 to 5—with 0 indicating no practical capability in the language and 5 indicating highly articulate, well-educated, native-speaker proficiency—to identify a language learner's ability to speak, read, listen, and write in another language. General professional proficiency in speaking and reading—3/3 (speaking/reading)—is the minimum level required for most Foreign Service generalist LDPs. According to State's fiscal years 2016-2020 Five Year Workforce and Leadership Succession Plan, this level of proficiency provides officers with the ability to participate in most formal and informal discussions of practical, social, and professional topics. Some entry-level Foreign Service generalist and most Foreign Service specialist LDPs are designated at or below the 2/2 level. Table 1 shows the language skill requirements for each proficiency level.
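The training-time figures cited above can be summarized compactly. The sketch below restates them; the category assignments shown are limited to the example languages named above, and the data structure is illustrative rather than an official State mapping.

```python
# Weeks of full-time study for a native English speaker to reach general
# professional proficiency (3/3), restated from the figures above.
# Category assignments shown are only the examples cited in the text.

WEEKS_TO_GENERAL_PROFICIENCY = {
    "I":   (24, 30),  # world languages, e.g., French, Spanish
    "II":  (36, 36),  # difficult world languages, e.g., German
    "III": (44, 44),  # hard languages, e.g., Russian, Urdu
    "IV":  (88, 88),  # super-hard languages, e.g., Arabic, Chinese
}

def training_weeks(category: str) -> str:
    """Return the typical training duration for a language category."""
    low, high = WEEKS_TO_GENERAL_PROFICIENCY[category]
    return f"{low} weeks" if low == high else f"{low} to {high} weeks"

if __name__ == "__main__":
    for cat in ("I", "II", "III", "IV"):
        print(f"Category {cat}: about {training_weeks(cat)}")
```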
The difference between the second and third proficiency levels—the ability to interact effectively with native speakers—is significant in terms of training costs and productivity for certain languages. For example, State provides about 44 weeks of training to bring a new speaker of a language classified as super-hard, such as Arabic, up to the second level. Moving to a level-3 proficiency usually requires another 44 weeks of training, which is generally conducted at field schools overseas. In contrast, State provides 24 weeks of training to bring a new speaker of most "world" languages to a level 3.

State primarily uses language training—typically at the FSI—to meet its foreign language requirements. FSI's School of Language Studies offers training in about 70 languages. State also offers full-time advanced training in super-hard languages at a few overseas locations, including Beijing, China; Seoul, South Korea; and Taipei, Taiwan. In addition, overseas posts offer part-time language training through post language programs, and FSI offers distance learning courses to officers overseas.

Since October 2008, State has reduced the percentage of LDPs staffed by FSOs who do not meet language requirements by 8 percentage points, from 31 to 23 percent. However, State continues to face notable shortfalls in meeting its foreign language requirements for overseas LDPs, and these shortfalls may adversely affect diplomatic operations. State officials we met with in Washington, D.C., and at overseas posts identified various challenges that may affect State's ability to address its foreign language shortfalls. Additionally, according to FSOs we interviewed, both language proficiency and gaps in proficiency have, in some cases, affected State's ability to, for example, properly adjudicate visa applications, effectively communicate with foreign audiences, and perform other critical diplomatic duties.

The percentage of overseas LDPs staffed by FSOs who did not meet the positions' language proficiency requirements has decreased since October 2008 (see table 2). As of September 30, 2016, 23 percent of overseas LDPs were staffed by FSOs who did not meet both the speaking and reading proficiency requirements for their positions; according to State officials, State granted language waivers to most of these FSOs. In contrast, as of October 2008, 31 percent of FSOs in overseas LDPs did not meet these requirements. However, the proficiency shortfall widens when unstaffed positions are included. As of September 2016, 69 percent (3,077 of 4,461) of overseas LDPs were staffed by FSOs who met both the speaking and the reading requirements, while 31 percent (1,384 of 4,461) of LDPs either were staffed by FSOs who did not meet the positions' requirements or remained vacant. State officials noted that, among other factors, the overall increase of LDPs from 2008 through 2016 contributed to the slow progress in improving the rate of LDPs filled by FSOs who meet the positions' requirements. State officials also noted that many of the new LDPs require proficiency in hard or super-hard languages, which entails 44 to 88 weeks of training. The officials further stated that, given the absence of an existing cadre of foreign-language speakers who can be staffed to LDPs, many positions may go unstaffed.
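The 23 percent and 31 percent figures rest on different denominators—staffed LDPs only versus all LDPs, including vacancies. The sketch below makes that distinction explicit using the September 2016 totals cited above; because the split of the 1,384 positions between unqualified incumbents and vacancies is not reported here, that split is shown as a hypothetical placeholder.

```python
# Two ways of measuring the proficiency gap, using the September 2016
# totals cited above. The split of the 1,384 positions between
# unqualified incumbents and vacancies is a hypothetical placeholder.

TOTAL_LDPS = 4_461
MET_BOTH_REQUIREMENTS = 3_077
NOT_MET_OR_VACANT = TOTAL_LDPS - MET_BOTH_REQUIREMENTS  # 1,384

VACANT = 400                                    # hypothetical placeholder
UNQUALIFIED_STAFFED = NOT_MET_OR_VACANT - VACANT

# Gap measured against all LDPs (vacancies count against the total).
gap_all_positions = NOT_MET_OR_VACANT / TOTAL_LDPS

# Gap measured against staffed LDPs only (the basis of the 23 percent figure).
gap_staffed_only = UNQUALIFIED_STAFFED / (TOTAL_LDPS - VACANT)

print(f"{gap_all_positions:.0%} of all LDPs")     # 31%
print(f"{gap_staffed_only:.0%} of staffed LDPs")  # depends on the assumed split
```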
While language proficiency gaps vary among posts, State faces some of its largest proficiency gaps in several priority languages. According to State M/DGHR officials, State designates languages as priority for various reasons. For example, Mandarin Chinese, Dari, Farsi, Pashto, Hindi, Urdu, Korean, and Arabic—languages spoken in China, Iran, India, Korea, and throughout the Near East—are priority languages. State defines priority languages as languages that are of critical importance to U.S. foreign policy, are experiencing severe shortages or staffing gaps, or present specific challenges in recruiting and training. In addition, officials from State's Bureau of Consular Affairs identified Mandarin Chinese and Spanish, among others, as priority languages, citing the need for language-qualified entry-level professionals to provide consular services in countries where these languages are spoken as well as reduced entry-level hiring and resultant staffing gaps in LDPs worldwide.

As figure 2 shows, as of September 2016, the largest proficiency gaps for priority languages were in Arabic, Dari, Farsi, and Urdu. According to State data, 36 percent of LDPs requiring Arabic (106 of 291 LDPs), 53 percent of LDPs requiring Dari (9 of 17 LDPs), 36 percent of LDPs requiring Farsi (9 of 26 LDPs), and 44 percent of LDPs requiring Urdu (12 of 27 LDPs) were staffed by FSOs who did not meet the proficiency requirements for the positions. State continues to face proficiency gaps worldwide, most notably in priority languages categorized as hard or super-hard. Some of the most significant gaps are found in NEA, AF, and SCA (see fig. 3). In NEA, 144 of 392 LDPs (37 percent) were staffed by officers who did not meet the positions' proficiency requirements; 88 LDPs were vacant. In AF, 118 of 349 LDPs (34 percent) were staffed by officers who did not meet the positions' proficiency requirements; 38 LDPs were vacant. In SCA, 66 of 210 LDPs (31 percent) were staffed by officers who did not meet the positions' proficiency requirements; 23 LDPs were vacant.

State officials we interviewed said that several challenges, including some that are unrelated to language proficiency, may affect the department's ability to staff LDPs with officers who meet both the speaking and reading requirements for the positions. According to these officials, language proficiency shortfalls are partially attributable to the following factors:

Long training periods. Training to achieve general proficiency in hard and super-hard languages can take up to 2 years. According to State officials, this may result in a position going unfilled, given the absence of an existing cadre of foreign-language speakers who can be staffed to LDPs. FSOs we spoke with stated that the length of time it takes to achieve a 3/3—the minimum standard for general proficiency—in a hard or super-hard language may discourage some officers from applying for positions that require proficiency in these languages. According to State, for an FSO with no previous language experience, achieving a 3/3 generally takes 44 weeks of study for a hard language and 88 weeks for a super-hard language.

Heritage-speaker restrictions. Because of security concerns, in certain instances State does not allow Chinese or Russian "heritage speakers" to serve in their ancestral countries if they have relatives there. In addition, according to State officials, Egypt does not grant diplomatic status to Americans with dual citizenship or who have a claim to Egyptian citizenship, which limits State's ability to staff LDPs in Egypt with FSOs who speak Arabic. According to a State official, heritage speakers can leverage their native level of proficiency to better understand subtle language cues that may be missed by non-native speakers. 
For example, State officials in China and Korea stated that effectively monitoring social media requires a near-native speaker who can understand language tone and nuance.

Restrictions on tour frequency and length. According to State officials, State does not encourage FSOs to serve consecutive tours in the same country and generally limits each tour to a maximum of 2 or 3 years. In a country that we visited, an official told us that State's current system actively discourages FSOs from serving multiple tours in the same country because of concerns that the FSOs may lose objectivity or begin to view issues from the host country's, rather than the U.S. government's, perspective. In addition, according to State officials, there has been an increase in 1-year tours in countries where hard and super-hard languages are spoken. Given that language training can take up to 2 years for hard and super-hard languages, FSOs may not be willing to undergo such extensive training for a 1-year position.

Tour curtailments and staff rotations. According to some State officials, constant movement of staff—often because officers curtail their tours to attend to family or medical issues or rotate to another location after they have reached the maximum allowed term in a given post—contributes to LDPs' remaining vacant or being staffed with personnel who do not meet the positions' language requirements. For example, a regional security officer (RSO) at a post we visited stated that although multiple RSOs at that post had ended their tours and left their positions, no replacement RSOs had met the positions' foreign language proficiency requirements. As a result, several LDPs remained unfilled, and the remaining RSOs had to make up for the shortfall in staff. Additionally, according to State officials, certain LDPs in Iraq and Afghanistan that are deemed "no-gap posts" must be filled by available FSOs regardless of whether they meet the proficiency requirements.

Current and former FSOs whom we interviewed, including ambassadors, reported positive effects of language proficiency and negative effects of proficiency gaps on officers' ability to perform critical diplomatic functions (see table 3). State documents also report such effects. To mitigate the impact of language proficiency gaps, post officials told us that in some instances they leverage the foreign language skills of locally employed staff (LE staff). According to post officials, FSOs may ask LE staff to draft or translate e-mails, schedule meetings, and translate during meetings, among other tasks. However, post officials said that there are limitations to using LE staff. For example, FSOs said that they cannot rely on LE staff for language support when discussing politically sensitive issues and that using LE staff as translators is less desirable than having a firsthand conversation with an external contact. In addition to using LE staff, officers also rely on professional translators and interpreters for language assistance.

According to State officials, State conducts a review of all LDPs every 3 years to reevaluate posts' language needs. State officials in Washington, D.C., described this triennial review as a post-driven exercise, stating that each post is best positioned to understand its language needs. According to a State memo, the triennial review is the foundation for applying foreign language designations and establishing State's language policies. 
In April 2010, in response to a recommendation in our 2009 report, State's Director General of the Foreign Service and Director of Human Resources implemented an updated LDP review process that occurs every 3 years, replacing the previous annual cycle. According to State documents, the updated process requires State's geographic bureaus; Bureaus of Diplomatic Security, Consular Affairs, and International Narcotics and Law Enforcement Affairs; and worldwide missions to review all LDPs assigned to their area of responsibility, regardless of the bidding cycle, on a 3-year basis. According to State officials, the 3-year timeframe allows State to strategically plan for, and project, future LDP needs in an effort to minimize the overall number of LDPs that remain vacant or unstaffed. Figure 4 shows the triennial LDP review process.

While State's triennial review process is intended to address the language needs of its overseas posts, FSOs we interviewed expressed varying views on the extent to which the outcomes of the process meet posts' reported needs. State's policies indicate that operational need should determine designation of positions as LDPs and required proficiency levels. However, views expressed by geographic bureau officials and FSOs we met with at overseas posts suggest that State officials also consider other factors, such as staffing concerns, when making LDP decisions. In addition, some State officials said that the triennial reviews lack rigor, which may result in posts' maintaining preexisting LDP numbers and levels without having adequately identified the current language needs of each position. Furthermore, in 2013, the State Office of Inspector General (OIG) identified various deficiencies with the triennial review process. For example, the OIG found that State's oversight of LDPs is insufficient to identify over- or underdesignation of language requirements.

While State's process for designating LDPs is intended to address the language needs of its overseas posts, FSOs we interviewed expressed varying views on the extent to which the designations resulting from the triennial reviews meet their posts' needs. Some post managers we interviewed said that their post or embassy section generally has the appropriate number of LDPs at adequate proficiency levels to meet diplomatic goals. However, some of these officials also said that, while the current proficiency level requirements are adequate, higher proficiency levels would be preferable. For example, consular section managers in countries where a hard or super-hard language is spoken said that while a speaking and reading proficiency of 2/1 or 2/0 is currently required for most of their consular employees, a higher proficiency level, such as a 3/3, would be preferable. State officials in headquarters explained that the language proficiency level set for entry-level consular positions in hard and super-hard languages reflects department policy regarding training limitations for entry-level officers. One consular chief said that the section "gets by with what it has," while another said that assistance from LE staff helps to fill the language gap. One post security manager said that the year of language training that security officers generally receive to operate in a country with a super-hard language provides only a "survival" level of proficiency and does not prepare them to function on a professional level. 
While State requires a proficiency level of 3/3 in speaking and reading for most Foreign Service generalist LDPs, post managers as well as junior FSOs said that greater proficiency would better equip them to communicate and negotiate with foreign counterparts and advance U.S. diplomatic goals. One public diplomacy manager said that, in an ideal, resource-neutral environment, he would like all of his public affairs officers to have a 4/4 level of proficiency. One political officer with 3/3 proficiency said she struggles to understand some of what is said during meetings and that a higher level of proficiency would be more appropriate for the needs of the job. Post officers said that high proficiency levels, for example, higher than 3, enable officers to detect nuance and subtle cues in conversations, build greater rapport, have more contacts and access to foreign audiences, participate in more unscripted conversations, and answer questions “off the cuff.” FSOs also suggested that certain political, economic, public affairs, and consular officer functions, in particular, could benefit from higher proficiency levels. However, post officials recognized that there are tradeoffs associated with requiring higher levels, including longer training and higher costs. In addition, post officials indicated that current language designations do not always reflect the needs of their positions or embassy sections. An economic section chief said that while her position is not an LDP, she believes it should be. Some post managers, including two RSOs in LDPs, said that they felt they were able to successfully perform their duties without being language proficient. One post official said that language proficiency was not critical to the execution of his duties because he spends most of his time in the embassy supervising American staff and interacting with English-speaking counterparts and can obtain any needed translation assistance from LE staff. Some post officers recommended reducing the required proficiency levels for certain positions that entail limited interaction with foreign counterparts, such as human resource positions focused on management of U.S. staff. Although State’s policies indicate that operational need is the determining criterion for designating a position as an LDP, officials we spoke with cited other factors that may influence LDP designations. According to State’s Foreign Affairs Manual (FAM), State should designate positions as requiring language proficiency only when it is essential to enhancing U.S. effectiveness abroad. According to the FAM, factors that posts should consider when assessing their LDP needs include the necessity of using the language to successfully execute the requirements of the position, the importance host-nation interlocutors attach to U.S. diplomats’ ability to speak the language, and the English language capabilities of the embassy’s LE staff (see app. II for a full list of the FAM criteria). However, geographic bureau officials and post managers told us that they also consider factors such as staffing and cost concerns when designating LDPs and determining proficiency requirements. Staffing concerns. While State’s guidance states that the department must identify its language needs irrespective of the number of likely bidders, embassy section heads at the posts we visited said staffing concerns affect their decisions about designating positions as LDPs and requiring certain proficiency levels. 
For example, embassy managers in countries where super-hard or hard languages, such as Arabic and Russian, are spoken said that certain positions have been designated as not requiring language proficiency or designated at a lower proficiency level to increase the likelihood of filling the positions. Managers also said that, while they would prefer to require higher levels of language proficiency, they sometimes require lower levels to avoid delaying the arrival of FSOs at posts who would otherwise have to spend longer periods in language training. Some State geographic bureau officials spoke of significant tension between quickly filling a vacant position with an officer who lacks language skills and waiting to fill the position with an officer who is trained and fully proficient. Our interviews with State officials suggest that such staffing concerns particularly affect the EAP, NEA, and SCA bureaus. One geographic bureau official said that the bureau had lowered reading requirements for LDPs at one of its posts because of difficulties in filling the positions. Further, according to a 2014 State memorandum, the Office of Overseas Building Operations does not support LDPs for any of its employees, citing a critical staffing shortage. Moreover, a December 2010 memorandum from State's M/DGHR acknowledged that the designation of LDPs is often influenced by staffing realities and stated that posts usually adjust language levels down on the basis of the likelihood of finding language-qualified bidders.

Cost concerns. While guidance from State's M/DGHR, including memorandums issued in December 2010 and April 2016, states that the department should assess its language needs in a "resource neutral" environment, geographic bureau and post officials said that the LDP review process is tempered by cost considerations. For example, a management official at a post where a super-hard language is spoken said that the substantial amount of time and money needed to train FSOs in hard and super-hard languages influences decisions regarding numbers of LDPs and proficiency levels requested. According to a 2013 State OIG report, training students to the 3/3 level in easier world languages such as Spanish can cost $105,000; training in hard languages such as Russian can cost $180,000; and training in super-hard languages such as Chinese and Arabic can cost up to $480,000 per student. Students learning super-hard languages to the 3/3 level generally spend 1 year domestically at the FSI and then a second year at an overseas training facility.

While, according to State officials, posts drive the LDP review process because they are best positioned to know their language needs, officials we interviewed—including officials at overseas posts—offered differing perspectives on whether posts' assessments of these needs are sufficiently rigorous. Some post managers said that shifting the review from an annual to a triennial process represented an improvement, because the prior annual reviews were not taken seriously, and the 3-year cycle has allowed State to be more strategic in planning and allocating resources. Some post officials also said that the 3-year cycle is more structured and that the multiple levels of review and input have brought greater stability and consistency to posts' requests for LDPs. However, other officials at posts we visited said that State's language designation process is insufficiently rigorous and systematic, describing it as ad hoc. 
Some of the geographic bureau and post officials we met with were unaware of State’s criteria for establishing LDP designations as outlined in the FAM. Remarks by some officials also suggest that posts tend to base LDP decisions on preexisting LDP numbers and levels. For example, some embassy managers said that they generally review the existing LDP numbers and levels and make minor adjustments. In addition, some geographic bureau officials said that they provide limited substantive review of posts’ submissions of LDP numbers and levels. Further, comments from post officials suggest that posts have generally applied a “blanket” approach in determining LDP proficiency requirements, despite State guidance that instructs posts to conduct more targeted assessments of their needs. State cables providing posts with guidance for the 2017 and 2014 LDP reviews stated that posts should not automatically assume that a 3/3 proficiency level is required for every LDP in a particular section or embassy and instructed posts to examine the specific language needs for each position. Post managers and staff we interviewed also said that language needs vary by position and portfolio within an embassy section. However, according to State data, most generalist LDPs are designated at a 3/3 level. In a 2013 report examining State’s LDP review process, the State OIG identified deficiencies in State’s process for developing language requirements. For example, the report noted that State’s oversight of LDPs is insufficient to identify over- or underdesignation of language requirements and that State does not review embassies’ and geographic bureaus’ language requirements “to facilitate consistent application of language designation criteria and appropriate distribution given U.S. policy priorities.” The report indicates that the lack of high-level review has led to anomalies, such as widely varying proficiency requirements for officers performing similar functions at different missions. Specifically, the OIG reported that State designated certain positions as LDPs for some European posts, such as France and Italy, but did not designate similar positions as LDPs in Haiti, Thailand, and Indonesia, where working conditions are more difficult and English language speakers are fewer. In response to an OIG recommendation to address this issue, State’s M/DGHR provided criteria to the geographic bureaus to use in the 2014 LDP review when determining whether language ability is necessary to advance U.S. foreign policy objectives. In October 2016, State headquarters sent out a cable to all posts, providing them with an updated set of criteria to be used in the 2017 LDP review. We discussed the concerns expressed by FSOs concerning the LDP process with State’s M/DGHR. State M/DGHR officials responded that the department has undertaken initiatives to align LDP levels more closely with policy and operational requirements and intends to incorporate these initiatives into its 2017 triennial LDP review process. For example, according to State M/DGHR officials, M/DGHR has encouraged a dialogue between the bureaus and their posts to ensure that their LDP submissions reflect operational requirements and policy priorities and has sent official messages to all posts and bureaus on the process and the need for rigorous review. 
The officials also noted that State’s M/DGHR has asked participants to designate their requests for LDPs as high, medium, and low priority, to encourage rigor in considering the real needs of posts and to avoid any implication that all LDPs are of equal importance. State’s 2011 “Strategic Plan for Foreign Language Capabilities” (foreign language strategic plan), which it issued partly in response to a recommendation in our 2009 report, outlines a number of efforts intended to meet its current and projected needs for foreign language proficiency. The strategic plan sets a goal of increasing the percentage of LDPs filled by fully qualified employees by an annual rate of 3 to 5 percent and estimates that 90 percent of LDPs will be filled by fully qualified employees by 2016 or 2017. The strategic plan presents these efforts in connection with six broad objectives. Some of the listed efforts, such as the Recruitment Language Program (RLP) and the Language Incentive Pay (LIP) program, predate the development of the strategic plan. As table 5 shows, in addition to outlining the efforts that State planned to implement for each of the six objectives, the foreign language strategic plan also identifies goals and performance measures associated with the objectives. According to information that State provided, State has taken steps to implement efforts addressing most of the six broad objectives identified in the foreign language strategic plan but has made limited progress in addressing others. According to information provided by State’s M/DGHR, as of October 2016, budgetary and operational pressures had precluded an expansion of the training complement (objective 1), and the prototype language training and assignment model described in the strategic plan remains under development (objective 3). However, State is implementing the following efforts to address the other four objectives: LDP reviews (objective 2). To improve the department’s language designation process, as discussed earlier, in 2010 State changed the frequency of the LDP review process from annual to triennial and has initiated its third triennial LDP review process, which it expects to complete in 2017. RLP (objective 4). Initiated in fiscal year 2004, the RLP aims to expand the number of candidates entering the Foreign Service with proficiency in languages in which State has current or projected deficits. To enhance the RLP, according to a State document, State has updated the list of recruitment languages to reflect those that are of critical importance to U.S. foreign policy, those in which posts are experiencing severe shortages or staffing gaps, and those that present specific recruiting and training challenges. According to State data, the percentage of entry-level officers hired through the RLP has varied from a peak of 40 percent (221 of 547 officers) in fiscal year 2011 to 5 percent (16 of 353 officers) in fiscal year 2016. LIP program (objective 5). To make language incentives more effective and maximize the impact of language and assignment policies, according to State’s M/DGHR, State reviewed the LIP in 2012, the first such review in over a decade, to clarify and streamline the program by aligning the designated languages (i.e., those eligible for incentives) with the department’s current needs and incentivizing employees to use and maintain proficiency in those languages. As a result of the review, State reduced the number of incentive languages from 52 to 34 to reflect the department’s highest strategic priorities. 
Also, according to information provided by State’s M/DGHR, FSI adjusts course offerings in priority languages, including some that are included in the LIP program, as needed, to address the department’s strategic planning and performance goals. According to State data, between 2010 and 2016 a total of 11,477 FSOs received LIP, amounting to $77.6 million. Foreign language proficiency requirement (objective 5). One of the mechanisms State uses to ensure a strong contingent of foreign language speakers is the inclusion of sustained professional language proficiency in the promotion precepts for Foreign Service generalists. According to FSOs and other officials we spoke with, this policy may be creating an incentive for FSOs to learn “world” languages, such as Spanish, which generally take 6 months to reach a 3/3, instead of super-hard languages, which take 2 years to reach the same level of proficiency. According to a 2013 State OIG report, promotion and tenure policies tied to language skills influence the number and level of LDP designations. An official from the OIG who worked on the 2013 report explained that the promotion policy may also contribute to the discrepancy in the numbers of LDPs with proficiency in world and super-hard languages as well as shortfalls in language-proficient FSOs to fill LDPs in certain priority languages. Some FSOs told us that taking 2 years to learn a super-hard language makes them less competitive for promotion, expressing a perception that State’s promotion system undervalues language training. However, State’s M/DGHR said that overall, the promotion system does not disadvantage FSOs who study hard or super-hard languages because time spent in language training extends their years of promotion eligibility. We discussed this issue with State’s M/DGHR and inquired whether a review of this policy had been conducted to determine its potential impact on learning super-hard languages. In response, State informed us that the language proficiency requirement for promotion, along with other related policies, is currently under review. Language-related technology (objective 6). We found that State’s FSI has implemented various language-related technologies to improve the language acquisition process, such as the Smart Notebook, which offers language instruction via the Internet, as well as language learning applications and technology-enabled classrooms with screen-sharing applications. FSI staff said that technology has improved the language acquisition process by allowing students to engage in lifelike scenarios in the classroom while learning a language, giving students access to lessons that were previously available only in language labs, and accommodating students’ schedules and needs. In addition, State officials told us that they are using technology to complement language skills at the operational level. For example, the embassy in China identified 48 positions for which it could adjust the speaking and reading level from a 3/3 to a 3+/2, in part because the “advent of sophisticated translation technologies enables officers to access information from written materials in multiple ways and on a scale never before possible.” A senior FSO in Mexico indicated that both reading and speaking are important but that the reading requirement could possibly be lowered, since translation technology can be used to assist with reading. FSOs in countries we visited generally indicated that they use online translation tools to translate documents. 
However, some FSOs reported that they could not rely exclusively on the translation provided by the online tool because it is generally not entirely accurate. Some said they use it as an initial step in translating documents, while others said they use it to translate documents only for their own use or when they need an immediate translation. More than 5 years after State developed and began implementing its foreign language strategic plan, we found no evidence that State had conducted a systematic and comprehensive evaluation of all the actions identified in the plan to determine their effects on language proficiency gaps. According to State’s evaluation policy, the department is committed to using performance management best practices, including evaluation, to achieve the most effective U.S. foreign policy outcomes and greater accountability. State’s evaluation policy defines evaluation as the systematic collection and analysis of information about the characteristics and outcomes of programs, management processes, and delivery systems as a basis for judgments, to improve effectiveness and inform decision makers about current and future activities. Also, according to federal internal control standards, internal controls should provide reasonable assurance that the objectives of an agency are being achieved to ensure the effectiveness and efficiency of operations, including the use of the agency’s resources. We asked State’s M/DGHR office whether it had conducted any evaluations of the effects of these efforts, including the RLP and the LIP program, on language proficiency. M/DGHR officials responded that they were unaware of any such evaluations but noted that the relatively small number of personnel involved in the programs made it difficult to conduct quantitative assessments. However, State officials indicated that after completion of the ongoing triennial LDP review, the Language Policy Working Group would review both RLP and LIP, but they did not provide details on the nature of the planned review. State reports annually to Congress on the levels of foreign language proficiency at overseas posts. In addition, State provides updates on foreign language proficiency gaps and efforts to address them in its annually updated Five Year Workforce Leadership and Succession Plan. The workforce plan for fiscal years 2016 through 2020 includes updates on the number of LDPs staffed worldwide; challenges in filling LDPs; and efforts outlined in, or implemented in response to, the foreign language strategic plan. For example, the workforce plan highlights the use of recruitment incentive languages to provide extra points on the hiring register of FSO candidates who speak and read proficiently in these languages and pass the assessment process, which increases their chance of entering the Foreign Service. However, our examination of these documents found no evidence that State has conducted a systematic and comprehensive evaluation of efforts to address each of the objectives in the strategic plan. Without systematic and comprehensive evaluations, consistent with State evaluation policy and federal internal control standards, State is unable to determine the effects of the efforts outlined in the strategic plan in addressing language proficiency shortfalls, particularly in hard and super-hard languages, and to take corrective actions. Since 2008, State has increased its levels of foreign language proficiency at overseas posts, strengthening its overall capacity to advance U.S. 
foreign policy and economic interests worldwide. Nonetheless, significant proficiency gaps in priority languages such as Arabic and Chinese may adversely affect State’s ability to fulfill its diplomatic responsibilities in regions of critical importance to U.S. foreign policy. Although State has implemented efforts to enhance foreign language proficiency, as outlined in its 2011 “Strategic Plan for Foreign-Language Capabilities,” it has not conducted a systematic and comprehensive evaluation of these efforts’ effectiveness. As a result, State cannot determine the extent to which these efforts have contributed to progress in increasing language proficiency worldwide and has limited information on which to base future investments of its resources. Accordingly, State cannot determine whether adjustments to the plan are needed to enhance State’s capacity to address increasingly complex economic and national security challenges overseas. To strengthen State’s ability to address persistent gaps in foreign language proficiency at overseas posts and make informed future resource investments, we recommend that the Secretary of State evaluate the effectiveness of efforts implemented under the “Strategic Plan for Foreign-Language Capabilities.” We provided a draft of this report for review and comment to State. We received written comments from State, which are reprinted in appendix III. State agreed with our recommendation and indicated that “the Department will develop a process to evaluate implementation of the 2011 Strategic Plan and future plans. The Department will report on results of the evaluation within one year.” State also provided technical comments, which we have incorporated throughout the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8980, or CourtsM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In this report, we examine (1) the extent to which the Department of State (State) is meeting its foreign language proficiency requirements for overseas posts as well as the effects of language proficiency and any gaps in State’s ability to perform diplomatic duties, (2) State’s process for identifying overseas posts’ language proficiency needs and the extent to which the process addresses these reported needs, and (3) efforts State has taken to enhance foreign language proficiency and any effects of those efforts. To address these objectives, we analyzed data and reviewed documents provided by State, including relevant provisions of the Foreign Affairs Manual. We interviewed officers from State’s Bureaus of African Affairs, Consular Affairs, European and Eurasian Affairs, East Asian and Pacific Affairs, Near Eastern Affairs, South and Central Asian Affairs, Western Hemisphere Affairs, and Human Resources in Washington, D.C., as well as officials from the Foreign Service Institute in Arlington, Virginia. In addition, we interviewed officials at the U.S. embassies in Beijing, China; Cairo, Egypt; Seoul, South Korea; Mexico City, Mexico; and Moscow, Russia. 
We selected these countries to examine language issues related to Mandarin Chinese, Arabic, Korean, Spanish, and Russian. Our criteria for selecting these countries included (1) countries in which priority languages, as identified by State, are spoken; (2) the number of language-designated positions (LDP) in selected countries, including countries with relatively low and high numbers of LDPs; (3) gaps in filling LDPs; (4) the difficulty of the languages spoken in selected countries; and (5) the diplomatic and economic significance of selected countries to the United States. While overseas, we met with embassy officials, including senior and junior-level Foreign Service officers within the embassies' consular, economic, political, public affairs, security, and management sections.

To examine the extent to which State is meeting its foreign language requirements, we obtained data from State's Global Employee Management System database on all overseas LDPs and the language skills of the incumbents filling the positions as of September 30, 2016. We compared the incumbents' reading and speaking scores with the reading and speaking levels required for the positions and determined that an incumbent met the requirements for the position only if his or her scores equaled or exceeded both the speaking and reading requirements. A limited number of positions are designated in two languages. We determined that the officer met the requirements of such positions if he or she met both the speaking and reading requirements for at least one of the designated languages. We also interviewed State officials responsible for compiling and maintaining these data and determined the data to be sufficiently reliable for identifying the number of LDPs filled by officers who met the requirements of the position.

To assess the effects of language proficiency and any gaps in State's ability to perform its diplomatic duties, we reviewed previous GAO reports as well as the December 2012 Accountability Review Board report on the attacks on the mission in Benghazi, Libya. We interviewed State officials in Washington, D.C., and at the overseas posts we visited. We also met with former senior State officials, including ambassadors and a former Director General of the Foreign Service and Director of Human Resources, to gain their insights on the consequences of language shortfalls at overseas missions. In addition, we conducted a literature review on the effects of language proficiency and any gaps in State's ability to perform its diplomatic duties.

To examine State's process for identifying overseas posts' language proficiency requirements and the extent to which the process addresses these reported needs, we reviewed previous GAO reports and State documents, such as memorandums and cables on the language-designation process. We also reviewed State's Office of Inspector General's (OIG) 2013 review of State's process for establishing LDPs and interviewed State OIG officials. In addition, we interviewed State officials in Washington, D.C., and at overseas posts.

To examine efforts State has taken to enhance foreign language proficiency and any effects of those actions, we reviewed State planning documents, including the State Department's "Strategic Plan for Foreign Language Capabilities," dated March 7, 2011, as well as the 2015 and 2016 versions of its Five Year Workforce and Leadership Succession Plan. We obtained information from State on steps it has taken to address key issues in the 2011 strategic plan.
We compared steps State has taken to the objectives described in the "Strategic Plan for Foreign-Language Capabilities" and assessed whether they have been evaluated in accordance with State's Evaluation Policy and federal internal control standards. We also reviewed State's Report on Foreign Language Proficiency for Fiscal Year 2015 and its promotion policies. In addition, we interviewed State officials in Washington, D.C., and at overseas posts.

We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

According to the Department of State's (State) Foreign Affairs Manual (13 FAM 221.2), operational need is the determining criterion for language-designated positions (LDP), where language proficiency is essential, rather than merely helpful or convenient, to enhancing U.S. effectiveness abroad. The FAM also outlines the following criteria for consideration by responsible offices in designating LDPs:
- the necessity of using the language to execute successfully the requirements of the position;
- the frequency of daily use of the language;
- the fluency level of that engagement;
- the official designation of the language as the national language(s);
- the importance host-nation interlocutors attach to our speaking their language;
- the prevalence of another language that a significant segment of the population speaks;
- the general level of English language penetration;
- the English language capabilities of the embassy's locally employed staff in the relevant section;
- the professionalism and availability of interpretation/translation services;
- the prevalence of corruption and the need for language proficiency to ensure necessary oversight;
- the importance of being able to speak certain language(s) in public or at representational events;
- the availability of media in the language(s);
- the importance of monitoring social media in the local language;
- the level of literacy in the country;
- the prevalence of documents published in the language;
- whether speaking or reading the language, or both, would notably increase the efficiency and scope of the employee's tasks or work portfolio;
- the variety of interactions required for the job (speeches, formal demarches, receptions, visa interviews, travel and engagement with populations in rural communities, key segments of society, or minority groups);
- the importance of building a cadre of speakers of the language within the Foreign Service (that is, whether the department needs to develop employees for future assignment at higher levels of responsibility with these language skills); and
- the necessity for employees who occupy positions in sections (for example, security or management) where the need for foreign language skills is so innate to the job (e.g., the work involves regular contact with foreign nationals in the local native language) that the post needs one or more LDPs per section.

According to an October 2016 State cable, an additional primary criterion, beyond the criteria referenced in 13 FAM 221.2, is the importance of understanding the language to manage one's personal security.
The State cable also notes other factors that should be considered in the LDP review process, including the following:
- In identifying LDPs, bureaus are encouraged to keep in mind that designations may vary from the usual S-3/R-3 level, including asymmetric designations in which a mandated speaking proficiency may be higher than the reading proficiency (e.g., S-3/R-2, S-2/R-1, or even S-2/R-0). Bureaus should consider an asymmetric language designation and how it might affect employee productivity, personal security, and overall resource management.
- Bureau requests for modifications to the career development plan and language incentive pay are under consideration.
- Missions are encouraged to set LDP levels for speaking and reading based on the level of language proficiency skills needed to do the work.
- If job requirements call for either of two languages, bureaus should consider dual designations, with the preferred language listed first.
- If language proficiency is preferred but not essential, the position should be marked with speaking and reading requirements of 0/0 to designate it as a language-preferred position. This designation will help identify future resource needs and indicate when first- and second-tour language training could be beneficial.

In addition to the contact named above, Godwin Agbara (Assistant Director), Francisco M. Enriquez (Analyst-in-Charge), Juan Pablo Avila-Tournut, Mark Dowling, Justin Fisher, Emily Gupta, and Reid Lowe made key contributions to this report.
Proficiency in foreign languages is a key skill for U.S. diplomats to advance U.S. interests overseas. GAO has issued several reports highlighting State's persistent foreign language shortfalls. In 2009, GAO recommended that State, to address these shortfalls, develop a strategic plan linking all of its efforts to meet its foreign language requirements. In response, in 2011 State issued its "Strategic Plan for Foreign Language Capabilities." GAO was asked to build on its previous reviews of State's foreign language capabilities. In this report, GAO examines (1) the extent to which State is meeting its foreign language proficiency requirements for overseas posts as well as the effects of language proficiency and any gaps in State's ability to perform diplomatic duties, (2) State's process for identifying overseas posts' language proficiency needs and the extent to which the process addresses these reported needs, and (3) efforts State has taken to enhance foreign language proficiency and any effects of those efforts. GAO analyzed data on State's overseas language-designated positions; reviewed State strategic planning and policy documents; interviewed State officials; and visited overseas posts in China, Egypt, Korea, Mexico, and Russia.

As of September 2016, 23 percent of overseas language-designated positions (LDP) were filled by Foreign Service officers (FSO) who did not meet the positions' language proficiency requirements. While this represents an 8-percentage-point improvement from 2008, the Department of State (State) still faces significant language proficiency gaps (see fig.). Regionally, the greatest gaps were in the Near East (37 percent), Africa (34 percent), and South and Central Asia (31 percent). According to FSOs we interviewed, language proficiency gaps have, in some cases, affected State's ability to properly adjudicate visa applications, effectively communicate with foreign audiences, address security concerns, and perform other critical diplomatic duties.

State reviews overseas posts' language needs every 3 years, but the extent to which the reviews' outcomes address these needs is unclear. State's policies indicate that operational need should determine the designation of positions as LDPs and required proficiency levels. However, views expressed by geographic bureau officials and FSOs whom GAO met at overseas posts suggest that other factors, such as staffing and cost concerns, influence State's decisions about LDP designations and proficiency requirements. State Human Resources officials noted that State is taking steps to better align its LDP policies with its operational needs.

State has implemented most actions described in its 2011 "Strategic Plan for Foreign Language Capabilities" but has not evaluated the effects of these actions on language proficiency at overseas posts. According to State's evaluation policy, the department is committed to using performance management, including evaluation, to achieve the most effective foreign policy outcomes and greater accountability. Actions State has implemented under the plan include reviewing the language requirements of overseas posts every 3 years; offering recruitment incentives for personnel with proficiency in critically important languages; providing language incentive pay only for languages that reflect the department's highest strategic priorities; and using technology to strengthen and develop new approaches for language training and to complement FSOs' language skills.
However, more than 5 years after it began implementing its strategic plan, State has not systematically evaluated the results of these efforts. As a result, State cannot determine the extent to which these efforts contribute to progress in increasing language proficiency worldwide and reducing proficiency gaps. GAO recommends that the Secretary of State evaluate the effectiveness of efforts implemented under the "Strategic Plan for Foreign Language Capabilities." State agreed with GAO's recommendation.
Title II of the Social Security Act, as amended, establishes the Old-Age, Survivors, and Disability Insurance (OASDI) program, which is generally known as Social Security. The program provides cash benefits to retired and disabled workers and their eligible dependents and survivors. Congress designed Social Security benefits with an implicit focus on replacing lost wages. However, Social Security is not meant to be the sole source of retirement income; rather it forms a foundation for individuals to build upon. The program is financed on a modified pay-as-you-go basis in which payroll tax contributions of those currently working are largely transferred to current beneficiaries. Current beneficiaries include insured workers who are entitled to retirement or disability benefits, and their eligible dependents, as well as eligible survivors of deceased insured workers. The program's benefit structure is progressive, that is, it provides greater insurance protection relative to contributions for earners with lower wages than for high-wage earners. Workers qualify for benefits by earning Social Security credits when they work and pay Social Security taxes; they and their employers pay payroll taxes on those earnings. In 2005, approximately 159 million people had earnings covered by Social Security, and 48 million people received approximately $521 billion in OASDI benefits.

Currently, the Social Security program collects more in taxes than it pays out in benefits. However, because of changing demographics, this situation will reverse itself, with the annual cash surplus beginning to decline in 2009 and turning negative in 2017. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2040. Social Security's long-term financing shortfall stems primarily from the fact that people are living longer and labor force growth has slowed. As a result, the number of workers paying into the system for each beneficiary has been falling and is projected to decline from 3.3 today to about 2 by 2040. The projected long-term insolvency of the OASDI program necessitates system reform to restore its long-term solvency and assure its sustainability. Restoring solvency and assuring sustainability for the long term requires that Social Security get additional income (revenue increases), reduce costs (benefit reductions), or undertake some combination of the two.

To evaluate reform proposals, we have suggested that policy makers should consider three basic criteria:
1. the extent to which the proposal achieves sustainable solvency and how the proposal would affect the economy and the federal budget;
2. the balance struck between the goals of individual equity (rates of return on individual contributions) and income adequacy (level and certainty of monthly benefits); and
3. how readily such changes could be implemented, administered, and explained to the public.

Moreover, reform proposals should be evaluated as packages that strike a balance among the individual elements of the proposal and the interactions among these elements. The overall evaluation of any particular reform proposal depends on the weight individual policy makers place on each of the above criteria. Changing the indexing used by the OASDI program could be used to increase income or reduce costs. Indexing provides a form of regular adjustment of revenues or benefits that is pegged to a particular economic, demographic, or actuarial variable.
An advantage of such indexing approaches is that they take some of the "politics" out of the system, allowing the system to move toward some agreed-upon objective; they may also be administratively simple. However, this "automatic pilot" aspect of indexing poses a challenge, as it may make policy makers hesitant to enact changes, even when problems arise. While Social Security did not use automatic indexing initially, it is now a key feature of the program's design, as well as a central element of many reform proposals. Under the current program, benefits for new beneficiaries are computed using wage indexing, benefits for existing beneficiaries are adjusted using price indexing, and on the revenue side, the cap on the amount of earnings subject to the payroll tax is also adjusted using wage indexing. Reform proposals have included provisions for modifying each of these indexing features.

Before the 1970s, the Social Security program did not use indexing to adjust benefits or taxes automatically. For both new and existing beneficiaries, benefit rates increased only when Congress voted to raise them. Benefit levels, when adjusted for inflation, fell and then jumped up with ad hoc increases, and these fluctuations were dramatic at times. Similarly, Congress made only ad hoc changes to the tax rate and the cap on the amount of workers' earnings that were subject to the payroll tax, which is also known as the maximum taxable earnings level. Adjusted for inflation, the maximum taxable earnings level also fluctuated dramatically, and as a result, the proportion of all wages subject to the payroll tax also fluctuated. (See app. II for more detail.)

For the first time, the 1972 amendments provided for automatic indexing. They provided for automatically increasing the maximum taxable earnings level based on increases in average earnings, and this approach is still in use today. However, the 1972 amendments provided an indexing approach for benefits that became widely viewed as flawed. In particular, the indexing approach in the 1972 amendments resulted in (1) a "double indexing" of benefits to inflation for new beneficiaries though not for existing ones; (2) a form of "bracket creep" based on the structure of the benefit formula that slowed benefit growth as earnings increased over time, which offset the double indexing to some degree; and (3) instability of program costs that was driven by the interaction of price and wage growth in benefit calculations. (See app. II for more detail.)

Within a few years, problems with the 1972 amendments became apparent. Benefits were growing far faster than anticipated, especially since wage and price growth varied dramatically from previous historical experience. Addressing the instability of this indexing approach became a focus of policy makers' efforts to come up with a new approach. As a 1977 paper on the problem noted, "Clearly, it is a system that needs to be brought under greater control, so that the behavior of retirement benefits over time will stop reflecting the chance interaction of certain economic variables." The 1977 amendments instituted a new approach to indexing benefits that remains in use today. The experience with the 1972 amendments and double indexing made clear the need to index benefits differently for new and existing beneficiaries, which was referred to as "decoupling" benefits.
Indexing now applies to several distinct steps of the benefit computation process, including (1) indexing lifetime earnings for each worker to wage growth, (2) indexing the benefit formula for new beneficiaries to wage growth, and (3) indexing benefits for existing beneficiaries to price inflation. Under this approach, benefit calculations for new beneficiaries are indexed differently than for existing beneficiaries, and earnings replacement rates have been fairly stable. The cap on taxable earnings is still indexed to wage growth as specified by the 1972 amendments.

Social Security benefits are designed to partially replace earnings that workers lose when they retire, become disabled, or die. As a result, the first step of the benefit formula calculates a worker's average indexed monthly earnings (AIME), which is based on the worker's lifetime history of earnings covered by Social Security taxes. The formula adjusts these lifetime earnings by indexing them to changes in average wages. Indexing the earnings to changes in wage levels ensures that the same relative value is accorded to each year's earnings, no matter when they were earned. For example, consider a worker who earned $5,000 in 1965 and $40,000 in 2000. The worker's earnings increased to eight times their 1965 level, but much of that increase reflected changes in the average wage level in the economy, which rose to about seven times its 1965 level (a factor of roughly 6.9) over the same period. The growth in average wages in turn partially reflects price inflation; however, wages may grow faster or slower than prices in any given year. Indexed to reflect wage growth, the $5,000 would become roughly $35,000, giving it greater weight in computing average earnings over time and making it more comparable to 2000 wage levels.

Once the AIME is determined, it is applied to the formula used to calculate the worker's primary insurance amount (PIA). This formula applies different earnings replacement factors to different portions of the worker's average earnings. The different replacement factors make the formula progressive, meaning that the formula replaces a larger portion of earnings for lower earners than for higher earners. For workers who become eligible for benefits in 2006, the PIA equals 90 percent of the first $656 of AIME plus 32 percent of the next $3,299 of AIME plus 15 percent of AIME above $3,955. For workers who do not collect benefits until after the year they first become eligible, the PIA is adjusted to reflect any COLAs since they became eligible. The PIA is used in turn to determine benefits for new beneficiaries and all types of benefits payable on the basis of an individual's earnings record. To determine the actual monthly benefit, adjustments are made reflecting various other provisions, such as those relating to early or delayed retirement, type of beneficiary, and maximum family benefit amounts. Figure 1 illustrates how the PIA formula works. The dollar values in the formula that indicate where the different replacement factors apply are called bendpoints. These bendpoints ($656 and $3,955) are indexed to the change in average wages, while the replacement factors of 90, 32, and 15 percent are held constant. In contrast, under the 1972 amendments, the bendpoints were held constant and the replacement factors were indexed. (See app. II.) Indexing the bendpoints and holding replacement factors constant prevents bracket creep and keeps the resulting earnings replacement rates relatively level across birth years.
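To make the two computation steps just described concrete, the sketch below applies the wage-indexing example and the 2006 PIA formula quoted above; the 6.9-fold indexing factor and the $4,000 AIME are illustrative values rather than figures taken from the report.

```python
def pia_2006(aime):
    """Primary insurance amount under the 2006 formula quoted above:
    90 percent of the first $656 of AIME, plus 32 percent of AIME between
    $656 and $3,955, plus 15 percent of AIME above $3,955."""
    first_bend, second_bend = 656, 3955
    pia = 0.90 * min(aime, first_bend)
    pia += 0.32 * max(0, min(aime, second_bend) - first_bend)
    pia += 0.15 * max(0, aime - second_bend)
    return round(pia, 2)

# Wage-indexing a past year's earnings so that it carries the same relative
# weight as later earnings: $5,000 earned in 1965, with economy-wide average
# wages growing roughly 6.9-fold by 2000, is treated like roughly $35,000.
indexed_1965_earnings = round(5000 * 6.9)

print(indexed_1965_earnings)   # 34500, i.e., roughly $35,000
print(pia_2006(4000))          # 1652.83 for an illustrative $4,000 AIME
```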
Indexing the benefit formula in this way helps benefits for new retirees keep pace with wage growth, which reflects increases in the standard of living. Figure 2, which shows earnings replacement rates for successive groups of illustrative workers, illustrates the program’s history with indexing initial benefits. Replacement rates declined before the first benefit increases were enacted in 1950 and then rose sharply as a result of those increases. From 1950 until the early 1970s, replacement rates fluctuated noticeably more from year to year than over other periods; this pattern reflects the ad hoc nature of benefit increases over that period. Between 1974 and 1979, replacement rates grew rapidly for new beneficiaries, reflecting the double indexing of the 1972 amendments. The 1977 amendments corrected for the unintended growth in benefits from double indexing, and replacement rates declined rapidly as a result. This pattern of increasing and then declining benefit levels is known as the notch. Finally, replacement rates have been considerably more stable since the 1977 amendments took effect, a fact that has helped to stabilize program costs. (See app. II.) After initial benefits have been set for the first year of entitlement, benefits in subsequent years increase with a COLA designed to keep pace with inflation and thereby help to maintain the purchasing power of those benefits. The COLA is based on the consumer price index (CPI), in contrast to the indexing of lifetime earnings and initial benefits, which are based on the national average wage index. The cap on taxable earnings increases each year to keep pace with changes in average wages. As a result, in combination with a constant tax rate, total program revenues tend to keep pace with wage growth and therefore also with benefits to some degree. In 2006, the cap is set at $94,200. As the distribution of earnings in the economy changes, the percentage of total earnings that fall below the cap can also change. (See app. II.) Table 1 summarizes the various indexing and automatic adjustment approaches that affect most workers and beneficiaries under the current program. Various reform proposals have suggested changes to most of the indexing features of the current Social Security system. Some proposals would use alternative indexes for initial benefits in order to slow their growth. Other proposals would take the same approach but would limit benefit reductions on workers with lower earnings. Some propose modifying the COLA in the belief that the CPI overstates the rate of inflation. Still others propose indexing revenue provisions in new ways. Changes to the indexing of Social Security’s initial benefits could be implemented by changing the indexing of lifetime earnings or the PIA formula’s bendpoints. However, they could also be implemented by adjusting the PIA formula’s replacement factors, even though these factors are not now indexed. Under this approach, which is used in this report, the replacement factors are typically multiplied by a number that reflects the index being used. The replacement factors would be adjusted for each year in which benefits start, beginning with some future year. So such changes would not affect current beneficiaries. Indexing the replacement factors would reduce benefits at the same proportional rate across income levels, while changing the indexing of lifetime earnings or the bendpoints could alter the distribution of benefits across income levels. 
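A minimal sketch of that replacement-factor mechanism is below; the 2.8 percent price growth and 3.9 percent wage growth rates are illustrative assumptions, not the report's projections.

```python
def scaled_factors(years_after_start, index_growth=0.028, wage_growth=0.039):
    """Multiply the current-law replacement factors (90, 32, and 15 percent)
    by the cumulative ratio of the new index to wage growth for each year
    since the change took effect; every factor shrinks by the same share."""
    ratio = ((1 + index_growth) / (1 + wage_growth)) ** years_after_start
    return [round(f * ratio, 4) for f in (0.90, 0.32, 0.15)]

print(scaled_factors(0))    # [0.9, 0.32, 0.15] -- no change in the first year
print(scaled_factors(20))   # each factor roughly 19 percent lower
```

Because every factor is multiplied by the same ratio, benefits at all earnings levels fall by the same percentage, which is why this approach leaves the progressivity of the formula unchanged.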
Recent reform proposals, as described by the Social Security Administration’s (SSA) Office of the Chief Actuary in its evaluations, generally implement indexing changes as adjustments to the PIA formula’s replacement factors. Two indexing approaches—to reflect changes in the CPI or increasing longevity—have been proposed as alternatives to the average wage index for calculating initial benefits. Proponents of using CPI indexing for initial benefit calculations generally offer the rationale that wage indexing has never been fiscally sustainable and CPI indexing would slow the growth of benefits to an affordable level while maintaining the purchasing power of benefits. They say that maintaining the purchasing power of benefits should be the program’s goal, as opposed to maintaining relative standards of living across age groups (that is, earnings replacement rates), which the current benefit formula accomplishes. Proponents of longevity indexing offer the rationale that increasing longevity is a key reason for the system’s long-term insolvency. Since people are living longer on average, and are expected to continue to do so in the future, they will therefore collect benefits for more years on average. Using an index that reflects changes in life expectancy would maintain relatively comparable levels of lifetime benefits across birth years and thereby promote intergenerational equity. Also, longevity indexing could encourage people to work longer. Some indexing proposals accept the need to slow the growth of initial benefits in general but seek to protect benefit levels for the lowest earnings levels, consistent with the program’s goal of helping ensure income adequacy. Such proposals would modify how a new index would be applied to the formula for initial benefits so that the formula is still wage-indexed below a certain earnings level. As a result, they would maintain benefits promised under the current program for those with earnings below that level such as, for example, those in the bottom 30 percent of the earnings distribution. Such an approach has been called progressive price indexing. A few proposals would alter the COLA used to adjust benefits for current retirees. Some proposals respond to methodological concerns that have been raised about how the CPI is calculated and would adjust the COLA in the interest of accuracy. In general, such changes would slightly slow the growth of the program’s benefit costs. However, other proposals call for creating a new CPI for older Americans (CPI-E) specifically tailored to reflect how inflation affects the elderly population and using the CPI-E for computing Social Security’s COLA. Depending on its construction, such a change could increase the program’s benefit costs. Some proposals would index revenues in new ways. Some would apply a longevity index to payroll tax rates, again focused on the fact that increasing life expectancy is a primary source of the program’s insolvency. Proponents of indexing tax rates feel that benefits are already fairly modest, so the adjustment for longevity should not come entirely from benefit reductions. Other proposals would institute other types of automatic revenue adjustments. Some would raise the maximum taxable earnings level gradually until some percentage of total earnings are covered and then maintain that percentage into the future. Implicitly, such proposals reflect a desire to hold constant the percentage of earnings subject to the payroll tax. 
Still another proposal would provide for automatically increasing the tax rate when the ratio of trust fund assets to annual program costs is projected to fall. Table 2 summarizes the various indexing and automatic adjustment approaches that reform proposals have contained. Faced with adverse demographic trends, many countries have enacted reforms in recent years to improve the long-term fiscal sustainability of their national pension systems. New indexing methods now appear in a variety of forms around the world in earnings-related national pension systems. In general, they seek to contain pension costs associated with population aging. Some indexing methods affect both current and future retirees. A number of reforms have focused on methods that primarily adjust benefits rather than taxes to address the fiscal solvency of national pension systems. There are two main reasons for this. First, contribution rates abroad are generally high already, making it politically difficult to raise them much further. For example, while in the United States total employer-employee Social Security contribution rates are 12.4 percent of taxable earnings, they are above 16 percent in Belgium and France, more than 18 percent in Sweden and Germany, above 25 percent in the Netherlands and the Czech Republic, and over 30 percent in Italy. In fact, some countries have stipulated a ceiling on employee contribution rates in order to reassure the young—or current contributors—that the burden would be shared among generations. For example, Japan settled, with the 2004 Reform Law, its pension premium rates for the next 100 years with an increase of 0.35 percent per year until 2017, at which time premium levels are to be fixed at 18.3 percent of covered wages. Similarly, Canada chose to raise its combined employer-employee contribution rate more quickly than previously scheduled, from 5.6 percent to 9.9 percent between 1997 and 2003, and maintain it there until the end of the 75-year projection period. This increase is meant to help Canada’s pension system build a large reserve fund and spread the costs of financial sustainability across generations. Germany’s recent reforms set the workers’ contribution rate at 20 percent until 2020 and at 22 percent from 2020 to 2030. Second, increasing employee contribution rates without significantly reducing benefit levels will tend to make continued employment less attractive compared to retirement. In the context of population aging and fiscally stressed national pension systems facing many countries, reform measures seek to do the opposite: encourage people to remain in the labor force longer to enhance the fiscal solvency of pension programs. Contribution rates that become too high are not likely to provide sufficient incentives to continue work. One commonly used means of reducing, or containing the growth of, promised benefits involves changing the method used to compute initial benefits. For example, France, Belgium, and South Korea now adjust past earnings in line with price growth rather than wage growth to determine the initial pension benefits of new retirees. In general, this shift to price indexation tends to significantly lower benefits relative to earnings, as over long periods prices tend to grow more slowly than wages. Because of compounding, the effect of such a change is larger when benefits are based on earnings over a long period than when they reflect only the last few years of work, as in pension plans with benefits based on final salaries. 
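As a rough illustration of that compounding effect, the sketch below assumes, purely for illustration, annual wage growth of 3 percent and price growth of 2 percent and revalues a single year of earnings under each approach.

```python
def price_vs_wage_revaluation(years, wage_growth=0.03, price_growth=0.02):
    """Value of one unit of past earnings revalued to retirement with prices,
    relative to its value when revalued with wages, after the given number
    of years."""
    return ((1 + price_growth) / (1 + wage_growth)) ** years

print(round(1 - price_vs_wage_revaluation(10), 2))   # 0.09: about 9 percent lower
print(round(1 - price_vs_wage_revaluation(45), 2))   # 0.36: about 36 percent lower
```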
In fact, the OECD estimates that, in the case of a full-career worker with 45 years of earnings, price indexation can lead to benefits 40 percent lower than with wage indexation. In contrast to full price indexing, some nations use an index that is a mix of price growth and wage growth, which tends to produce higher benefits than those calculated using price indexation only, then adjust the relative weights of the two to cover program costs. Finland, for example, changed its indexation of initial benefits from 50 percent prices and 50 percent wages to 80 percent and 20 percent, respectively. Similarly, Portugal’s index combines 75 percent price growth and 25 percent wage growth. A few countries have moved away from wage indexing but without necessarily adopting price indexation. Sweden, for instance, uses an index that reflects per capita wage growth to compute initial benefits, provided the system is in fiscal balance. However, when the system’s obligations exceed its assets, a “brake” is applied automatically that allows the indexation to be temporarily abandoned. This automatic balancing mechanism (ABM) ensures that the pension system remains financially stable. In Germany and Japan, recent reforms changed benefit indexation from a gross-wage base to a net-wage base—i.e., gross wages minus contributions. In Italy, workers’ benefit accounts rise in line with gross domestic product (GDP) growth so both the changes in the size of the labor force and in productivity dictate benefit levels. Another approach countries have used is adding a longevity index to the formula determining pension payments. In Sweden, Poland, and Italy, for example, remaining life expectancy at the time of retirement inversely affects benefit levels. Thus, as life spans gradually increase, successive cohorts of retirees get smaller benefit payments unless they choose to begin receiving them later in life than those who retired before them. Also, people who retire earlier than their peers in a given cohort get significantly lower benefits throughout their remaining life than those who retire later. Longevity indexing helps ensure that improvements in life expectancy do not strain the system financially. Germany, on the other hand, now uses a sustainability factor that links initial benefits to the system’s dependency ratio—i.e., the number of people drawing benefits relative to the number paying into the system. This dependency ratio captures variations in fertility, longevity, and immigration, and consequently makes the pension system self-stabilizing. For example, higher fertility and immigration, which raise labor force growth, will, other things equal, improve the dependency ratio, leading to higher pension benefits, while higher longevity or life expectancy will increase the dependency ratio, and hence cause benefits to decline. In some of the countries we studied, changes in indexing methods affect both current and future retirees. In Japan, for example, post-retirement benefits were indexed to wages net of taxes before 2000. However, reforms enacted that year altered the formula by linking post-retirement benefits to prices. As a result, retirees saw their subsequent benefits rise at a much slower pace. The 2004 reforms reduced retirees’ purchasing power further by introducing a negative “automatic adjustment indexation” to the formula. With this provision, post-retirement benefits increase in line with prices minus the adjustment rate, currently fixed at 0.9 percent until about 2023. 
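A stylized sketch of such a dependency-ratio adjustment is below; it is not the statutory German formula, and the ratios used are illustrative.

```python
def sustainability_adjustment(prev_dependency_ratio, new_dependency_ratio):
    """Benefit multiplier that falls when the ratio of beneficiaries to
    contributors rises, and rises when that ratio falls."""
    return prev_dependency_ratio / new_dependency_ratio

print(round(sustainability_adjustment(0.50, 0.52), 3))  # 0.962: benefits about 4 percent lower
print(round(sustainability_adjustment(0.50, 0.48), 3))  # 1.042: benefits about 4 percent higher
```

Linking benefits to the dependency ratio in this way responds automatically to changes in fertility, longevity, and immigration, which is the self-stabilizing property described above.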
This rate is the sum of two demographic factors: the decline in the number of people contributing to the pension program (projected at 0.6 percent) plus the increase in the number of years people collect pensions (projected at 0.3 percent). This negative adjustment also enters the formula determining the benefit of new recipients as past earnings are indexed to net wages minus the same 0.9 percent adjustment rate. Sweden’s ABM modifies both the retirement accounts of workers—or future retirees—and the benefits paid to current pensioners. As explained earlier, this mechanism is triggered whenever system assets fall short of system liabilities. Moreover, post-retirement benefits in Sweden are indexed each year to an economic factor equal to prices plus the average rate of real wage increase minus 1.6 percent, which is the projected real long-term growth in wages. As a result, if average real wages grow annually at 1.6 percent, post-retirement benefits are adjusted for price increases. On the other hand, if real wage growth falls below 1.6 percent, benefits do not keep up with prices, leading to a decline in retiree purchasing power. Germany’s sustainability factor affects those already retired, as it is included in the formula that adjusts their benefits each year. If, as projected, the number of contributors falls relative to that of pensioners, increasing the dependency ratio, all benefits are adjusted downward, so all cohorts share the burden of adverse demographic trends. This intergenerational burden sharing is also apparent in the indexation of all benefits to net wages—wages minus contributions, which affect workers and pensioners alike. Thus an increase in contributions, everything else equal, lowers both initial benefits and benefits already being paid. Table 3 summarizes relevant characteristics of earnings-related public pension programs in selected countries. In the U.S. Social Security program, indexing can have different effects on the distribution of benefits and on the relationship between contributions and benefits, depending on how it is applied to benefits or taxes. There are a variety of proposals that would change the current indexing of initial benefits, including a move to the CPI, to longevity or mortality measures, or to the dependency ratio. When the index is implemented through the benefit formula, each will have a proportional effect, with constant percentage changes at all earnings levels, on the distribution of benefits (i.e., the progressivity of the current system is unchanged). However, indexing provisions can be modified to achieve other distributional effects. For example, so-called progressive indexing applies different indexes at different earnings levels in a manner that seeks to protect the benefits of low-income workers. Indexing payroll tax rates would also have distributional effects. Such changes maintain existing benefit levels but affect equity measures like the ratio of benefits to contributions across age cohorts, with younger cohorts having lower ratios because they receive lower benefits relative to their contributions. Finally, proposals that modify the indexing of COLAs for existing beneficiaries have important and adverse distributional effects for groups that have longer life expectancies, such as women and highly educated workers, because such proposals would typically reduce future benefits, and this effect compounds over time. 
In addition, disabled worker beneficiaries, especially those who receive benefits for many years, would also experience lower benefits.

There are a variety of proposals that would change the current indexing of initial benefits from the growth in average wages. These include a move to a measure of the change in prices like the CPI, to longevity measures that seek to capture the growth in population life expectancies, or to the dependency ratio that measures changes in the number of retirees compared to the workforce. We analyzed three indexing scenarios: the dependency ratio index, which links the growth of initial benefits to changes in the dependency ratio, the ratio of the number of retirees to workers; the CPI index, which links the growth of initial benefits to changes in the CPI; and the mortality index, which links the growth of initial benefits to changes in life expectancy to maintain a constant life expectancy at the normal retirement age.

Figure 3 illustrates the projected distribution of benefits for workers born in 1985 under three different indexing scenarios (on the left side of the figure) and under a so-called benefit reduction benchmark that reduces benefits just enough to achieve program solvency over a 75-year projection period (on the far right). Median benefits under the dependency ratio index and the CPI index are lower than the median benefit for the benchmark; they reduce benefits more than is needed to achieve 75-year solvency. In contrast, the mortality index has a higher median benefit level than the benchmark, so without further modifications, it would not achieve 75-year solvency.

Regardless of the index used to modify initial benefits, most proposals apply the new index in a way that has proportional effects on the distribution of benefits. Thus, benefits at all levels will be affected by the same percentage reduction, for example, 5 percent, regardless of earnings. The left half of figure 3 illustrates this proportionality in terms of monthly benefits. While the level of benefits differs, the distribution of benefits for each scenario has a similar structure. However, the range of each distribution varies by the difference in the size of the proportional reduction. A larger proportional reduction—the dependency ratio index—will result in a distribution with a similar structure, compared to promised benefits. However, each individual's benefits are reduced by a constant percentage; therefore, the range of the distribution, the difference between benefits at the 25th and 75th percentiles, would be smaller, compared to promised benefits. This proportional reduction in benefits is also illustrated in figure 4, which compares the currently scheduled or promised benefit formula with our three alternative indexing scenarios. Under each scenario, the line depicting scheduled benefits is lowered, by equal percentages at each AIME amount, by the difference between the growth in covered wages and the new index. Each indexing scenario maintains the shape of the current benefit formula; thus the progressivity of the system is maintained, but the line for each scenario is lower than scheduled benefits, which would affect the adequacy of benefits. The proportional effects of indexing are best illustrated by adjusting, or scaling, each index to achieve comparable levels of solvency over 75 years.
Thus, for those indexes that do not by themselves achieve solvency, the benefit reductions are increased until solvency is achieved; for those that are more than solvent, the benefit reductions are decreased until solvency is achieved but not exceeded. The right half of figure 3 shows the distribution of monthly benefits for each of the scaled indexing scenarios and the benchmark scenario. Once the different indexing scenarios are scaled to achieve solvency, the distribution of benefits for each scenario is almost identical in terms of the level of benefits. Differences in the distributions reflect the timing associated with implementing the changes. Scaling the indexing scenarios also reveals that the shape of the distributions is the same. The distributions of monthly benefits for the indexing scenarios are also very similar to the distribution of benefits generated under the benefit reduction benchmark. Therefore, changes to the benefit formula, applied through the replacement factors, will have similar results regardless of whether the change is an indexing change or a straight benefit reduction, because of the proportional effect of the change.

Indexing could also be modified to achieve other distributional goals. For example, so-called progressive indexing, or the use of different indexes—such as prices and wages—at various earnings levels, has been proposed as a way of changing the indexing while protecting the benefits of low-income workers. Thus, under progressive price indexing, those individuals with indexed lifetime earnings below a certain point would still have their initial benefits adjusted by wage indexing; those individuals with earnings above that level would be subject to a combination of wage and price indexing on a sliding scale, with those individuals with the highest lifetime earnings having their benefits adjusted completely by price indexing. The effect that progressive price indexing would have on the benefit formula can be seen in figure 5, where the CPI indexing scenario is compared to a progressive CPI indexing scenario and to benefits promised under the current program formula. Many lower-income individuals would do better under the progressive application of the CPI index than under the CPI indexing alone. However, a progressive application of CPI indexing does not by itself achieve 75-year solvency, and further changes would be necessary to do so.

Figure 6 shows what happens to the benefit formula when each of these indexing scenarios is scaled to achieve comparable levels of solvency over 75 years. Under progressive price indexing, because benefits for lower earners are protected while higher earnings levels are indexed to prices, the benefit formula begins to flatten out, causing the line in figure 6 to plateau. Thus, under this scenario, most individuals with earnings above a certain level would receive about the same level of benefits regardless of income—in the case of figure 6, a retiree with average indexed monthly earnings of $2,000 would receive a similar benefit level to someone with average indexed monthly earnings of $7,000. Since progressive price indexing would change the shape of the benefit formula, making it more progressive, it would reduce individual equity for higher earners, as they would receive much lower benefits relative to their contributions. While proposals that have suggested progressive indexing have focused on using prices, any index can be adjusted to achieve the desired level of progressivity, and the results will likely be similar.
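A stylized sketch of the sliding-scale idea is below; the protected and maximum AIME levels and the growth rates are illustrative assumptions, and actual proposals implement the blend through the PIA formula rather than through a worker-level multiplier.

```python
def progressive_index_factor(aime, years, protect_aime=1500, max_aime=7000,
                             wage_growth=0.039, price_growth=0.028):
    """Cumulative adjustment to a worker's currently scheduled benefit.
    Workers at or below the protected AIME keep full wage indexing (factor 1),
    the highest earners are fully price indexed, and workers in between get a
    proportional mix, so reductions grow with earnings."""
    share_price = min(max(aime - protect_aime, 0) / (max_aime - protect_aime), 1)
    blended_growth = (1 - share_price) * wage_growth + share_price * price_growth
    return ((1 + blended_growth) / (1 + wage_growth)) ** years

for aime in (1000, 2000, 4000, 7000):
    print(aime, round(progressive_index_factor(aime, years=40), 2))
# 1000 1.0   -- protected: full currently scheduled benefit
# 2000 0.96  -- small reduction
# 4000 0.82  -- larger reduction
# 7000 0.65  -- fully price indexed: largest reduction
```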
However, to the extent that wages grow faster than the new index over a long period of time, the benefit formula will eventually flatten out and all individuals above a certain income level would receive the same level of benefits.

Indexing changes could also be applied to program financing. Under the current structure of the system, one way this could be accomplished is by indexing the Social Security payroll tax rate. As with indexing benefits, the payroll tax rate could be indexed to any economic or demographic variable. Under the tax scenarios presented, only the indexing of taxes would change, so promised benefits would be maintained. However, workers would be paying more in payroll taxes, which, like any tax change, could affect work, saving, and investment decisions. While benefit levels would be higher under tax increase scenarios, as compared to benefit reduction scenarios, the timing of the tax changes matters, just as it did with benefit changes. Since benefits would be unchanged in the tax-increase-only scenarios, we use benefit-to-tax ratios to compare the effects of different tax increase scenarios. Benefit-to-tax ratios compare the present value of Social Security lifetime benefits with the present value of lifetime Social Security taxes. The benefit-to-tax ratio is an equity measure that focuses on whether, over their lifetimes, beneficiaries can expect to receive a fair return on their contributions or get their "money's worth" from the system. With benefits unchanged in the tax increase scenarios, the benefit-to-tax ratios would vary across scenarios because of differences in the timing of tax increases.

To illustrate the effects of the timing of a change in tax rates, figure 7 shows the benefit-to-tax ratios, for four different birth cohorts, for two tax increase scenarios: (1) the dependency ratio tax indexing scenario scaled to achieve 75-year solvency and (2) our tax increase benchmark scenario that increases taxes just enough to achieve program solvency over a 75-year projection period. By raising payroll taxes once and immediately, the tax increase benchmark would spread the tax burden more evenly across generations. This is seen in figure 7, where the benefit-to-tax ratios are fairly stable across cohorts for this scenario. The dependency ratio tax indexing scenario would increase the tax rate annually, in this case with changes in the dependency ratio. Under this scenario, later cohorts would face a higher tax rate and thus bear more of the tax burden, compared to earlier cohorts. This would result in declining benefit-to-tax ratios across cohorts, with later generations receiving relatively less compared to their contributions.

Indexing changes can also be applied to the COLA used to adjust existing benefits. Under the current structure of the program, benefits for existing beneficiaries are adjusted annually in line with changes in the CPI. The COLA helps to maintain the purchasing power of benefits for current retirees. Some proposals, under the premise that the current CPI overstates the rate of price inflation because of methodological issues associated with how the CPI is calculated, would alter the COLA. Figure 8 shows the difference in benefit growth over time under the current COLA and two alternatives: growing at the rate of the CPI minus 0.22 and growing at the rate of the CPI minus 1. Changes to the COLA would also have adequacy implications.
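The adequacy effect of such COLA changes comes from the compounding of small annual differences. A minimal sketch of that arithmetic is below, interpreting "CPI minus 0.22" as a reduction of 0.22 percentage point and assuming an illustrative 2.8 percent annual CPI; the relative effect depends mainly on the size of the reduction rather than on the assumed inflation rate.

```python
def relative_benefit_after(years, cpi=0.028, reduction=0.0):
    """Benefit under a reduced COLA (CPI minus `reduction`) relative to the
    benefit under the current CPI-based COLA, after a given number of years."""
    return ((1 + cpi - reduction) / (1 + cpi)) ** years

print(round(1 - relative_benefit_after(20, reduction=0.0022), 2))  # 0.04: about 4 percent lower
print(round(1 - relative_benefit_after(20, reduction=0.01), 2))    # 0.18: roughly 17-18 percent lower
```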
After 20 years, benefits growing at the rate of the CPI minus 0.22 would be about 4 percent below the level given by the current COLA, and benefits growing at the rate of the CPI minus 1 would be about 17 percent below that level. This slower benefit growth would improve the finances of the system, but would also alter the distribution of benefits, particularly for some subpopulations. Since changes to the COLA compound over time, those most affected are those with longer life expectancies, for example, women, as they would have the biggest decrease in lifetime benefits because they tend to receive benefits over more years. In addition, as education is correlated with greater life expectancy, highly educated workers would also experience a significant benefit decrease. There could also be a potentially large adverse effect on the benefits paid to disabled beneficiaries, especially among those who become disabled at younger ages and receive benefits for many years. These beneficiaries could have a large decrease in lifetime benefits.

Reducing the COLA would also have equity implications. Since the COLA is applied to all beneficiaries, reductions in the COLA would lower the return on contributions for all beneficiaries. However, the magnitude of the effect will vary across subpopulations, similar to its effect on adequacy. Those individuals who have the biggest decrease in lifetime benefits will have the biggest decrease in individual equity. While these individuals have a large decrease in equity, they would still receive higher lifetime benefits since they live longer and collect benefits over more years. Individuals with shorter life expectancies will experience a decrease in equity, but they will fare comparatively better than other groups that live longer, since their lifetime benefits will decrease much less. Therefore, men, African-Americans, low earners, and less educated individuals would experience a much smaller decrease in equity compared to their counterparts.

Indexing raises other important considerations about the program's role, the stability of the variables underlying the index, and the treatment of Disability Insurance (DI) beneficiaries. The choice of the index implies certain assumptions about the appropriate level of benefits and taxes for the program. Thus, if the current indexing of initial benefits were changed to price growth, there is an implication that the appropriate level of benefits is one that maintains purchasing power over time rather than the current approach that maintains a relative standard of living across age groups (i.e., replacement rates). The solvency effects of an index are predicated upon the relative stability and historical trends of the underlying economic or demographic relationships implied by the index. For example, the 1970s were a period of much instability, in which actual inflation rates and earnings growth diverged markedly from past experience, with the result that benefits grew much faster than expected. Finally, since the benefit formulas for the Old-Age and Survivors Insurance (OASI) and DI programs are linked, an important consideration of any indexing proposal is its effect on the benefits provided to disabled workers. Disabled worker beneficiaries typically become entitled to benefits much sooner than retired workers and under different eligibility criteria.
As with other ways to change benefits, an index that is designed to improve solvency by adjusting retirement benefits may result in large reductions to disabled workers, who often have fewer options to obtain additional income from other sources. The choice of an index suggests certain assumptions about the appropriate level of benefits and the overall goal of the program. The current indexing of initial benefits to wage growth implies that the appropriate level of benefits is one that maintains replacement rates across birth years. In turn, maintaining replacement rates implies a relative standard of adequacy and an assumption that initial benefits should reflect the prevailing standard of living at the time of retirement. In contrast, changing the current indexing of initial benefits to price growth implies that the appropriate level of benefits is one that maintains purchasing power. In turn, maintaining purchasing power implies an absolute standard of adequacy and an assumption that initial benefits should reflect a fixed notion of adequacy regardless of improvements in the standard of living. Also, any index that does not maintain purchasing power results in workers born in one year receiving higher benefits than workers with similar earnings born 1 year later. This would occur with any benefit change that would reduce currently promised benefits more than price indexing initial benefits would, since price indexing maintains the purchasing power of initial benefits. In the case of longevity indexing, if the growth of initial benefits were indexed to life expectancy, then this implies that the increased costs of benefits that stem from increasing life expectancy should be borne by all future beneficiaries, even if society has become richer. Therefore, the desired outcome, in terms of initial benefit levels at the time of retirement, should drive the choice of an index. The current indexing of existing benefits with the COLA implies that maintaining the purchasing power of benefits for current retirees is the appropriate level of benefits. Revising the COLA to reflect a more accurate calculation of the CPI retains this assumption. However, adjusting the COLA in a way that does not keep pace with the CPI would change that assumption and imply a view that the costs of reform should be shared by current as well as future retirees. Similarly, on the revenue side, the program currently uses a constant tax rate, which maintains the same proportion of taxes for all workers earning less than the maximum taxable earnings level. Applying a life expectancy index to payroll tax rates suggests that the appropriate level of taxes is one that prefunds the additional retirement years increased life expectancy will bestow on current workers, but also that the appropriate level of benefits is one that maintains replacement rates, as benefits are unchanged. Indexing raises other considerations about the stability of the underlying relationships between the economic and demographic variables captured by the index. The choice of an index includes issues of risk and methodology. Some indexes could be based on economic variables that are volatile, introducing instability because the index generates wide swings in benefits or taxes. In other cases, long-standing economic or demographic relationships premised by the index could change, resulting in unanticipated and unstable benefit or tax levels. 
Most indexes also pose methodological issues, and these can become difficult to address after the index has been widely used because any correction will have implications for benefits or taxes. An example is the current measurement limitations of the CPI. In other instances, the index may be based on estimates about future trends in variables like mortality that could later prove incorrect and erode public confidence in the system. Some indexes are premised on the past behavior of economic or demographic relationships. If these long-standing relationships diverge from past patterns for a significant period of time, the result may be unanticipated and unstable benefit or tax levels. For example, the 1972 amendments that introduced indexing into the Social Security program were premised on the belief that, over time, wage growth would generally and substantially exceed price inflation. However, for much of the 1970s, actual inflation rates and earnings growth diverged markedly from past experience; prices grew much faster than wages, with the result that benefits grew much faster than anticipated. This development introduced major instability into the program, which was unsustainable. Congress addressed this problem when it passed the 1977 amendments. Moreover, even though the 1977 amendments succeeded in substantially stabilizing the replacement rates for initial benefits, a solvency crisis required reforms just 6 years later with the 1983 amendments. High inflation rates resulted in high COLAs for existing benefits just as recession was depressing payroll tax receipts. The indexing of initial benefits under the 1977 amendments did not address the potential for such economic conditions to affect COLAs or payroll tax receipts. Many indexes have methodological issues associated with their calculation, which can become problems over time. For example, the CPI has long been in use by the Social Security program and other social welfare programs. However, the CPI is not without its methodological problems. Some studies have contended that the CPI overstates inflation for a number of reasons, including that it does not account for consumers' ability to substitute one good for another; the calculation assumes that consumers do not change their buying patterns in response to price changes. Correcting for this "substitution effect" would likely lower the CPI. Changing the calculation in response to this concern might improve accuracy but is controversial because it would also likely result in lower future benefits and put more judgment into the calculation. Indexes that are constructed around assumptions about future experience raise other methodological issues. An example is a mortality index, which seeks to measure future changes in population deaths. Such a measure would presumably capture an aspect of increased longevity or well-being in retirement and could be viewed as a relevant determinant of program benefits or taxes. Accuracy in this index would require forecasts of future mortality based on assumptions about the main determinants influencing future population deaths (e.g., medical advances, diet, and income changes). Such forecasts would require a clear consensus about these factors and how to measure and forecast them. However, researchers currently disagree considerably about the likely magnitude of future mortality change. 
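To make the substitution effect discussed above concrete, the toy example below compares a fixed-basket index with a substitution-adjusted (Fisher) index; the goods, prices, and quantities are hypothetical and are not drawn from this report or from BLS data, and the specific index formulas are standard textbook measures rather than the CPI methodology itself.

```python
from math import sqrt

# Hypothetical two-good example: prices and quantities in a base period
# and a later period in which consumers shift toward the good whose
# price rose less.
p0 = {"good_a": 1.00, "good_b": 1.00}
q0 = {"good_a": 10,   "good_b": 10}
p1 = {"good_a": 1.50, "good_b": 1.05}
q1 = {"good_a": 6,    "good_b": 14}    # substitution toward good_b

def basket_cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

laspeyres = basket_cost(p1, q0) / basket_cost(p0, q0)  # fixed base-period basket
paasche = basket_cost(p1, q1) / basket_cost(p0, q1)    # later-period basket
fisher = sqrt(laspeyres * paasche)                     # allows for substitution

print(f"fixed-basket index:      {100 * (laspeyres - 1):.1f} percent increase")
print(f"substitution-adjusted:   {100 * (fisher - 1):.1f} percent increase")
```

In this stylized case the fixed basket shows prices rising about 27.5 percent while the substitution-adjusted measure shows about 22.9 percent, illustrating why correcting for substitution would likely lower the measured CPI.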
In choosing an index, such methodological issues would need to be carefully considered to maintain public support and confidence. Under the current structure of the U.S. Social Security system, the OASI and DI programs share the same benefit formula. Thus, any changes that affect retired workers will also affect survivors and disabled workers. However, the circumstances facing these beneficiaries differ from those facing retired workers. For example, the disabled worker’s options for alternative sources of income, especially earnings-related income, to augment any reduction in benefits are likely to be more limited than are those for the retired worker. Further, DI beneficiaries enter the program at younger ages and may receive benefits for many years. As a result, disabled beneficiaries could be subject to benefit changes for many years more than those beneficiaries requiring benefits only in retirement. These differing circumstances among beneficiaries raise the issue of whether any proposed indexing changes, or any other benefit changes, should be applied to disabled worker and survivor beneficiaries, as well as to retired worker beneficiaries. If disabled worker beneficiaries are not subject to indexing changes applied to retirees, benefit levels for disabled workers could ultimately be higher than those of retired workers. This difference in benefit levels would occur because disabled workers typically become entitled to benefits sooner than retired workers, and thus any reductions in their replacement factors would be smaller. Such a differential could increase the incentive for older workers to apply for disability benefits as they near retirement age. Excluding the disability program from indexing changes has implications for solvency and raises implementation issues. If the indexing changes are not applied to the disability program, even larger benefit reductions or revenue increases would be needed to achieve fiscal solvency. Since the OASI and DI programs share the same benefit formula, excluding disabled worker beneficiaries from indexing changes might also necessitate the use of two different benefit formulas or require a method to recalculate benefits in order to maintain different indexing in each program. Such changes could lead to confusion among the public about how the programs operate, which may require significant additional public education. Indexing has played an important role in the determination of Social Security’s benefits and revenues for over 30 years. As in other countries seeking national pension system reform, recent proposals to modify the role of indexing in Social Security have primarily focused on addressing the program’s long-term solvency problems. In theory, one index may be better than another in keeping the program in financial balance on a sustainable basis. However, such a conclusion would be based on assumptions about the future behavior of various demographic and economic variables, and those assumptions will always have considerable uncertainty. Future demographic patterns and economic trends could emerge that affect solvency in ways that have not been anticipated. So, while indexing changes may reduce how often Congress needs to rebalance the program’s finances, there is no guarantee that the need will not arise again. Yet program reform, and the role of indexing in that reform, is about more than solvency. Reforms also reflect implicit visions about the size, scope, and purpose of the Social Security system. 
Indexing initial benefits, existing benefits, tax rates, the maximum taxable earnings level, or some other parameter or combination will have different consequences for the level and distribution of benefits and taxes, within and across generations and earnings levels. These questions relate to the trade-off between income adequacy and benefit equity. In the final analysis, indexing, like other individual reforms, comes down to a few critical questions: What is to be accomplished or achieved, who is to be affected, is it affordable and sustainable, and how will the change be phased in over time? Although these issues are complex and controversial, they are not unsolvable; they have been reconciled in the past and can be reconciled now. Indexing can be part of a larger, more comprehensive reform package that would include other elements whose cumulative effect could achieve the desired balance between adequacy and equity while also achieving solvency. The challenge is not whether indexing should be part of any necessary reforms, but that necessary action is taken soon to put Social Security back on a sound financial footing. We provided a draft of this report to SSA and the Department of the Treasury. SSA provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Social Security Administration and the Treasury Department, as well as other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7215, if you have any questions about this report. Other major contributors include Charles Jeszeck, Michael Collins, Anna Bonelli, Charles Ford, Ken Stockbridge, Seyda Wentworth, Joseph Applebaum, and Roger Thomas. Genuine Microsimulation of Social Security and Accounts (GEMINI) is a microsimulation model developed by the Policy Simulation Group (PSG). GEMINI simulates Social Security benefits and taxes for large representative samples of people born in the same year. GEMINI simulates all types of Social Security benefits, including retired worker, spouse, survivor, and disability benefits. It can be used to model a variety of Social Security reforms including the introduction of individual accounts. GEMINI uses inputs from two other PSG models, the Social Security and Accounts Simulator (SSASIM), which has been used in numerous GAO reports, and the Pension Simulator (PENSIM), which has been developed for the Department of Labor. GEMINI relies on SSASIM for economic and demographic projections and relies on PENSIM for simulated life histories of large representative samples of people born in the same year and their spouses. Life histories include educational attainment, labor force participation, earnings, job mobility, marriage, disability, childbirth, retirement, and death. Life histories are validated against data from the Survey of Income and Program Participation, the Current Population Survey, Modeling Income in the Near Term (MINT3), and the Panel Study of Income Dynamics. Additionally, any projected statistics (such as life expectancy, employment patterns, and marital status at age 60) are, where possible, consistent with intermediate cost projections from Social Security Administration’s Office of the Chief Actuary (OCACT). At their best, such models can provide only very rough estimates of future incomes. 
However, these estimates may be useful for comparing future incomes across alternative policy scenarios and over time. GEMINI can be operated as a free-standing model or it can operate as a SSASIM add-on. When operating as an add-on, GEMINI is started automatically by SSASIM for one of two purposes. GEMINI can enable the SSASIM macro model to operate in the Overlapping Cohorts (OLC) mode or it can enable the SSASIM micro model to operate in the Representative Cohort Sample (RCS) mode. The SSASIM OLC mode requests GEMINI to produce samples for each cohort born after 1934 in order to build up aggregate payroll tax revenues and OASDI benefit expenditures for each calendar year, which are used by SSASIM to calculate standard trust fund financial statistics. In either mode, GEMINI operates with the same logic, but typically with smaller cohort sample sizes in OLC mode than in the RCS or stand-alone-model mode. For this report we used GEMINI to simulate Social Security benefits and taxes primarily for 100,000 individuals born in 1985. Benefits and taxes were simulated under our tax increase (promised benefits) and proportional benefit reduction (funded benefits) benchmarks (described below) and various indexation approaches. According to current projections of the Social Security trustees for the next 75 years, revenues will not be adequate to pay full benefits as defined by the current benefit formula. Therefore, estimating future Social Security benefits should reflect that actuarial deficit and account for the fact that some combination of benefit reductions and revenue increases will be necessary to restore long-term solvency. To illustrate a full range of possible outcomes, we developed hypothetical benchmark policy scenarios that would achieve 75-year solvency either by only increasing payroll taxes or by only reducing benefits. In developing these benchmarks, we identified criteria to use to guide their design and selection. Our tax-increase-only benchmark simulates “promised benefits,” or those benefits promised by the current benefit formula, while our benefit-reduction-only benchmarks simulate “funded benefits,” or those benefits for which currently scheduled revenues are projected to be sufficient. Under the latter policy scenarios, the benefit reductions would be phased in between 2010 and 2040 to strike a balance between the size of the incremental reductions each year and the size of the ultimate reduction. SSA actuaries scored our original 2001 benchmark policies and determined the parameters for each that would achieve 75-year solvency. Table 5 summarizes our benchmark policy scenarios. For our benefit reduction scenarios, the actuaries determined these parameters assuming that disabled and survivor benefits would be reduced on the same basis as retired worker and dependent benefits. If disabled and survivor benefits were not reduced at all, reductions in other benefits would be greater than shown in this analysis. According to our analysis, appropriate benchmark policies should ideally be evaluated against the following criteria: 1. Distributional neutrality: The benchmark should reflect the current system as closely as possible while still restoring solvency. In particular, it should try to reflect the goals and effects of the current system with respect to redistribution of income. However, there are many possible ways to interpret what this means, such as a. 
producing a distribution of benefit levels with a shape similar to the distribution under the current benefit formula (as measured by coefficients of variation, skewness, kurtosis, and so forth), b. maintaining a proportional level of income transfers, c. maintaining proportional replacement rates, and d. maintaining proportional rates of return. 2. Demarcating upper and lower bounds: These would be the bounds within which the effects of alternative proposals would fall. For example, one benchmark would reflect restoring solvency solely by increasing payroll taxes and therefore maximizing benefit levels, while another would solely reduce benefits and therefore minimize payroll tax rates. 3. Ability to model: The benchmark should lend itself to being modeled within the GEMINI model. 4. Plausibility: The benchmark should serve as a reasonable alternative within the current debate; otherwise, the benchmark could be perceived as an invalid basis for comparison. 5. Transparency: The benchmark should be readily explainable to the reader. Our tax-increase-only benchmark would raise payroll taxes once and immediately by the amount of Social Security's actuarial deficit as a percentage of payroll. It results in the smallest ultimate tax rate of those we considered and spreads the tax burden most evenly across generations; this is the primary basis for our selection. The later that taxes are increased, the higher the ultimate tax rate needed to achieve solvency, and in turn the higher the tax burden on later taxpayers and the lower the burden on earlier taxpayers. Still, any policy scenario that achieves 75-year solvency only by increasing revenues would have the same effect on the adequacy of future benefits in that promised benefits would not be reduced. Nevertheless, alternative approaches to increasing revenues could have very different effects on individual equity. We developed alternative benefit reduction benchmarks for our analysis. For ease of modeling, all benefit reduction benchmarks take the form of reductions in the benefit formula factors; they differ in the relative size of those reductions across the three factors, which are 90, 32, and 15 percent under the current formula. Each benchmark has three dimensions of specification: scope, phase-in period, and the factor changes themselves. For our analysis, we apply benefit reductions in our benchmarks very generally to all types of benefits, including disability and survivors' benefits as well as old-age benefits. Our objective is to find policies that achieve solvency while reflecting the distributional effects of the current program as closely as possible. Therefore, it would not be appropriate to reduce some benefits and not others. If disabled and survivor benefits were not reduced at all, reductions in other benefits would be deeper than shown in this analysis. We selected a phase-in period that begins with those becoming initially entitled in 2010 and continues for 30 years. We chose this phase-in period to achieve a balance between two competing objectives: (1) minimizing the size of the ultimate benefit reduction and (2) minimizing the size of each year's incremental reduction to avoid "notches," or unduly large incremental reductions. Notches create marked inequities between beneficiaries close in age to each other. It is generally agreed that later birth cohorts already experience lower rates of return on their contributions under the current system. 
Therefore, minimizing the size of the ultimate benefit reduction would also minimize further reductions in rates of return for later cohorts. The smaller each year's reduction, the longer it will take for benefit reductions to achieve solvency, and in turn the greater the eventual reductions will have to be. However, the smallest possible ultimate reduction would be achieved by reducing benefits immediately for all new retirees by 13 percent; this would create a notch. In addition, we believe it is appropriate to delay the first year of the benefit reductions for a few years because those within a few years of retirement would not have adequate time to adjust their retirement planning if the reductions applied immediately. The Maintain Tax Rates (MTR) benchmark in the 1994-1996 Advisory Council report also provided for a similar delay. Finally, the timing of any policy changes in a benchmark scenario should be consistent with the proposals against which the benchmark is compared. The analysis of any proposal assumes that the proposal is enacted, usually within a few years. Consistency requires that any benchmark also assumes enactment of the benchmark policy in the same time frame. Some analysts have suggested using a benchmark scenario in which Congress does not act at all and the trust funds become exhausted. However, such a benchmark assumes that no action is taken while the proposals against which it is compared assume that action is taken, which is inconsistent. It also seems unlikely that a policy enacted over the next few years would wait to reduce benefits until the trust funds are exhausted; such a policy would result in a sudden, large benefit reduction and create substantial inequities across generations. When workers retire, become disabled, or die, Social Security uses their lifetime earnings records to determine each worker's primary insurance amount (PIA), on which the initial benefit and auxiliary benefits are based. The PIA is the result of two elements: the Average Indexed Monthly Earnings (AIME) and the benefit formula. The AIME is determined by taking the lifetime earnings record, indexing it, and taking the average of the highest 35 years of indexed wages. To determine the PIA, the AIME is then applied to a step-like formula, shown here with the bend points in effect for 2006: PIA = 90 percent of the first $656 of AIME, plus 32 percent of AIME over $656 and through $3,955, plus 15 percent of AIME over $3,955, where each percentage applies only to the portion of AIME that falls within that bracket. All of our benefit-reduction benchmarks are variations of changes in PIA formula factors. Proportional reduction: Each formula factor is reduced annually by subtracting a constant proportion of that factor's value under current law, resulting in a constant percentage reduction of currently promised benefits for everyone. That is, f(i,t) = f(i,t-1) - x * f(i,CL), where f(i,t) represents the three PIA formula factors (i = 1, 2, 3) in year t, f(i,CL) represents those factors under current law, and x is the constant proportional formula factor reduction. The value of x is calculated to achieve 75-year solvency, given the chosen phase-in period and scope of reductions. The formula for this reduction specifies that the proportional reduction is always taken as a proportion of the current law factors rather than the factors for each preceding year. This maintains a constant rate of benefit reduction from year to year. In contrast, taking the reduction as a proportion of each preceding year's factors implies a deceleration of the benefit reduction over time because each preceding year's factors get smaller with each reduction. 
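The sketch below is a minimal illustration of the 2006 step formula and the proportional factor reduction just described; the sample AIME of $3,000 and the 10 percent cumulative reduction are illustrative values, not parameters of our benchmarks.

```python
# 2006 PIA bend points and current-law formula factors (90, 32, and 15 percent).
BEND_POINTS = (656, 3955)
CURRENT_LAW_FACTORS = (0.90, 0.32, 0.15)

def pia(aime, factors=CURRENT_LAW_FACTORS):
    """Apply the step-like formula: each factor applies only to the
    portion of AIME that falls within its bracket."""
    b1, b2 = BEND_POINTS
    portions = (min(aime, b1),
                min(max(aime - b1, 0), b2 - b1),
                max(aime - b2, 0))
    return sum(f * p for f, p in zip(factors, portions))

def reduced_factors(cumulative_reduction):
    """Proportional reduction: each factor is cut by the same proportion
    of its current-law value (for example, 0.10 once the cumulative
    reduction has reached 10 percent)."""
    return tuple(f * (1 - cumulative_reduction) for f in CURRENT_LAW_FACTORS)

aime = 3000                                        # illustrative monthly AIME
print(round(pia(aime), 2))                         # current law: about 1340.48
print(round(pia(aime, reduced_factors(0.10)), 2))  # 10 percent lower: about 1206.43
print(tuple(round(f, 3) for f in reduced_factors(0.10)))  # (0.81, 0.288, 0.135)
```

Every AIME level sees the same 10 percent reduction in the PIA itself, while the reduced factors of roughly 81, 28.8, and 13.5 percent correspond to the percentage-point changes discussed next.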
To achieve the same level of 75-year solvency, taking the reduction as a proportion of each preceding year's factors would require a greater proportional reduction in earlier years because of the smaller reductions in later years. The proportional reduction hits lower earners harder than higher earners because taking the constant x percent of the larger formula factors produces a larger percentage-point reduction in the replacement of earnings in the lower segments of the formula. For example, in a year when the cumulative size of the proportional reduction has reached 10 percent, the 90 percent factor would then have been reduced by 9 percentage points, the 32 percent factor by 3.2 percentage points, and the 15 percent factor by 1.5 percentage points. As a result, earnings in the first segment of the benefit formula would be replaced at 9 percentage points less than under the current formula, while earnings in the third segment of the formula would be replaced at only 1.5 percentage points less than under the current formula. Table 6 summarizes the features of our benchmarks. Social Security did not originally use indexing to automatically adjust benefit and tax provisions; only ad hoc changes were made. The 1972 amendments provided for automatic indexing of benefits and taxes for the first time, but the indexing approach for benefits was flawed, introducing potential instability in benefit costs. The 1977 amendments addressed those issues, resulting in the basic framework for indexing benefits still in use today. Before the 1970s, the Social Security program did not use indexing to adjust benefits or taxes automatically. For both new and existing beneficiaries, benefit rates increased only when Congress voted to raise them. The same was true for the tax rate and the cap on the amount of workers' earnings that were subject to the payroll tax. Under the 1972 amendments to the Social Security Act, benefits and taxes were indexed for the first time, and revisions in the 1977 amendments created the basic framework still in use today. Until 1950, Congress legislated no changes to the benefit formula of any kind. As a result, average inflation-adjusted benefits for retired workers fell by 32 percent between 1940 and 1949. Under the 1950 amendments to the Social Security Act, these benefits increased 67 percent in 1 year. Afterward, until 1972, periodic amendments made various ad hoc adjustments to benefit levels. Economic prosperity and regular trust fund surpluses facilitated gradual growth of benefit levels through these ad hoc adjustments. In light of the steady growth of benefit levels, the 1972 amendments instituted automatic adjustments to constrain the growth of benefits as well as to ensure that they kept pace with inflation. Table 7 summarizes the history of benefit increases before 1972. It illustrates that between 1940 and 1971, average benefits for all current beneficiaries tripled while prices nearly doubled and wages more than quintupled. Some benefit increases were faster and some were slower than wage increases. On the revenue side, payroll tax rates have never been indexed. However, Social Security's revenue also depends on the maximum amount of workers' earnings that are subject to the payroll tax. This cap is technically known as the contribution and benefit base because it limits the earnings level used to compute benefits as well as taxes. Just as with benefits, the maximum taxable earnings level did not change until the 1950 amendments even as price and earnings levels were increasing. From 1940 to 1950, the inflation-adjusted value of the cap fell by over 40 percent. 
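The roughly 40 percent decline in the cap's real value can be checked with a back-of-the-envelope calculation; the $3,000 figure is the contribution and benefit base of that era, while the CPI levels used here are approximate annual averages taken from outside this report.

```python
# Back-of-the-envelope check of the cap's loss of real value, 1940-1950.
# The $3,000 cap was unchanged over the period; the CPI levels are
# approximate annual averages (1982-84 = 100), used only for illustration.
cap = 3000
cpi_1940, cpi_1950 = 14.0, 24.1

real_value_1950 = cap * cpi_1940 / cpi_1950       # the 1950 cap in 1940 dollars
decline = 1 - real_value_1950 / cap

print(f"real value of the cap in 1940 dollars: about ${real_value_1950:,.0f}")
print(f"decline in real value: about {decline:.0%}")
```

Under these approximate price levels the cap's real value falls to roughly $1,700 in 1940 dollars, a decline of about 42 percent, consistent with the "over 40 percent" figure above.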
Also, until the 1972 amendments, adjustments to the maximum taxable earnings level were made on an ad hoc basis. With the enactment of the 1972 amendments, the maximum taxable earnings level increased automatically based on increases in average earnings. Figure 9 shows the inflation-adjusted values for the maximum taxable earnings level before automatic adjustments took effect in 1975. Figure 10 shows that as a result of the fluctuations in the maximum taxable earnings level, the proportion of earnings subject to the payroll tax varied widely before indexing, ranging from 71 to 93 percent. The 1972 amendments, in effect, provided for indexing initial benefits twice for new beneficiaries. The indexing changed the benefit formula in the same way that previous ad hoc increases had done. Before the 1972 amendments, benefits were computed essentially by applying different replacement factors to different portions of a worker's earnings. For example, under the 1958 amendments, a worker's PIA would equal 58.85 percent of the first $110 of average monthly wages plus 21.40 percent of the next $290, where 58.85 and 21.40 percent are the replacement factors that determine how much of a worker's earnings will be replaced by the Social Security benefit. Subsequent amendments increased benefits by effectively increasing the replacement factors. For example, the 1965 amendments increased benefits by 7 percent for a given average monthly wage by increasing the replacement factors by 7 percent, to 62.97 percent from 58.85 percent and to 22.9 percent from 21.4 percent. The automatic adjustments under the 1972 amendments increased these same replacement factors according to changes in the CPI. These changes in the benefit computation applied equally to both new and existing beneficiaries. To illustrate how the benefit formula worked, take, for example, a worker with an average monthly wage of $200 who became entitled in 1959 (when the 1958 amendments first took effect). The PIA for this worker would be 58.85 percent of $110 ($64.74) plus 21.4 percent of the $90 of average monthly wage over $110 ($19.26), for a total of $84.00. When the 1965 amendments took effect, this same beneficiary would have the PIA recalculated using the new formula. Assuming no new wages, the average monthly wage would still be $200, and the new PIA would be 62.97 percent of $110 ($69.27) plus 22.9 percent of the $90 over $110 ($20.61), for a total of $89.88, which is 7 percent greater than the previous $84.00. Now consider the example of a new beneficiary who became entitled in 1965 (when the 1965 amendments first became effective). For the purposes of this illustration, to reflect wage growth, assume this worker had an average monthly wage of $240.00, or 20 percent more than our previous worker who became entitled in 1959. For this new beneficiary, the PIA in 1965 would be $99.04, which, as a result of the wage growth, is much more than 7 percent higher (about 18 percent higher) than the initial benefit for the worker in 1959. The 1972 amendments provided for automatic indexing of benefits and taxes for the first time. The indexing approach for benefits was flawed and raised issues that the 1977 amendments addressed; these issues help explain the basic framework for indexing benefits still in use today. 
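The arithmetic in the preceding examples can be reproduced with a short sketch; the replacement factors and wage amounts come from the discussion above, and the helper function, which rounds each component to cents as the examples do, is only an illustration.

```python
from decimal import Decimal

# Replacement factors from the examples above: each pair applies to the
# first $110 of average monthly wages and to wages above $110.
FORMULAS = {"1958 amendments": (Decimal("0.5885"), Decimal("0.2140")),
            "1965 amendments": (Decimal("0.6297"), Decimal("0.2290"))}

def pia(avg_monthly_wage, formula):
    """Compute the PIA, rounding each component to cents as in the
    examples above."""
    wage = Decimal(avg_monthly_wage)
    low, high = FORMULAS[formula]
    low_part = (low * min(wage, Decimal(110))).quantize(Decimal("0.01"))
    high_part = (high * max(wage - Decimal(110), Decimal(0))).quantize(Decimal("0.01"))
    return low_part + high_part

worker_1959 = pia(200, "1958 amendments")      # $84.00
recomputed = pia(200, "1965 amendments")       # $89.88 after the 1965 amendments
new_1965 = pia(240, "1965 amendments")         # $99.04 for the new beneficiary

print(worker_1959, recomputed, new_1965)
print(f"increase for the existing beneficiary: {recomputed / worker_1959 - 1:.0%}")
print(f"increase for the 1965 new beneficiary over 1959: {new_1965 / worker_1959 - 1:.0%}")
```

The existing worker's benefit rises 7 percent from the formula change alone, while the new beneficiary's initial benefit is about 18 percent higher, because it reflects both the richer formula and the higher average wage.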
In particular, the indexing approach in the 1972 amendments resulted in (1) double-indexing benefits to inflation for new beneficiaries though not for existing ones and (2) a form of bracket creep that slowed benefit growth as earnings increased over time. Within a few years, the problems raised by the double indexing under the 1972 amendments became apparent, with benefits growing far faster than anticipated. Under the 1972 amendments, indexing the replacement factors in the benefit formula to inflation had the effect of indexing twice for new beneficiaries. First, the increase in the replacement factors themselves reflected changes in the price level. Second, the benefit calculations were based on earnings levels, which were higher for each new group of beneficiaries, partially as a result of inflation. Thus, benefit levels grew for each new year's group of beneficiaries because both the benefit formula and their higher average wages reflected inflation. For existing beneficiaries who had stopped working, the average earnings used to compute their benefits did not change, so growth in earnings levels did not affect their benefits and double indexing did not occur. Once the double indexing for new beneficiaries was understood, it became clear that benefits needed to be indexed differently for new and existing beneficiaries, a change referred to as "decoupling" benefits. The effect of double indexing on replacement rates could be offset by a type of "bracket creep" in the benefit formula, depending on the relative values of wage and price growth over time. Bracket creep resulted from the progressive benefit formula, which provided lower replacement rates for higher earners than for lower earners. As each year passed and average earnings of new beneficiaries grew, more and more earnings would be replaced at the lower rate used for the upper bracket, making replacement rates fall on average, all else being equal. The combination of double indexing and bracket creep implied in the 1972 amendments introduced a potential instability in Social Security benefit costs. Price growth determined the effects of double indexing, and wage growth determined the effects of bracket creep. The extent to which bracket creep offset the effects of double indexing depended on the relative values of price growth and wage growth, which could vary considerably. Had wage and price growth followed the historical pattern at the time, benefits would not have grown faster than expected and replacement rates would not have risen; the inflation effect and the bracket creep effect would have balanced out. However, during the 1970s, actual rates of inflation and earnings growth diverged markedly from past experience (see fig. 11), with the result that benefit costs grew far faster than revenues. In contrast, an indexing approach that stabilized replacement rates would help to stabilize program costs. To illustrate this, annual benefit costs can be expressed as a fraction of the total taxable payroll in a given year, that is, total covered earnings. In turn, this can be shown to relate closely to replacement rates:

benefit cost rate = total benefits / total taxable earnings
                  = (number of beneficiaries x average benefit) / (number of covered workers x average taxable earnings)
                  = (number of beneficiaries / number of covered workers) x (average benefit / average taxable earnings)

While not precisely a replacement rate, the second term on the last line above, the ratio of the average benefit to average taxable earnings, is closely related to the replacement rates provided under the program. 
While replacement rates are now relatively stable after the 1977 amendments, it is the first term on the last line above—the ratio of beneficiaries to workers—that has been increasing and placing strains on the system’s finances. The inverse of this is the ratio of covered workers to beneficiaries. While 3.3 workers support each Social Security beneficiary today, only 2 workers are expected to be supporting each beneficiary by 2040. (See fig. 12.) Social Security Reform: Answers to Key Questions. GAO-05-193SP. Washington, D.C.: May 2005. Options for Social Security Reform. GAO-05-649R. Washington, D.C.: May 6, 2005. Social Security Reform: Early Action Would Be Prudent. GAO-05-397T. Washington, D.C.: Mar. 9, 2005. Social Security: Distribution of Benefits and Taxes Relative to Earnings Level. GAO-04-747. Washington, D.C.: June 15, 2004. Social Security Reform: Analysis of a Trust Fund Exhaustion Scenario. GAO-03-907. Washington, D.C.: July 29, 2003. Social Security Reform: Analysis of Reform Models Developed by the President’s Commission to Strengthen Social Security. GAO-03-310. Washington, D.C.: Jan. 15, 2003. Social Security: Program’s Role in Helping Ensure Income Adequacy. GAO-02-62. Washington, D.C.: Nov. 30, 2001. Social Security Reform: Potential Effects on SSA’s Disability Programs and Beneficiaries. GAO-01-35. Washington, D.C.: Jan. 24, 2001. Social Security: Evaluating Reform Proposals. GAO/AIMD/HEHS-00-29. Washington, D.C.: Nov. 4, 1999. Social Security: Issues in Comparing Rates of Return with Market Investments. GAO/HEHS-99-110. Washington, D.C.: Aug. 5, 1999. Social Security: Criteria for Evaluating Social Security Reform Proposals. GAO/T-HEHS-99-94. Washington, D.C.: Mar. 25, 1999. Social Security: Different Approaches for Addressing Program Solvency. GAO/HEHS-98-33. Washington, D.C.: July 22, 1998. Social Security: Restoring Long-Term Solvency Will Require Difficult Choices. GAO/T-HEHS-98-95. Washington, D.C.: Feb. 10, 1998.
The financing shortfall currently facing the Social Security program is significant. Without remedial action, program trust funds will be exhausted in 2040. Many recent reform proposals have included modifications of the indexing currently used in the Social Security program. Indexing is a way to link the growth of benefits and/or revenues to changes in an economic or demographic variable. Given the recent attention focused on indexing, this report examines (1) the current use of indexing in the Social Security program and how reform proposals might modify that use, (2) the experiences of other developed nations that have modified indexing, (3) the effects of modifying the indexing on the distribution of benefits, and (4) the key considerations associated with modifying the indexing. To illustrate the effects of different forms of indexing on the distribution of benefits, we calculated benefit levels for a sample of workers born in 1985, using a microsimulation model. We have prepared this report under the Comptroller General's statutory authority to conduct evaluations on his own initiative as part of a continued effort to assist Congress in addressing the challenges facing Social Security. We provided a draft of this report to SSA and the Department of the Treasury. SSA provided technical comments, which we have incorporated as appropriate. Indexing currently plays a key role in determining Social Security's benefits and revenues, and is a central element of many proposals to reform the program. The current indexing provisions that affect most workers and beneficiaries relate to (1) benefit calculations for new beneficiaries, (2) the annual cost-of-living adjustment (COLA) for existing beneficiaries, and (3) the cap on taxable earnings. Some reform proposals would slow benefit growth by indexing the initial benefit formula to changes in prices or life expectancy rather than wages. Some would revise the COLA under the premise that it currently overstates inflation, and some would increase the cap on taxable earnings. National pension reforms in other countries have used indexing in various ways. In countries with high contribution rates that need to address solvency issues, recent changes have generally focused on reducing benefits. Although most Organisation for Economic Co-operation and Development (OECD) countries compute retirement benefits using wage indexing, some have moved to price indexing, or a mix of both. Some countries reflect improvements in life expectancy in computing initial benefits. Reforms in other countries that include indexing changes sometimes affect both current and future retirees. Indexing can have various distributional effects on benefits and revenues. Changing the indexing of initial benefits through the benefit formula typically results in the same percentage change in benefits across income levels regardless of the index used. However, indexing can also be designed to maintain benefits for lower earners while reducing or slowing the growth of benefits for higher earners. Indexing payroll tax rates would maintain scheduled benefit levels but reduce the ratio of benefits to contributions for younger cohorts. Finally, the effect of modifying the COLA would be greater the longer people collect benefits. Indexing raises considerations about the program's role, the treatment of disabled workers, and other issues. 
For example, indexing initial benefits to prices instead of wages implies that benefit levels should maintain purchasing power rather than maintain relative standards of living across age groups (i.e., replacement rates). Also, as with other ways to change benefits, changing the indexing of the benefit formula to improve solvency could also result in benefit reductions for disabled workers as well as retirees.
VBA provides benefits for veterans and their families through five programs: (1) compensation and pension, (2) education, (3) vocational rehabilitation and employment (VRE) services, (4) loan guaranty, and (5) life insurance. It relies on the BDN to administer benefit programs for three of VBA’s five programs: compensation and pension, education, and VRE services. Replacing the aging BDN has been a focus of systems development efforts at VBA since 1986. Originally, the administration planned to modernize the entire system, but after experiencing numerous false starts and spending approximately $300 million on the overall modernization of the BDN, VBA revised its strategy in 1996. It narrowed its focus to replacing only those functionalities that support the compensation and pension program, and began developing a replacement system, which it called VETSNET. As reported by the department in its fiscal year 2008 budget submission, the compensation and pension program is the largest of the three programs that the BDN supports: The compensation and pension program paid about $35 billion in benefits in fiscal year 2006 to about 3.6 million veterans or veterans’ family members. Of this amount, compensation programs paid benefits of about $31 billion to about 3.1 million recipients. Pension programs paid benefits of about $3.5 billion to about 535,000 recipients. The education program paid about $2.8 billion to about 498,000 veterans or their dependents in fiscal year 2006. The VRE services program paid about $574 million for VRE services in 2006 and provides rehabilitation services to approximately 65,700 disabled veteran participants per year. One of the challenges of developing the replacement system is that it must include processes to support the administration of a complex set of benefits. Different categories of veterans and their families are eligible for a number of different types of benefits and payments, some of which are based on financial need. Compensation programs, which are based on service-connected disability or death, provide direct payments to veterans and/or veterans’ dependents and survivors. These programs are not based on income. Pension benefits programs, on the other hand, are income based; these are designed to provide income support to eligible veterans and their families who experience financial hardship. Eligible veterans are those who served in wartime and are permanently and totally disabled for reasons that are not service-connected (or who are age 65 or older). Veterans are also eligible for burial benefits. Survivor benefits may be paid to eligible survivors of veterans, depending on the circumstances. Some of these benefits are based on financial need, such as death pensions for some surviving spouses and children of deceased wartime veterans, and Dependency and Indemnity Compensation to some surviving parents. Finally, certain benefits may be paid to third parties, such as individuals to whom a veteran has given power of attorney or medical service providers designated to receive payments on the veteran’s behalf. Generally, VBA administers benefit programs through 57 veterans benefits regional offices in a process that requires a number of steps, depending on the type of claim. 
When a veteran submits, for example, a compensation claim to any of the regional offices, a veterans service representative must obtain the relevant evidence to evaluate the claim (such as the veteran’s military service records, medical examinations, and treatment records from VA medical facilities or private medical service providers). In the case of pension claims, income information would also be collected. Once all the necessary evidence has been compiled, a rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If the veteran is determined to be eligible for disability compensation, the Rating Veterans Service Representative assigns a percentage rating based on the veteran’s degree of disability. This percentage is used in calculating the amount of payment. Benefits received by veterans are subject to change depending on changing circumstances. More than half of VBA’s workload consists of dealing with such changes. If a veteran believes that a service-connected condition has worsened, for example, the veteran may ask for additional benefits by submitting another claim. The first claim submitted by a veteran is referred to as the original claim, and a subsequent change is referred to as a reopened claim. Since its inception, VETSNET has been plagued by problems. Over the years, we have reported on the project, highlighting concerns about VBA’s software development capabilities. In 1996, our assessment of the department’s software development capability determined that it was immature. In our assessment, we specifically examined VETSNET and concluded that VBA could not reliably develop and maintain high-quality software on any major project within existing cost and schedule constraints. The department showed significant weaknesses in requirements management, software project planning, and software subcontract management, with no identifiable strengths. We also testified that VBA did not follow sound systems development practices on VETSNET, and we concluded that its modernization efforts had inherent risks. Between 1996 and 2002, we continued to identify the department’s weak software development capability as a significant factor contributing to persistent problems in developing and implementing the system. We also reported that VBA continued to work on VETSNET without an integrated project plan. As a result, the development of the system continued to suffer from problems in several areas, including project management, requirements development, and testing. Over the years, we made several recommendations aimed at improving VA’s software development capabilities. Among our recommendations was that the department take actions to achieve greater maturity in its software development processes and that it delay any major investment in software development (beyond that needed to sustain critical day-to-day operations) until it had done so. In addition, we made specific recommendations aimed at improving VETSNET development. For example, we recommended that VA appoint a project manager, thoroughly analyze its current initiative, and develop a number of plans, including a revised compensation and pension replacement strategy and an integrated project plan. VA concurred with our recommendations and took several actions to address them. For example, it appointed a full-time project manager and ensured that business needs were met by certification of user requirements for the system applications. 
The actions taken addressed some of our specific concerns; however, they were not sufficient to fully implement our recommendations or to establish the program on a sound footing. As a result of continuing concerns about the replacement project, in 2005 VA's CIO and its Under Secretary for Benefits contracted for an independent assessment of the department's options for the initiative. The chosen contractor, the Software Engineering Institute (SEI), is a federally funded research and development center operated by Carnegie Mellon University. Its mission is to advance software engineering and related disciplines to ensure the development and operation of systems with predictable and improved cost, schedule, and quality. SEI recommended that the department reduce the pace of development while at the same time taking an aggressive approach to dealing with the management and organizational weaknesses hampering VBA's ability to complete the replacement system. According to SEI, these management and organizational concerns needed to be addressed before the replacement initiative or any similar project could deliver a full, workable solution. For example, the contractor stressed the importance of setting realistic deadlines and commented that there was no credible evidence that VETSNET would be complete by the target date, which at the time of the review had slipped to December 2006. According to the assessment, because this deadline was unrealistic, VBA needed to plan and budget for supporting the BDN so that its ability to pay veterans' benefits would not be disrupted. SEI also noted that different organizational components had independent schedules and priorities, which caused confusion and deprived the department of a program perspective. Further, the contractor concluded that VBA needed to give priority to establishing sound program management to ensure that the project could meet targeted dates. These and other observations were consistent with our long-standing concerns regarding fundamental deficiencies in VBA's management of the project. To help VBA implement the overall recommendation, the contractor's assessment included numerous discussions of activities needed to address these areas of concern, which can be generally categorized as falling into two major types. Overall management concerns with regard to the initiative included (1) the governance structure, including assigning ownership for the project; (2) project planning, including the development schedule and capacity; and (3) the conversion of records currently on the BDN to the replacement system. Software development process improvements were needed in areas including program measures. As recommended by SEI, VBA is continuing to work on the replacement initiative at a reduced pace and taking action to address identified weaknesses in the project's overall management and software development processes. For example, VBA has established a new governance structure and has developed an integrated master schedule that provides additional time and includes the full range of project activities. However, additional effort is needed to complete a number of the corrective actions, such as improving project accountability through monitoring and reporting all project costs. Further, VBA has not yet institutionalized many of the improvements that it has undertaken for the initiative. In particular, process improvements remain in draft and have not been established through documented policies and procedures. 
According to the VETSNET management team, it gave priority to other activities, such as establishing appropriate governance and organizational structures, and it is still gathering information to assist in prioritizing the activities that remain. Nonetheless, if VBA does not institutionalize these improvements, it increases the risk that these process improvements may not be maintained through the life of the project or be available for application to other development initiatives. SEI concluded that VBA’s management issues would need to be addressed as part of the implementation of its overall recommendation. SEI’s overall management concerns focused on the project governance, project planning, and conversion of records currently on the BDN to the replacement system. SEI guidance for software development stresses the need for organizational commitment and the involvement of senior management in overall project governance. In its assessment, SEI noted that because management of the VETSNET project had been assigned to VBA’s information technology (IT) group, certain activities critical to the veterans’ benefits program, but not traditionally managed by the IT group, had not been visible to the project’s management. The contractor pointed out that the IT group, business lines, and regional offices needed to share ownership and management of the replacement project through an established governance process and that the project management office should include business representatives. According to SEI, the project needed to establish ownership responsibility, including addressing total system and process operating costs. In response to the assessment, VBA developed a new governance structure for the initiative, which the Under Secretary for Benefits approved in March 2006. In the new structure, the VETSNET Executive Board that had been in place was expanded and reorganized to serve as a focal point and major governance mechanism for the replacement initiative. A Special Assistant (reporting directly to the Under Secretary) was appointed to coordinate and oversee the initiative as the head of the VETSNET Executive Team, which was established to provide day-to-day operational control and oversight of the replacement initiative. Implementation Teams were also established to conduct the day-to-day activities associated with implementing the initiative. This governance structure established a process for IT, business lines, and regional offices to share ownership and management, as SEI advised. The roles and membership of each of the organizational elements in the new governance structure are described in table 1. When the new governance structure was approved in March 2006, the Under Secretary ensured that those involved in the project gave it high priority, directing certain key personnel (such as members of the executive and implementation teams) to make the initiative their primary responsibility, and other personnel (technical staff that provide support to other systems) with collateral (non-VETSNET) duties to make the project their first priority. He also placed limitations on the transfer of personnel away from the project, recognizing the importance of staff continuity in successfully completing the initiative. Staff members assigned project responsibilities could be reassigned (i.e., given nonpromotion, lateral reassignments) only with approval from the Under Secretary or his deputy. 
By implementing the new governance and organizational structure and ensuring that the project has priority, VBA partially responded to SEI's concerns in this area; however, VBA has not yet taken action with regard to ownership responsibility for total system and process operating costs, as SEI advised. According to administration officials, the replacement initiative is an in-house, contractor-assisted development effort, in which three different contractors provide support for program management, system development, and testing and validation of requirements. VA reported VETSNET system costs to the Congress totaling about $89 million for fiscal years 1996 through 2006, with additional estimated costs of about $62.4 million to complete the initiative in 2009. However, according to project management officials, these costs do not include expenditures for in-house development work. This in-house work involves many VA personnel, as well as travel to various locations for testing and other project-related activities. Thus, considerable costs other than contract costs have been incurred, which have not been tracked and reported as costs for the replacement initiative. Without comprehensive tracking and reporting of costs incurred by the replacement project, the ability of VBA and the Congress to effectively monitor progress could be impaired. A second major area of overall management concern was project planning. In particular, the lack of an integrated master schedule for the VETSNET project was a major concern articulated by SEI, as well as in our prior work. An integrated project plan and schedule should incorporate all the critical areas of system development and be used as a means of determining what needs to be done and when, as well as measuring progress. Such an integrated schedule should consider all dependencies and include subtasks so that deadlines are realistic, and it should incorporate review activities to allow oversight and approval by high-level managers. Among other things, the program plan should also include capacity requirements for resources and technical facilities to support development, testing, user validation, and production. SEI was specifically concerned that releases with overlapping functionality were being developed at the same time, with insufficient time to document or test requirements; this approach constrained resources and added complexity because of the need to integrate completed applications and newly developed functionality. In addition, SEI observed that the VETSNET program suffered from a lack of sufficient test facilities because it did not have enough information to plan for adequate capacity. In response to these project planning concerns, VETSNET management, with contractor support, developed an integrated master schedule to guide development and implementation of the remaining functionalities for the replacement system. The VETSNET Integrated Master Schedule, finalized in September 2006, includes an end-to-end plan and a master schedule. According to VBA, the end-to-end plan documents the end state of the project from a business perspective, which had not previously been done. The master schedule identifies the necessary activities to manage and control the replacement project through completion. The schedule also describes a new software release process that provides more time to work on requirements definition and testing, and allows for more cross-organizational communication to lessen the possibility of not meeting requirements. 
In addition, the new release process includes a series of management reviews to help control the software development process and ensure that top management has continuous visibility of project related activities. These reviews occur at major steps in the system development life cycle (as described in fig. 1: initiation, preliminary design, and so on). Such reviews are intended to ensure that the VETSNET Executive Team and the VETSNET Executive Board agree and accept that the major tasks of each step have been properly performed. Nonetheless, while the Integrated Master Schedule is an important accomplishment, it may not ensure that the project sufficiently addresses capacity planning, one of SEI’s areas of concern. According to its assessment, capacity requirements for the fully functional production system were unclear. Capacity planning is important because program progress depends on the availability of necessary system capacity to perform development and testing; adjustments to such capacity take time and must be planned. If systems do not have adequate capacity to accommodate workload, interruptions or slowdowns could occur. According to SEI, capacity adjustments cannot be made instantly, and program progress will suffer without sufficient attention to resource requirements. However, the VETSNET Integrated Master Schedule does not identify activities or resources devoted to capacity planning. According to officials, the capacity of the corporate environment (that is, corporate information systems, applications, and networks) is being monitored by operational teams with responsibility for maintaining this environment. According to project officials, VETSNET representatives participate in daily conference calls in which the performance of corporate applications is discussed, and changes in application performance are reported to the VETSNET developers for investigation and corrective action. Project officials reported that when a performance degradation occurred in some transactions during performance testing, it was determined that additional computing capacity was needed and would be acquired. One reason why the occurrence of degradation had not been anticipated by the VETSNET project was that capacity planning had not taken place. Unless it ensures that capacity planning and activities are included in the Integrated Master Schedule, the replacement project may face other unanticipated degradations that it must react to after the fact, thus jeopardizing the project’s cost, schedule, and performance. In its assessment, SEI questioned VBA’s approach to developing functionality while concurrently converting records from the BDN to the replacement system. It noted that VBA had chosen to complete software development according to location rather than according to the type of functionality. Specifically, in 2004, VBA began an effort to remove all claims activity (both new and existing claims) at one regional office (Lincoln, Nebraska) from the BDN to the replacement system, developing the software as necessary to accommodate processing the types of claims encountered at that site. The intention was to address each regional office in turn until all sites were converted. According to SEI, this approach had resulted in the development being stalled by obstacles arising from the variety of existing claims. 
The contractor advised VBA to focus first on developing functionality to process original claims and discontinue efforts to convert existing claims until all the necessary functionality had been developed, and the replacement system’s ability to handle new cases of any complexity had been proven by actual experience. In accordance with this advice, VBA stopped converting existing records from the BDN and changed its focus to developing the necessary functionality to process all new compensation claims. According to the integrated master schedule, conversion activities are now timed to follow the release of the needed functionality. That is, according to the schedule, VBA plans to begin converting each type of record from the BDN only after the necessary functionality for the replacement system has been developed and deployed to process that type of record. In addition, the project is mitigating risk by resuming conversions beginning with a test phase. Its strategy is first to convert records for terminated claims—claims that are no longer being paid. Conversion of the terminated records will be followed by additional conversions of records for claims receiving payment at Lincoln and Nashville (these two sites are being used to test system functionality during development). The VETSNET leadership will consider testing complete with the successful conversion at these two sites. However, SEI raised three additional issues with regard to the conversion of records that VBA has not fully addressed: First, SEI expressed concerns that conversion failures could lead to substantial numbers of records being returned to the BDN. Because of differences in the database technologies used for the old system and the replacement system, certain types of errors in BDN records cause conversion to fail (according to SEI, approximately 15 percent of all these records are estimated to have such errors). If records fail to convert correctly, they may need to be returned to the BDN so that benefits can continue to be paid. However, this process is not simple and may involve manually reentering the records. Second, SEI observed that VBA was also depending on manual processes for determining that records were converted successfully, including the use of statistically random samples, and that it was aiming to ensure correctness to a confidence level of 95 percent. However, in the absence of a straightforward method for automatically returning records to the BDN, SEI considered the 5 percent risk of error unacceptable for conversions of large numbers of records. Finally, SEI observed that the lack of automated methods and the complexity of the processes meant that conversions required careful planning and assurance that adequate staff would be available to validate records when the conversions took place. However, the VETSNET leadership has not developed any strategy to address the possibility that a large number of cases might need to be returned to the BDN during the testing phase. For example, it has not included this possibility as a risk in its risk management plan. The absence of a strategy to address this possibility could lead to delays in program execution. Further, VBA has not yet decided whether a possible 5 percent error rate is acceptable or developed a plan for addressing the resulting erroneous records. If VBA does not address these issues in its planning, it increases the risk that veterans may not receive accurate or timely payments. 
Finally, the VETSNET leadership has not yet developed detailed plans that include the scheduled conversions for each regional office and identified staff to perform the necessary validation. Having such plans would reduce the risk that the conversion process could be delayed or fail. In addition to actions addressing the overall management concerns identified by SEI, VBA has taken steps to improve its software development processes in risk management, requirements management, defect/change management, and performance measures. SEI described weaknesses in all of these areas. The steps taken have generally been effective in addressing the identified weaknesses, but VBA has not yet institutionalized many of these improvements. According to the VETSNET management team, it made a conscious decision first to establish the governance, build the organization, implement processes to gain control, and gather additional information about the project to assist in prioritizing the remaining activities. The team also stated that some of the processes are no longer VBA’s responsibility but are now that of the newly realigned Office of Information and Technology. Nonetheless, if VA does not develop and establish documented policies and procedures to institutionalize these improvements, they may not be maintained through the life of the project or available to be applied to other development initiatives. Risk management is a process for identifying and assessing risks, their impact and status, the probability of their occurrence, and mitigation strategies. Effective risk management includes the development of a risk management plan and tracking and reporting progress against the plan. According to SEI, to the extent that risk management existed at all in the replacement program, it was conducted on a pro forma basis without real effect on program decisions. SEI said that risks and risk mitigation activities needed to be incorporated into all aspects of program planning, budgeting, scheduling, execution, and review. In response to these concerns, VBA has instituted risk management activities that, if properly implemented, should mitigate the risks associated with the project. Specifically, the VETSNET team, with contractor support, developed a risk management plan that was adopted in January 2007. The plan includes procedures for identifying, validating, analyzing, assessing, developing mitigation strategies for, controlling and tracking, reporting, and closing risks. It also establishes criteria for assessing the severity of the risks and their impact. The VETSNET leadership also developed a Risk Registry database, and its contractor reviewed and prioritized the open risks. Each open risk was evaluated, and a proposed disposition of the risk was submitted to VETSNET management. Of the 39 open risks, all but 3 had been addressed as of January 2007. The development documentation for each planned software release also includes sections on risk. In accordance with these plans, the VETSNET leadership is currently capturing potential risks and tracking action items and issues. At weekly status meetings, VETSNET leadership reviews Risk Registry reports of open risks. According to the contractor, the reports identify each risk and provide information on its age, ownership, and severity. However, these risk management activities have not yet been institutionalized through the definition and establishment of associated policies and procedures.
If it does not institutionalize these improvements, VBA increases the possibility that the VETSNET project’s improvements in risk management may not be maintained through the life of the project. Requirements management is a process for establishing and maintaining a common understanding between the business owners and the developers of the requirements to be addressed, as well as verifying that the system meets the agreed requirements. SEI’s report commented that the VETSNET project requirements were not stable, and that the business owners (including subject-matter experts) and developers were separated by many organizational layers, resulting in confusion and delays in development of the system. SEI suggested that VA restructure project activities to focus on defining an effective requirements process. According to SEI, the project needed to ensure that subject-matter experts were included in developing requirements and that evaluation criteria were established for prioritizing requests for changes to requirements. Finally, business owners should confirm that the system is meeting organizational needs. VBA has instituted requirements management activities that, if properly implemented, should help avoid the instability and other requirements problems identified by SEI. Specifically, VBA took steps to establish a requirements management process and to stabilize the requirements. For example, the development release process in the Integrated Master Schedule includes a phase for requirements identification. In addition, the project has established and begun applying evaluation criteria to prioritize change requests for its development releases. Further, until all claims are completely migrated from the BDN to the replacement system, in July 2006, the Under Secretary directed that any additional requirements would have to have his approval. Responding to SEI’s advice regarding the involvement of subject-matter experts and business owners, VBA designed the new release process to directly involve subject-matter experts in requirements workshops. Further, the business teams participate in user-acceptance testing. However, these requirements management activities have not yet been institutionalized through the definition and establishment of policies and procedures. Until they are established, VBA runs the risk that the improved processes will not be maintained through the life of the VETSNET project or used in other software development projects. SEI raised numerous concerns regarding the defect process for the replacement system. These concerns for defect management included (1) identification of defects, (2) determination of cause, and (3) disposition of defects—either by correction or workaround. According to SEI guidance, defect management prevents known defects from hampering the progress of the program. The management process should include clearly identifying and tracking defects, analyzing defects to establish their cause, tracking their disposition, clearly identifying the rationale for not addressing any defects (as well as proposing workarounds), and making information on defects and their resolution broadly available. SEI’s report stated that VBA needed to distinguish defects from changes to requirements and develop a process for defect management. To respond to these concerns and focus program management attention on major defects, the VETSNET Executive Team, with contractor support, conducted an audit of existing defects and revised the defect management process. 
The audit of the defect database determined that the VETSNET database used to capture software defects also included change requests; as a result, work required to address processes that did not work properly was not distinguished from requests for added or changed functionality, which would require review and approval before being addressed. To address this issue, the team separated defects from change requests, and a new severity rating scale was developed. All open defects were recategorized to ensure the major defects would receive appropriate program management attention. Also, all defect categorizations must meet the approval of the VETSNET Business Architect and are scheduled for action as dictated by the severity level. Although these steps address many of SEI’s concerns regarding VBA’s defect management process, more remains to be done before the process is institutionalized. The Program Management Office has reported that actions to revise the defect management process are complete, but the process description is still in draft, and policies and procedures have not been fully established. Without institutionalized policies and procedures for the defect management process, it may not be maintained consistently through the life of the project. According to SEI, performance measures are the only effective mechanism that can provide credible evidence of a program’s progress. The chosen measures must link directly to the expected accomplishments and goals of the system, and they must be applied across all activities of the program. In its report, SEI stated that although VBA was reporting certain types of performance measures, it was not relating these to progress in system development. For example, VBA reported the total number of veterans paid, but did not provide estimates of how many additional veterans would be paid when the system incorporated specific functionalities that were under development. SEI suggested several measures that would provide more evidence of progress, such as increases in the percentage of original claims being paid by the replacement system, as well as user satisfaction and productivity gains resulting from use of the replacement system applications at regional offices. In response to these concerns, the replacement project has begun tracking a number of the measures suggested by SEI, including increases in the percentage of original claims being paid by the replacement system, increases in the percentage of veterans’ service representatives using the new system, and decreases in the percentage of original claims being entered in the BDN rather than the replacement system. Although these measures provide indications of VA’s progress, other measures that could demonstrate the effectiveness of the replacement system have not been developed. For example, VBA has not developed results-oriented measures to capture user satisfaction or productivity gains from the system. Without measuring user satisfaction, VBA has reduced assurance that the replacement system will be accepted by the users. In addition, measures of productivity would provide VBA with another indication of progress toward meeting business needs. After more than 10 years of effort, including the recent management, organizational, and process improvements, VBA has achieved critical functionalities needed to process and pay certain original compensation claims using the replacement system, but it remains far from completing the project.
For example, the replacement system is currently being used to process a portion of the original claims that veterans file for compensation. Nonetheless, the system requires further development before it can be used to process claims for the full range of compensation and pension benefits available to veterans and their dependents. In addition, VBA still faces the substantial task of moving approximately 3.5 million beneficiaries who are currently being served by the BDN to the replacement system. As designed, VETSNET consists of five major system applications that are used in processing benefits: Share—used to establish claims; it records and updates basic information about veterans and dependents both in the BDN and the replacement system. Modern Award Processing–Development (MAP-D)—used to manage the claims development process, including the collection of data to support the claims and the tracking of claims. Rating Board Automation 2000 (RBA 2000)—provides laws and regulations pertaining to disabilities, which are used by rating specialists in evaluating and rating disability claims. Award Processing (Awards)—used to prepare and calculate the benefit award based on the rating specialist’s determination of the claimant’s percentage of disability. It is also used to authorize the claim for payment. Finance and Accounting System (FAS)—used to develop the actual payment record. FAS generates various accounting reports and supports generation and audit of benefit payments. According to VBA officials, all five of the software applications that make up the new system are now being used in VA’s 57 regional offices to establish and process new compensation claims for veterans. As of March 2007, VBA leadership reported that the replacement system was providing monthly compensation payments to almost 50,000 veterans (out of about 3 million veterans who receive such payments). In addition, the replacement system has been processing a steadily increasing percentage of all new compensation claims completed: this measure was 47 percent in January 2007, increasing to 60 percent in February and 83 percent in March. Nonetheless, considerable work must be accomplished before VBA will be able to rely on the replacement system to make payments to all compensation and pension beneficiaries. Specifically, while all five software applications can now be used to process original compensation claims for veterans, two of the applications—Awards and FAS—require further development before the system will be able to process claims for the full range of benefits available to veterans and their dependents. Table 2 shows the status of development of all five applications. According to VBA officials, Awards and FAS do not yet have the capability to process original claims for payment to recipients other than veterans: that is, the applications do not have the functionality to process claims for survivor benefits and third-party/nonveteran payee claims. In addition, further development of these applications is needed to process pension benefits for qualified veterans and their survivors. Until enhancements are made to Awards and FAS, these claims must continue to be processed and paid through the BDN. Also, according to VBA, FAS does not yet have the capability to generate all the necessary accounting reports that support the development of benefits payments to claimants. 
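To make the division of labor among these five applications easier to follow, the sketch below models the flow of a single claim through stages named after them. This is a minimal, purely illustrative sketch in Python: the stage names and their roles come from the descriptions above, but the Claim class, its fields, and the hand-off logic are hypothetical and do not represent VA's actual software design.

    # Illustrative only: a minimal sketch of the claim-processing flow described
    # above. The stage names mirror the five VETSNET applications; the classes,
    # fields, and logic are hypothetical and not drawn from VA's actual design.
    from dataclasses import dataclass, field

    PIPELINE = [
        "Share",     # establish the claim; record basic veteran/dependent data
        "MAP-D",     # develop the claim; collect and track supporting evidence
        "RBA 2000",  # rate the disability using applicable laws and regulations
        "Awards",    # calculate and authorize the benefit award
        "FAS",       # build the payment record and supporting accounting reports
    ]

    @dataclass
    class Claim:
        claim_id: str
        completed_stages: list = field(default_factory=list)

        def next_stage(self):
            """Return the next application the claim must pass through, or None."""
            for stage in PIPELINE:
                if stage not in self.completed_stages:
                    return stage
            return None

        def advance(self):
            """Mark the next stage complete, simulating a hand-off between applications."""
            stage = self.next_stage()
            if stage is None:
                raise ValueError("claim already authorized and paid")
            self.completed_stages.append(stage)
            return stage

    claim = Claim(claim_id="C-0001")
    while claim.next_stage() is not None:
        print(f"{claim.claim_id}: processed in {claim.advance()}")

Running the sketch simply prints the claim moving through Share, MAP-D, RBA 2000, Awards, and FAS in order, mirroring the establish, develop, rate, award, and pay sequence described above.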
As described earlier, VBA now has an Integrated Master Schedule that incorporates the activities that VBA needs to manage in order to complete the replacement project. According to the schedule, the remaining capabilities necessary to process compensation and pension claims are to be developed and deployed in three software releases, as shown in table 3. As table 3 shows, VBA does not expect to complete the development of the functionalities needed to process all new compensation and pension claims until August 2008. However, according to VBA, the estimated completion date is the planned date for completing all development and testing, but it is not necessarily the date when users will be able to begin using the new system. Before such use can begin, other activities need to occur. For example, users must receive training, and VETSNET program management must authorize the use of the system at each regional office. In addition to its remaining software development activities, VBA also faces the challenge of converting records for claims currently paid by the BDN to the replacement system. Existing compensation and pension cases on the BDN number about 3 million and about 535,000, respectively. Table 4 shows the phases in which VA is planning to perform conversions, according to its Integrated Master Schedule. As the table shows, VBA conversion efforts began in March 2007. VBA first performed conversion testing on 310,000 terminated (that is, inactive) compensation cases so that it could develop and apply lessons learned to the conversion of live records. According to VETSNET officials, VBA planned to continue testing by converting live cases at two regional offices (Lincoln and Nashville) that were used as testing sites during development. It then plans to perform the conversion of all live compensation cases. After the compensation conversion is complete, VBA plans to begin efforts to convert pension benefits cases. Based on VETSNET project documentation, activities supporting the releases have so far been performed on time, consistent with the milestones in the recently finalized Integrated Master Schedule. For example, VA completed the Project Initiation and Review Authorization for Release 1 on September 7, 2006, as scheduled (see fig. 1, shown earlier in the report, for the phases of system development and the required milestone reviews). It also completed the Preliminary Design Review and the Critical Design Review as scheduled (on November 20 and December 22, respectively). Planning for Release 2 is also on schedule: a kickoff meeting was held on January 24, 2007, which established the scope of the release, and the Project Initiation and Review Authorization was conducted on February 8. VA has responded to SEI’s assessment by making significant changes in its approach to the project and its overall management, including slowing the pace of development, establishing a new governance structure, and ensuring staff resources. However, VBA has not yet addressed all the issues raised by the SEI assessment. That is, it has not ensured ownership responsibility for total system and process operating costs, because it is not currently monitoring and reporting in-house expenditures for the project. It has not defined processes and resources for capacity planning for the project. In addition, VBA has not yet addressed issues related to the conversion of records now on the BDN to the replacement system. 
Specifically, it has not addressed the risk that large numbers of records may need to be returned to the BDN, decided on the degree of confidence it will require that records are converted accurately, or developed complete plans for converting and validating records. In addition, although VBA has improved key processes for managing the software development, these processes have not yet been institutionalized in defined policies and procedures, and performance measures of productivity and user satisfaction have not been developed. VETSNET management has stated that it gave priority to other activities, such as establishing appropriate governance and organizational structures, and that it is still gathering information to assist in prioritizing the activities that remain. Much work remains to be done to complete the VETSNET initiative. Although VBA has substantially increased the number of claims being paid by the replacement system, it must not only finish the development and deployment of the software, it must also convert the over 3.5 million records now on the BDN to the replacement system. Addressing the remaining issues identified by SEI would improve VBA’s chances of successfully completing the replacement system and ending reliance on the aged BDN to pay compensation and pension benefits. To enhance the likelihood that the replacement system will be successfully completed and implemented, we are recommending that the Secretary of Veterans Affairs take the following five actions: Direct the CIO to institute measures to track in-house expenditures for the project. Direct the VETSNET project to include activities for capacity planning in the VETSNET Integrated Master Schedule and ensure that resources are available for these activities. Direct VBA to (1) develop a strategy to address the risk that large numbers of records may need to be returned to the BDN; (2) determine whether a greater confidence level for accuracy should be required in the conversion process; and (3) develop a detailed validation plan that includes the scheduled conversions for each regional office and the validation team members needed for that specific conversion. Direct the CIO to document and incorporate the improved processes for managing risks, requirements, and defects into specific policy and guidance for the replacement initiative and for future use throughout VBA. Direct the replacement project to develop effective results-oriented performance measures that show changes in efficiency, economy, or improvements in mission performance, as well as measures of user satisfaction, and to monitor and report on the progress of the initiative according to these measures. In providing written comments on a draft of this report, the Secretary of Veterans Affairs agreed with our conclusions and concurred with the report’s recommendations. (The department’s comments are reproduced in app. II.) The comments described actions planned that respond to our recommendations, such as incorporating processes developed for the VETSNET project in standard project management policies, processes, and procedures that would be used for all IT projects in the department. In addition, the comments provided further information on actions already taken, such as details of the records conversion process. If the planned actions are properly implemented, they could help strengthen the department’s management of the replacement system project and improve the chances that the system will be successfully completed. 
We are sending copies of this report to the Chairman and Ranking Minority Member of the Committee on Veterans’ Affairs. We are also sending copies to the Secretary of Veterans Affairs and appropriate congressional committees. We will make copies available to other interested parties upon request. Copies of this report will also be made available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-6304 or by e-mail at melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to determine (1) to what extent the Department of Veterans Affairs (VA) has followed the course of action recommended by the Carnegie Mellon Software Engineering Institute (SEI) and addressed the concerns that it raised and (2) the current status of the replacement project, the Veterans Service Network (VETSNET). To determine the actions taken to implement SEI’s recommended approach and address the concerns it raised, we determined the recommended actions by analyzing the report; compared the concerns identified in the assessment to actions planned, actions undertaken but not completed, and actions implemented by VA officials or contractors; interviewed contractor, VA, and VETSNET program office officials to gain an understanding about processes developed and procedures implemented; and obtained and reviewed relevant VA and contractor documents that disclosed or validated VA responses to SEI’s concerns. To determine the status of system development efforts and the extent that tasks planned for the initiative were completed, we analyzed VA and contractor documentation regarding system operations and development, time frames, and activities planned. We analyzed VA documents that disclosed costs to date and costs planned for completion of the initiative. We did not assess the accuracy of the cost data provided to us. We supplemented our analyses with interviews of VA and contractor personnel involved in the replacement initiative. We visited the Nashville and St. Petersburg regional offices to observe the replacement system in operation and the processes and procedures used to test and validate the replacement system as it was being developed and implemented. We analyzed VA documentation and relevant evidence from contractors involved in the replacement effort to establish the work remaining to complete the project. Finally, we interviewed cognizant VA and contractor officials responsible for developing, testing, and implementing the replacement system. We performed our work at VA offices in Washington, D.C., and at VA regional offices in Nashville, Tennessee, and St. Petersburg, Florida, from April 2006 to April 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, key contributions were made to this report by Barbara Oliver, Assistant Director; Nabajyoti Barkakati; Barbara Collier; Neil Doherty; Matt Grote; Robert Williams; and Charles Youman.
Since 1996, the Veterans Benefits Administration (VBA) has been working on an initiative to replace its aging system for paying compensation and pension benefits. In 2005, concerned about the slow pace of development, VBA contracted with the Software Engineering Institute (SEI) for an independent evaluation of the project, known as the Veterans Service Network (VETSNET). SEI advised VBA to continue working on the project at a reduced pace while addressing management and organization weaknesses that it determined had hampered the project's progress. GAO was requested to determine to what extent the VETSNET project has followed the course of action recommended by SEI and describe the project's current status. To perform its review, GAO analyzed project documentation, conducted site visits, and interviewed key program officials. VBA is generally following the course of action recommended by SEI by continuing to work on the replacement initiative at a reduced pace, while taking action to address identified weaknesses in overall management and software development processes. For example, VBA established a new governance structure for the initiative that included senior management and involved all stakeholders, and it incorporated all critical areas of system development in an integrated master schedule. However, not all of SEI's management concerns have been addressed. For example, SEI advised VBA to ensure that stakeholders take ownership responsibility for the project, including the total system and process operating costs; however, although VBA is tracking costs incurred by contractors, it is not yet tracking and reporting in-house costs incurred by the project. Further, although the project has improved its management processes, such as establishing a process to manage and stabilize system requirements, it has not yet developed processes for capacity planning and management. This will be important for ensuring that further VETSNET development does not lead to delays and slowdowns in processing of benefits. In addition, although the project has established certain performance measures, it has not yet established results-oriented measures for productivity and user satisfaction, both of which will be important for measuring progress. Finally, the process improvements that VBA has incorporated in the replacement initiative remain in draft and have not been established through documented policies and procedures. If VBA does not institutionalize these improvements, it increases the risk that they may not be maintained through the life of the project or be available for application to other development initiatives. After more than 10 years of effort, including the recent management, organizational, and process improvements, VBA has developed critical functionalities needed to process and pay certain original compensation claims using the replacement system, but it remains far from completing the project. According to VBA officials, all five of the major software applications that make up the new system are now being used to establish and process new compensation claims for veterans. In total, the replacement system is currently providing monthly compensation payments to almost 50,000 veterans (out of about 3 million veterans who receive such payments); the system was used to process about 83 percent of all new compensation claims completed in March 2007.
Nonetheless, the system requires further development before it can be used to process claims for the full range of compensation and pension benefits available to veterans and their dependents.
Enacted on March 23, 2010, PPACA involves major health care stakeholders, including federal and state governments, employers, insurers, and health care providers, in an attempt to reform the private insurance market and expand health coverage to the uninsured. IRS is one of several agencies accountable for implementing the legislation and has responsibilities pertaining to 47 PPACA provisions. Some provisions took effect immediately or retroactively while others are to take effect as late as 2018. According to IRS officials, the most challenging of these provisions relate to the health care exchanges to be established by states by 2014. These exchanges are marketplaces for individuals and certain types of employers to purchase health insurance. To support the exchanges, IRS must modify existing or design new IT systems that are capable of transmitting data to and from HHS, help HHS craft eligibility determinations and related definitions, and engage in new interagency coordination, such as with HHS and the Department of Labor. To coordinate agency-wide efforts, a PPACA Executive Steering Committee (ESC) oversees two Program Management Offices (PMOs) that coordinate with Health Care Counsel—which is part of IRS’s Office of Chief Counsel (Counsel)—on the implementation. The Services and Enforcement (S&E) PMO oversees the work completed within IRS’s existing business operating divisions (BOD) as well as the efforts of four workstream teams. The Modernization and Information Technology Services (MITS) PMO leads IT development for the program. The Health Care Counsel provides legal counsel and guidance (see fig. 1). Management of the implementation teams is expected to shift from the program management office to the business operating divisions, MITS, and Counsel as the program is fully implemented. The program management offices and business operating divisions, along with overall IRS leadership, coordinate with IRS’s Office of the Chief Financial Officer (CFO) to allocate resources for implementation efforts. Implementation costs are expected to reach $881 million through fiscal year 2013, with $521 million of that amount being provided through HHS’s Health Insurance Reform Implementation Fund (HIRIF), a fund to which Congress appropriated $1 billion for federal spending to implement PPACA, and the remainder from IRS’s 2013 budget request. Table 1 shows IRS’s PPACA budget and HIRIF funding amounts. IRS’s risk management efforts are crucial in implementing a program of this size. By evaluating the probability and impact of a given risk’s occurrence, risk management encourages planning for ways to lessen the probability or minimize the impact. Much of the remaining implementation work is new to IRS, such as that related to health care exchanges. IRS is more likely to succeed with steps in place to identify and address risks before they occur and make contingency plans for events that cannot be controlled. Though not a guarantee, IRS’s planning for these tasks makes successful implementation more likely. Over half of the 47 provisions requiring action from IRS were statutorily effective in or prior to 2010, forcing IRS to conduct short-term implementations and long-term strategic planning simultaneously. With many short-term projects now completed, IRS has been focusing on its long-term planning since our 2011 report, and has made varying degrees of progress in implementing our four recommendations.
These efforts have helped IRS gain a better understanding and vision for the implementation work and challenges remaining and how IRS would manage risks to the program’s success. IRS has implemented one of our four recommendations from June 2011 to strengthen PPACA implementation efforts by documenting a schedule for developing performance measures for PPACA that are to link to program goals (see table 2). IRS made some progress on the remaining three recommendations from our June 2011 report. Absent more progress, IRS may encounter challenges in overseeing the program if activities in project plans are not linked, cost estimates are not current, and risk mitigation strategies are not properly assessed and decisions documented. We recommended that IRS develop one set of goals and an integrated project plan across IRS to clarify the vision and mitigate the risk that lower level units may work at cross purposes. The program’s governance document now stipulates program goals that align with IRS’s mission. IRS continues to maintain separate project plans for S&E and MITS activities, though it has an additional plan that offers a high level overview of the major PPACA efforts and the related implementation progress across IRS. IRS officials said that the overview provides a sufficient perspective to assess overall progress, but we found it did not align with criteria for leading practices because it is updated manually, leaving it subject to error if those updating the plan are not acting in a timely manner or overlook a change in delivery schedules (see table 3). We recommended that IRS adopt the leading practices outlined in the GAO Cost Guide and shown in table 4 to enhance the reliability of its cost estimate for PPACA. However, little progress has been made as IRS’s cost estimate is largely unchanged since it was developed in 2010. IRS’s Estimating Program Office (EPO) plans to revise the cost estimate this year after reaching a milestone that clarified some business requirements related to IT development. In April 2012, IRS awarded a contract for an independent cost estimate that is slated to include the steps outlined in GAO’s Cost Guide. Our June 2012 report on IRS’s fiscal year 2013 budget recommended that IRS revise its PPACA cost estimate by September 2012, which IRS agreed to do. If IRS’s EPO completes an estimate and it is compared to an independent estimate, IRS will make significant progress in implementing our recommendation. We recommended that IRS’s plan assure that strategic-level risks are identified and that alternative mitigation strategies for risks are evaluated. Our conclusion was based on a comparison between IRS’s risk plan from May 18, 2011, and the criteria outlined in GAO’s risk management framework, shown in figure 2. Of the five stages of the risk management framework, IRS’s risk plan did not meet the criteria associated with three stages: risk assessment, alternative evaluation, and management selection (see table 5). Strategic- level risks are now better addressed because the revised plan calls for involvement of higher level executives, but the plan does not specify policies and procedures involved in evaluating and selecting potential risk mitigation strategies. We discuss this topic further in the next section on IRS’s revised risk management plan. Our assessment of IRS’s revised risk management plan from February 24, 2012, indicated that IRS adheres to the criteria for three of the five stages of our framework for risk management. 
However, the plan’s guidance on evaluating risk mitigation alternatives is not specific or comprehensive, nor does the plan address procedures for management in selecting strategies and documenting decisions made. Figure 3 summarizes our assessment of the IRS revised plan by comparing it to the five stages (see app. II for the full text included in fig. 3). As figure 3 indicates, the risk plan’s discussion of evaluating potential risk mitigations is brief, with some processes and responsibilities left undefined. The plan did not provide specific guidance on the process for doing an evaluation, stating only that alternative strategies should be evaluated according to cost, level of effort, and return on investment. For example, the plan did not identify who is responsible for doing or reviewing the evaluation. Further, the plan did not provide guidance on selecting mitigation strategies, including verifying that resources are available for selected strategies. IRS officials acknowledged that the plan did not include these processes and responsibilities but said that they believe that teams considered such factors when making decisions. Additionally, the plan did not provide guidance on documenting the rationale(s) for selecting one alternative over others. As a result, IRS is less likely to have a trail of analysis that explains the decisions to those who work on PPACA projects in the future. Such a trail is important, as PPACA implementation involves many people managing many tasks over a number of years and across multiple offices. In the years ahead, implementation responsibility will shift from the PMOs to staff in the BODs who may not have been involved in these decisions about the mitigations considered and chosen and may have to develop a new mitigation if the original does not work. IRS officials noted that spending resources to do a thorough evaluation and to document the rationale for decisions may not be practical for risks that have a low probability of occurring or that IRS cannot control, such as a lack of funding. While this may be true, IRS’s risk plan does not offer guidance on factors like the probability of a risk’s occurrence that could affect the level of evaluation and amount of documentation to be done. Without specific guidance on evaluating potential mitigation strategies, the likelihood decreases that teams will conduct a thorough evaluation or have a consistent basis for deciding not to do so. Our analysis indicated that IRS generally implemented its risk management plan consistently for seven of the nine provisions in our sample. These seven provisions covered responsibilities such as premium assistance tax credits for eligible individuals purchasing health insurance coverage through state exchanges, penalties on individuals who do not have minimum essential coverage, penalties on larger employers who do not offer coverage as required, and other taxes, credits, and fees. IRS did not follow its risk management plan for two sample provisions that IRS believed primarily required legal guidance and that IRS assigned primary responsibility for implementing to Counsel. In reviewing the seven sample provisions that were expected to have relatively high dollar impacts and greater risks, we asked for evidence that IRS completed the steps prescribed by its risk plan. The following summarizes the steps and results we found in IRS’s implementation of the plan for the seven provisions (see app.
III for detail on the sample provisions and our assessment of whether the sample provisions followed the four stages of IRS’s risk plan). Our analyses focused on whether rather than how well IRS completed the required steps. For each of the four stages of IRS’s risk plan, the key steps included in the stage and the results we found in implementation were as follows:

Identification. Key steps: brainstorming sessions with relevant stakeholders; guidance from Counsel; complete and document approval of Provision Assessment Form; record identified risks in tracking software, using information from Provision Assessment Form. Results found in implementation: IRS provided evidence of taking these steps.

Tracking. Key steps: monitor risks weekly. Results found in implementation: IRS provided evidence of a weekly review meeting for risks.

Resolution/Mitigation. Key steps: determine risk levels for each recorded risk; evaluate and select risk mitigation strategies; assign risk ownership; establish performance thresholds that offer early warning that chosen mitigation strategies do not work. Results found in implementation: risk levels were determined; risk ownership was assigned; little evidence of mitigation strategy evaluation; provisions with earlier effective dates were more likely to have established early warning indicators.

Reporting. Key steps: regularly scheduled reports reviewed at meetings by IRS management committees. Results found in implementation: IRS provided evidence of taking these steps.

IRS consistently completed all steps outlined in the plan’s Identification, Tracking, and Reporting stages. While some steps called for in the Resolution/Mitigation stage were consistently completed, we did not find an analysis of alternative risk mitigation strategies for several provisions in our sample. This inconsistency could stem from the lack of guidance, as previously discussed, on how to do mitigation evaluations, including documenting why a mitigation strategy is selected over the alternatives considered. As for the two sample provisions that Counsel was responsible for implementing, the risk management plan was not used. When asked about efforts to identify risks for one of the provisions, a Counsel official said that this responsibility rested with the BODs who ultimately would implement the provision. However, the S&E PMO overseeing the work in the BODs told us that Counsel was responsible for the provision’s implementation, including managing the related risks. As a result of confusion as to who should take the lead in identifying and mitigating risks for provisions in which Counsel had lead responsibility, risks may not be identified and mitigated. IRS officials acknowledged that the risk plan was not used for these provisions, noting that the provisions were not expected to have an impact on IRS operations. However, one of the two provisions, an imposition of penalties for underpayments attributable to transactions lacking economic substance, had an operational impact in areas such as tax forms, customer service, and compliance checks, indicating that the risk plan should have been used. Looking more broadly beyond the provisions in our sample, we found that IRS generally implemented its risk management plan in four crosscutting areas: (1) resource allocation, (2) collaboration with other agencies, (3) decisions to extend deadlines or provide transitional relief, and (4) challenges related to addressing compliance and burden. However, IRS did not have a formal system for managing risks when coordinating with HHS.
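The kind of evaluation and documentation that the plan currently leaves unspecified can be illustrated with a small sketch. The Python example below scores hypothetical mitigation alternatives against the cost, level-of-effort, and return-on-investment criteria named in IRS's plan and records the ranking as a rationale for the selection. The alternatives, weights, and record format are invented for illustration; they are not drawn from IRS's actual tools or procedures.

    # Illustrative only: scoring risk mitigation alternatives against the criteria
    # named in the risk plan (cost, level of effort, return on investment) and
    # documenting the rationale for the choice. The weights, alternatives, and
    # record format are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Alternative:
        name: str
        cost: float    # estimated dollars (lower is better)
        effort: float  # estimated staff-days (lower is better)
        roi: float     # expected return on investment (higher is better)

    def score(alt: Alternative, weights=(0.4, 0.3, 0.3)) -> float:
        """Combine the three criteria into one score; cost and effort are penalized."""
        w_cost, w_effort, w_roi = weights
        return w_roi * alt.roi - w_cost * (alt.cost / 100_000) - w_effort * (alt.effort / 100)

    def select_and_document(risk_id: str, alternatives: list) -> dict:
        """Pick the highest-scoring alternative and keep a record of why."""
        ranked = sorted(alternatives, key=score, reverse=True)
        chosen = ranked[0]
        return {
            "risk_id": risk_id,
            "chosen": chosen.name,
            "rationale": [f"{a.name}: score={score(a):.2f}" for a in ranked],
        }

    decision = select_and_document(
        "RISK-042",
        [
            Alternative("Add contractor support", cost=250_000, effort=120, roi=1.8),
            Alternative("Re-sequence releases", cost=40_000, effort=60, roi=1.2),
            Alternative("Accept the risk", cost=0, effort=0, roi=0.0),
        ],
    )
    print(decision)

The value of such a record is the documented ranking: it preserves why one alternative was chosen over the others, which is the kind of decision trail the plan does not currently require.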
While we noted in Table 3 that most activities in project plans were not assigned specific resources, IRS’s risk plan does facilitate knowledge sharing among the entities involved in allocating resources to the program, with the exception previously stated that it does not provide guidance on verifying that resources are available for selected mitigation strategies. The CFO, along with IRS management, allocates IRS’s appropriation to IRS teams doing the implementation work. By involving the CFO in reviewing identified risks, the risk plan ensures that the CFO is aware of any risks related to the availability of resources. Regularly scheduled meetings between the CFO and PPACA implementation leadership also serve to facilitate discussion of the risks related to resource allocation. To the extent that IRS provides more specific guidance in the risk plan on verifying resources and updates its cost estimate for PPACA implementation, IRS will enhance its ability to manage risks related to allocating resources in an efficient manner. IRS and HHS developed an informal process for regular communication on project management, consisting of meetings several times per week to monitor progress on deliverables and solicit needed input on IRS activities that affect other agencies. IRS officials expressed confidence that the informal system of coordination worked effectively. The agencies also jointly established more formal guiding principles for their implementation efforts in 2010 to clarify goals and objectives. Although IRS and HHS regularly coordinated, we did not find a formal system for managing risks threatening the agencies’ success in achieving their goals. Without a joint tracking system for risks related to the agencies’ coordinated efforts, the agencies may duplicate efforts. They could also focus on tracking implementation deadlines while losing sight of risks that pose obstacles to meeting those deadlines. We found consistent evidence IRS had taken steps to identify potential compliance challenges. IRS used its Research, Analysis, and Statistics (RAS) organization to help project the volume of tax returns that would be subject to PPACA and help identify the likely population requiring outreach and education. When historical data for similar provisions were available, IRS attempted to use the data to construct a baseline of anticipated results. Counsel solicited formal comments from stakeholders and taxpayers in response to preliminary guidance. IRS made limited use of other means, such as focus groups, to gain insight into compliance and burden challenges facing the public. IRS officials said that they received informal feedback from conversations with other tax stakeholders, such as groups representing taxpayers, tax software developers, and tax preparers. We also saw evidence, such as with tax credits for small employers offering health insurance, that IRS enforcement staff attempted to account for known or suspected compliance risks. The risk plan calls for early warning thresholds that indicate that results are below expectations and we saw evidence that such thresholds are used regularly. Since our 2011 report, IRS has gained a better understanding of the work and challenges it faces in implementing PPACA. IRS has made varying degrees of progress in implementing our recommendations from 2011. As IRS continues to implement them, IRS leadership will enhance its line of sight over its progress and the challenges that remain. 
With expected implementation costs approaching $1 billion as IRS gets closer to major milestones in 2014, careful consideration of risks and alternatives for mitigating those risks is crucial in meeting deadlines and making the best use of taxpayer dollars. While IRS developed a risk management plan for PPACA implementation that meets several leading practices, IRS did not take any actions to implement our 2011 recommendation on assessing mitigation strategies. Further, IRS could take specific steps such as providing additional guidance on how to evaluate potential mitigation strategies and document the rationales for decisions made. Without additional guidance, IRS staff selecting mitigation strategies may not fully evaluate all alternatives or verify that resources are available for the strategy chosen. Not knowing the rationale behind selecting a mitigation strategy over others could hinder future decisions if the original strategy did not work and the original decision makers are no longer involved. While IRS’s PPACA implementation teams generally followed the steps of the risk management plan in identifying and mitigating risks, the plan was not followed when Counsel led pieces of the implementation. If the plan is not followed, risks may not be addressed. Additionally, without a shared system for tracking and monitoring risks with partner agencies, such as HHS, the agencies will be more likely to overlook potential challenges or duplicate efforts to mitigate risks. To strengthen the PPACA risk management plan, we recommend that the Commissioner of Internal Revenue enhance guidance on evaluating risk mitigation alternatives to clarify who is responsible for doing the evaluation and making decisions based on the results as well as how they might do the evaluation, assure that resources are available for the chosen mitigation strategy, document the mitigation alternatives considered and rationale(s) for the decisions made. To ensure more consistent implementation of the risk management plan, we recommend that the Commissioner of Internal Revenue take the following two actions: ensure that the PPACA risk management plan is applied to provisions in which the Office of Chief Counsel assumes lead responsibility for implementation, and develop agreements with HHS (and other external parties as needed) on a system to record and track details on decisions made or to be made to ensure that risks are identified and mitigated. In a June 1, 2012, letter responding to a draft of this report (which is reprinted in app. IV), the IRS Deputy Commissioner for Services and Enforcement provided comments on our findings and recommendations as well as information on IRS efforts and progress to date on its PPACA implementation. IRS agreed with our first recommendation to enhance guidance in its PPACA risk management plan related to evaluating risk mitigation alternatives. Specifically, IRS agreed to revise its plan to (1) clarify responsibilities for doing the evaluation and making related decisions, (2) assure that resources are available for the mitigation strategy chosen, and (3) document the alternatives considered and the rationale(s) for decisions made. IRS also agreed with our two recommendations to ensure more consistent application of its risk management plan. First, IRS agreed to revise its plan to address the use of the plan for provisions being led by the Office of Chief Counsel. 
Second, IRS agreed to consult with HHS on the best approach to document and track decisions, risks, or both that affect both agencies. Because this recommendation referenced HHS specifically and possibly other external parties in identifying and mitigating these “joint” risks, we encourage IRS to take similar coordinated steps, as needed, when risks arise that affect IRS and these other parties. We are sending copies of this report to appropriate congressional committees, the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or at whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To assess IRS’s progress in addressing our 2011 recommendations for improving PPACA implementation efforts, we compared IRS’s planned and ongoing actions to leading practices described in our report. We analyzed IRS documentation and data, including program goals, project plans, cost estimates, risk management plans, governance plan, and presentations. We interviewed IRS officials and staff at IRS’s National Office, including those in the Office of the Chief Financial Officer (CFO); Office of Chief Counsel; and Services & Enforcement (S&E) and Modernization and Information Technology Services (MITS) Program Management Offices (PMO) to clarify our understanding of IRS’s progress and plans for implementing our recommendations. To assess IRS’s risk management plan for PPACA, we compared the contents of IRS’s Risk Management Plan, governance plan, and high-level action plans to the criteria outlined by GAO’s risk management approach. We met with officials from the S&E PMO to confirm our understanding of the policies and procedures included in IRS’s risk management process. To evaluate how consistently IRS applies its risk management plan for PPACA implementation, we analyzed IRS activities across a sample of PPACA provisions to verify that IRS followed the steps included in its risk plan. To assemble our sample, we identified provisions with the greatest likelihood of adverse effects and potential for the most significant financial consequences if risks were not identified and mitigated. We limited the scope of our sample to the 23 provisions with anticipated revenue and expenditure impacts of over $1 billion over the first 10 years of the legislation, as scored by the Joint Committee on Taxation and Congressional Budget Office. We eliminated 14 provisions to arrive at the final sample of 9 provisions based on the following criteria (see app. III for the 9 provisions in the sample). For example, since we focused on IRS’s use of its PPACA risk plan, which was initially drafted in 2011, we removed six provisions, including: Four provisions that were implemented prior to the existence of IRS’s risk plan: Section 10909 related to an adoption tax credit, Section 1408 (HCERA) related to the exclusion of cellulosic biofuel from a tax credit, Section 9003 related to repealing a tax exclusion in health flexible spending arrangements, and Section 9004 related to a tax on distributions from certain health savings accounts.
Two provisions for which implementation had not started: Section 9005 related to the limits on health flexible spending arrangements, and Section 9001 related to an excise tax on high-cost employer-provided health insurance plans. To target provisions with the greatest likelihood of adverse effects from a failure to mitigate risks, we removed another seven provisions: Three provisions because IRS had identified only low-level risks for them: Section 9013 related to the medical expense deduction threshold, Section 1405 (HCERA) related to an excise tax on medical devices, and Section 9012 related to the elimination of an employer deduction for a retiree prescription drug subsidy. Four provisions for which only one risk had been identified: Section 1322 related to a tax exemption for start-up nonprofit health insurers, Section 6301 related to a fee on health insurance plans, Section 10907 related to an excise tax on tanning salon services, and Section 9010 related to an annual fee on health insurers. Finally, because of overlap in the remaining provisions that required very similar work for IRS, we removed a provision from Section 9015 related to an increase of the Hospital Insurance tax on wages over a specified threshold. We asked IRS to provide evidence of its risk management activity in four key areas. For three of these areas—resource allocation, coordination with external partners, and compliance and burden challenges—we also sought this documentation as part of our work on the nine provisions. We analyzed IRS’s responses and documentation, including risk logs, to determine what gaps, if any, existed between the steps called for by the risk plan and the actions that IRS took. In conducting this work, we interviewed IRS officials and staff responsible for PPACA implementation, including officials from the PMOs for S&E and MITS, the Office of Chief Counsel, and the Office of the CFO, as well as officials from the Department of Health and Human Services. For the risks related to the fourth key area—deadline extensions and other transitional relief—we interviewed officials in the Office of Chief Counsel. We sought information on their approach to understand how Chief Counsel coordinates with implementation teams about risks as decisions are considered and made about the extensions and relief. We conducted this performance audit from August 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To assess how IRS’s revised risk plan meets the criteria for each of GAO’s risk management framework stages, we compared the criteria for each stage of the framework to the steps included in each of the stages of IRS’s risk management plan. Table 7 shows how IRS’s risk management plan meets the criteria for the risk management framework. In evaluating IRS’s responses to a sample of nine PPACA provisions, we found that IRS generally followed the plan to identify, track, and report risks. 
As discussed in our report, exceptions were (1) IRS did not consistently evaluate potential risk mitigation strategies in the Resolution/Mitigation stage of its risk plan, and (2) the risk plan was not used when the Office of Chief Counsel led the implementation of provisions related to a reinsurance program for early retirees and the economic substance doctrine. Table 8 shows the results of our evaluation. In addition to the individual named above, Thomas Short, Assistant Director; Ben Atwater; Linda Baker; Amy Bowser; Dean Campbell; Jennifer Echard; Rebecca Gambler; Meredith Graves; Sairah Ijaz; Sherrice Kerns; Donna Miller; Patrick Murray; Sabine Paul; and Cynthia Saunders made key contributions to this report. IRS 2013 Budget: Continuing to Improve Information on Program Costs and Results Could Aid in Resource Decision Making. GAO-12-603. Washington, D.C.: June 8, 2012. Small Employer Health Tax Credit: Factors Contributing to Low Use and Complexity. GAO-12-549. Washington, D.C.: May 14, 2012. Patient Protection and Affordable Care Act: IRS Should Expand Its Strategic Approach to Implementation. GAO-11-719. Washington, D.C.: June 25, 2011. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2, 2009. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
PPACA is a significant effort for IRS, with expected costs of $881 million from fiscal years 2010 to 2013 and work planned through 2018. To implement PPACA, IRS must work closely with partner agencies to develop information technology systems that can share data with other agencies. Additionally, IRS is responsible for providing guidance to taxpayers, employers, insurers, and others to ensure compliance with new tax aspects of the law. Furthermore, it will be important for IRS to have systems to consistently identify, assess, mitigate, and monitor potential risks to the program’s success. As requested, this report (1) describes IRS’s progress in addressing GAO recommendations from June 2011 on PPACA implementation, (2) assesses IRS’s revised risk management plan, and (3) assesses how IRS applies its plan in practice. GAO compared IRS’s revised risk plan to GAO’s criteria for risk management and selected 9 provisions of the law in which IRS had a role to determine whether IRS used the risk plan consistently. Because selection focused on provisions that had the most risks and highest dollar impacts, the results are not generalizable but are relevant to how IRS managed risks. The Internal Revenue Service (IRS) has implemented one of GAO’s four recommendations from June 2011 to strengthen the Patient Protection and Affordable Care Act (PPACA) implementation efforts by scheduling the development of performance measures for the PPACA program. IRS has made varying degrees of progress on the other three recommendations: develop program goals and an integrated project plan; develop a cost estimate consistent with GAO’s published guidance; and assure that IRS’s risk management plan identifies strategic-level risks and evaluates associated mitigation options. IRS’s revised risk management plan meets three of five criteria for risk management plans, but the plan does not have specific guidance for evaluating and selecting potential risk mitigation options, such as how to identify who conducts and reviews the analysis, determine the availability of resources for a given strategy, and document for future users the rationale behind decisions made. IRS applied its risk management plan when identifying, tracking, and reporting on implementation risks. Although the risk plan calls for risk mitigation strategies to be evaluated, these evaluations have not been done. IRS officials said that evaluating these strategies would require varying levels of effort because the probability and magnitude of risks differ. However, the plan is silent on this point; it provides no guidance as to when and to what extent an evaluation should be done. Without evaluating potential strategies, IRS may not consider critical factors that affect the program’s success. IRS’s risk management plan was not used when IRS’s Office of Chief Counsel was responsible for implementing two provisions GAO reviewed. Although these provisions primarily required legal counsel and guidance, IRS officials said that one of the provisions also affected IRS operations and could have risks that need to be managed. Additionally, GAO did not find evidence that a risk plan was used to track and mitigate risks when coordinating with partner agencies, such as the Department of Health and Human Services. Without a system for tracking shared risks, IRS is more likely to overlook risks or duplicate efforts. 
GAO recommends that IRS (1) enhance its guidance on evaluating risk mitigation alternatives and documenting decisions, (2) use a risk management plan for work led by its Office of Chief Counsel, and (3) develop agreements with external parties to record and track risks that threaten shared goals and objectives. IRS officials agreed with all of GAO’s recommendations.
TSA assumed primary responsibility for implementing and overseeing the security of the nation’s civil aviation system following the terrorist attacks of September 11, 2001. This includes regulating and providing guidance for airports’ and air carriers’ actions and performing its own actions to maintain and improve the security of their perimeters and access controls, as well as establishing and implementing measures to reduce the security risks posed by aviation workers. As of March 2015, TSA had 80 federal security directors (FSD) who oversee the implementation of, and adherence to, TSA requirements at approximately 440 commercial airports nationwide. As the regulatory authority for civil aviation security, TSA inspects airports, air carriers, and other regulated entities to ensure they are in compliance with federal aviation security regulations, TSA-approved airport security programs, and other requirements. For a list of federal requirements pertaining to perimeter and access control security, see appendix II. TSA oversees security operations at airports through compliance inspections, covert testing, and vulnerability assessments to analyze and improve security, among other activities. In general, TSA funds its perimeter and access control security-related activities out of its annual appropriation. TSA also does not generally provide funds directly to airport operators for perimeter and access control security efforts. Funding to address perimeter and access control security needs may, however, be made available to airport operators through other sources, including the Federal Aviation Administration’s (FAA) Airport Improvement Program. Airport operators have direct responsibility for implementing security requirements in accordance with their TSA-approved airport security programs. Airport security programs generally cover the day-to-day aviation operations and implement security requirements for which commercial airports are responsible, including the security of perimeters and access controls protecting security-restricted areas. Among other things, these security programs include procedures for performing background checks on aviation workers and applicable training programs for these workers. Further, airport security programs must also include descriptions of the security-restricted areas—that is, areas of the airport identified in their respective security programs for which access is controlled and the general public is generally not permitted entry—including a map detailing boundaries and pertinent features of the security-restricted areas. Although, pursuant to regulatory requirements, the components of airport security programs are generally consistent across airports, the details of these programs and their implementation can differ widely based on the individual characteristics of airports. TSA generally characterizes airport perimeter security at commercial airports to include protection of the fence line—or perimeter barriers—vehicle and pedestrian gates, maintenance and construction gates, and vehicle roadways, as well as general aviation areas. Access control security generally refers to security features that control access to security-restricted areas of the airport that may include baggage makeup areas, catering facilities, cargo facilities, and fuel farms. Specifically, airport perimeter and access control security measures are designed to prevent unauthorized access onto the airport complex and into security-restricted areas. 
For example, airport operators determine the boundaries for the security-restricted areas of their airport based on the physical layout of the airport and in accordance with TSA requirements. Security programs for commercial airports generally identify designated areas that have varying levels of security, known as secured areas, security identification display area (SIDA), Air Operations Area (AOA), and sterile areas (referred to collectively in this report as “security-restricted areas”). For example, passengers are not permitted unescorted access to secured areas, SIDAs, or the AOA, which typically encompass baggage loading areas, areas near terminal buildings, and other areas close to parked aircraft and airport facilities, as illustrated in figure 1. Aviation workers may access the sterile area through the security checkpoint (at which time they undergo screening similar but not identical to that experienced by a passenger) or through other access points secured by the airport operator in accordance with its security program. Airport operators are responsible for safeguarding their perimeter barriers, preventing and detecting unauthorized entry, and conducting background checks of workers with unescorted access to secured areas. Methods used by airports to control access through perimeters or into security-restricted areas vary because of differences in the design and layout of individual airports, but all access controls must meet minimum performance standards in accordance with TSA requirements. These methods typically involve the use of one or more of the following: pedestrian and vehicle gates; keypad access codes using personal identification numbers, magnetic stripe cards and readers; biometric (e.g., fingerprint) readers; turnstiles; locks and keys; and security personnel (e.g., guards). TSA requires FSDs or their designees to report security events that occur both at the airports for which they are responsible and on board aircraft destined for their airports. TSA collects airport security event data from airports and stores that information in numerous systems. In addition to PARIS, TSA uses the Security Incident Reporting Tool (SIRT)—a tool designed for root cause analysis, among other things—to record security event information in the field (e.g., at airports). Events that TSA reports to these systems may include a range of occurrences, such as an inebriated driver crashing through a perimeter fence or a baggage handler using his access to smuggle drugs to outbound passengers. In October 2012, TSA updated its operations directive related to reporting security events. While TSA redefined some event categories and expanded the overall number of event categories, these event categories are not specific for events that relate to perimeter or access control. As a result, the number of events that directly relate to perimeter and access control security may be over- or under-represented in any analysis. Figure 2 shows the estimated number of events potentially related to perimeter and access control security from fiscal years 2009 through 2015, by fiscal year. Since it is not feasible to protect all assets and systems against every possible threat, DHS uses a risk management approach to prioritize its investments, develop plans, and allocate resources in a risk-informed way that balances security and commerce. 
Risk management calls for a cost-effective use of resources and focuses on developing and implementing protective actions that offer the greatest mitigation of risk for any given expenditure. A risk management approach entails a continual process of managing risk through a series of actions, including setting goals and objectives, assessing risk, evaluating alternatives, selecting initiatives to undertake, and implementing and monitoring those initiatives. DHS developed the NIPP, which establishes a risk management framework to help its critical infrastructure stakeholders determine where and how to invest limited resources. In accordance with the Homeland Security Act of 2002 and Homeland Security Presidential Directive/HSPD-7, DHS released the NIPP in 2006, which it later updated in 2009 and 2013. The NIPP risk management framework includes setting goals and objectives, identifying infrastructure, assessing and analyzing risks, implementing risk management activities, and measuring effectiveness. This framework constitutes a continuous process that is informed by information sharing among critical infrastructure partners. See figure 3 for these elements of critical infrastructure risk management. The NIPP sets forth risk management principles that include a comprehensive risk assessment process that requires agencies to consider all elements of risk—threat, vulnerability, and consequence. These elements are defined as the following: Threat is a natural or manmade occurrence, individual, entity, or action that has or indicates the potential to harm life, information, operations, the environment, and/or property. For the purpose of calculating risk, the threat of an intentional hazard is generally estimated as the likelihood of an attack being attempted by an adversary. The threat likelihood is estimated based on the intent and capability of the adversary. Vulnerability is a physical feature or operational attribute that renders an entity open to exploitation or susceptible to a given hazard. In calculating the risk of an intentional threat, a common measure of vulnerability is the likelihood that an attack is successful, given that it is attempted. Consequence is the negative effect of an event, incident, or occurrence. Since 2009, TSA has made progress in assessing all three components of risk—threat, vulnerability, and consequence—partly in response to our 2009 recommendations. Specifically, in May 2013, TSA developed its Comprehensive Risk Assessment of Perimeter and Access Control Security (Risk Assessment of Airport Security). This assessment was based primarily on information from other TSA efforts to assess airport security risks or components of risk, such as the TSSRA TSA issued in 2013, JVAs TSA conducted with the FBI at select airports in fiscal year 2011, and a Special Emphasis Assessment conducted in September 2012. The TSSRA, TSA’s annual report to Congress on transportation security, establishes risk scores and assesses threat, vulnerability, and consequence for various attack scenarios to all modes within the transportation sector, including domestic aviation. TSA’s Office of Law Enforcement/Federal Air Marshal Service (OLE/FAMS) conducts JVAs every 3 years at commercial airports in the United States identified as high risk (referred to as “triennial” airports) and on a case-by-case basis for other commercial airports. 
TSA’s Office of Security Operations led the Special Emphasis Assessment—a national assessment of a particular aviation security area of emphasis—specifically for the Risk Assessment of Airport Security. For each component of risk, TSA has taken the following broad assessment actions that include actions related to airport perimeter and access control security: Threat. TSA has assessed threats of various attack scenarios for domestic aviation through the TSSRAs, the latest version of which TSA released to Congress in July 2015. In this version of the TSSRA, TSA identified numerous attack scenarios related to domestic aviation, including airport security. These scenarios include threat scores, which TSA estimated on the basis of intent and capability of al-Qaeda, its affiliates, and its adherents as the expected adversary. In addition, as part of the JVA process, the FBI produces a threat intelligence report. According to FBI officials, they provide this report to the airports’ FSDs and make the reports available to the JVA teams prior to the scheduled JVA. Vulnerability. TSA officials stated that their primary measures for assessing the vulnerability of commercial airports to attack—including assessing security of perimeter and access controls—are the JVA process and professional judgment. TSA has increased the number of JVAs conducted at commercial airports since 2009. In 2009, we reported that TSA had conducted JVAs at 57 (13 percent) of the nation’s then approximately 450 commercial airports since fiscal year 2004. As of the end of fiscal year 2015, TSA had conducted JVAs at 81 (about 19 percent) of the 437 airports since fiscal year 2009. In addition, TSA has assessed the vulnerability of airports through the TSSRAs by assigning numerical values to the vulnerability of each attack scenario and related countermeasures based on the judgments of TSA and other subject matter experts, such as airport officials. Consequence. TSA has assessed consequence through the TSSRAs by analyzing both direct and indirect consequences of the various attack scenarios related to domestic airports. According to the TSSRAs, direct consequences (or impacts) include the immediate economic damage following an attack that includes infrastructure replacement costs, deaths, and injuries. Indirect consequences are the secondary macro- and micro-economic impacts that may include the subsequent impact on supply chains, loss of revenues, consumer behaviors, and other downstream costs. To further address components of risk, TSA established an integrated project team in the summer of 2015 to plan for the development of a compliance-based risk assessment. The intent of this effort, according to TSA officials, is to leverage compliance inspection findings as well as other assessment data to yield a risk level that incorporates threat, vulnerability, and consequence for all regulated airports and other entities. TSA officials stated that this new planned effort will differ from the Risk Assessment of Airport Security in that they intend it to be an ongoing process that will be more operational in nature. TSA officials stated that this effort is in its infancy, and will not be developed and implemented until at least fiscal year 2018. While TSA released its Risk Assessment of Airport Security in May 2013, it has not updated this assessment to reflect changes in the airport security risk environment or routinely shared updated national risk information with airports or other stakeholders. 
Specifically, TSA based its Risk Assessment of Airport Security primarily on information from the TSSRA submitted to Congress in May 2013, JVAs conducted in fiscal year 2011, and a Special Emphasis Assessment conducted in September 2012. However, since completion of its Risk Assessment of Airport Security in 2013, TSA has not updated it with information TSA submitted to Congress in the July 2014 and July 2015 versions of the TSSRA. TSA updated these versions of the TSSRA to include additional attack scenarios related to domestic aviation and to reflect an increase in threat scores across all modes of transportation. In the July 2015 TSSRA, TSA stated that one scenario would relate directly to airport security. Furthermore, TSA expanded the 2014 and 2015 TSSRA versions to assess risk from the insider threat. In the latest TSSRA assessment of insider threat, TSA stated that although all domestic aviation-specific scenarios (presented in the July 2014 TSSRA) could be executed without insider support, approximately 65 percent of the attack scenarios would be more easily facilitated by a TSA insider. In this version of the TSSRA, TSA also reported that it should extend the concept of insider threat beyond the TSA workforce to all individuals with privileged access—e.g., aviation workers. TSA also conducted 72 JVAs in fiscal years 2012 through 2015, the results of which are not reflected in TSA’s May 2013 Risk Assessment of Airport Security. Further, TSA’s 2013 assessment relied upon results from a Special Emphasis Assessment that was specifically conducted over a 2-week period in September 2012 to gather physical security data for the Risk Assessment of Perimeter Security from Category X, I, II, and III airports (approximately 67 percent of the about 440 commercial airports and accounting for all airports required by TSA to have a complete security program). As part of the Risk Assessment of Airport Security, TSA discussed ongoing actions to share summary information from this assessment with airport FSDs through email and with airport operators on its external website’s communications board. This summary included high-level information related to perimeter and access control security that was based on the 2013 version of the TSSRA, 24 JVAs TSA conducted in fiscal year 2011 at select airports, and the Special Emphasis Assessment that TSA conducted at selected airports in September 2012. For example, TSA shared the perimeter components that had the highest number of JVA findings and the Special Emphasis Assessment’s topics of concern. According to TSA officials, TSA has not continued to share updated summary information in this format from the JVAs conducted since fiscal year 2011 or from the additional Special Emphasis Assessment related to perimeter and access control security that TSA conducted in November 2012 with airport operators on a broad scale. The NIPP states that effective risk management calls for updating assessments of risk and its components as pertinent information becomes available. The NIPP also states that agencies should share actionable and relevant information across the critical infrastructure community—including airport operators—to build awareness and enable risk-informed decision-making as these stakeholders are crucial consumers of risk information. 
Further, Standards for Internal Control in the Federal Government states that agencies should identify, analyze, and respond to changes and related risks that may impact internal control systems as part of their risk assessment processes, and that agencies should communicate with external parties so that these parties may help the agency achieve its objectives and address related risks. TSA officials have acknowledged that they have not updated the Risk Assessment of Airport Security and do not have plans or a process to update it, or share updated summary information, such as information from JVAs and Special Emphasis Assessments, with airport operators on an ongoing basis. TSA officials agreed that updating the JVA summary information to share with airport operators and other stakeholders, for example, could be useful. However, TSA officials also stated that TSA does not have an effective JVA information collection tool that allows systematic analysis that could enable TSA to share summary information readily with airport operators on an ongoing, continuous basis. Further, TSA officials said they do not have a process for determining when additional updates to the Risk Assessment of Airport Security are needed or when the updated information should be shared. TSA’s Chief Risk Officer stated that TSA currently determines when and how to update its assessments based on judgment and that TSA should update its Risk Assessment of Airport Security to reflect new information regarding the risk environment. The Chief Risk Officer agreed that TSA’s oversight of airport security could benefit from updating and sharing the Risk Assessment of Airport Security on an ongoing basis as well as establishing a process for determining when additional updates are needed. TSA officials stated that they perceive the Risk Assessment of Airport Security as the agency’s primary mechanism for cohesively addressing perimeter and access control security risk issues and sharing that summary information with stakeholders. Further, given the changes in the risk environment reflected in the latest versions of the TSSRA, including the insider threat, and the additional JVAs and the Special Emphasis Assessment, TSA’s Risk Assessment of Airport Security is not up-to-date. According to both TSA and the FBI, the insider threat is one of aviation security’s most pressing concerns. Insiders have significant advantages over others who intend harm to an organization because insiders may have awareness of their organization’s vulnerabilities, such as loosely enforced policies and procedures, or exploitable security measures. The July 2015 TSSRA states that approximately 65 percent of the domestic aviation-specific attack scenarios would be more easily facilitated by a TSA insider. As such, updating the Risk Assessment of Airport Security with TSSRA information that reflects this current, pressing threat as well as with findings from JVAs already conducted, future Special Emphasis Assessments, and any other TSA risk assessment activities would better ensure TSA is basing its risk management decisions on current information and focusing its limited resources on the highest-priority risks to airport security. Sharing information from the updated Risk Assessment of Airport Security with airport security stakeholders on an ongoing basis, including any broader findings from JVAs or Special Emphasis Assessments conducted to date, may enrich airport operators’ understanding of and ability to reduce vulnerabilities identified at their airports. 
Furthermore, establishing a process for determining when additional updates to the Risk Assessment of Airport Security are needed would ensure that future changes in the risk environment are reflected in TSA’s mechanism for consolidating and sharing risk information related to perimeter and access control security. TSA has not comprehensively assessed the vulnerability of airports system-wide through its JVA process—its primary measure for assessing vulnerability at commercial airports. In 2009, we recommended that TSA develop a comprehensive risk assessment for airport perimeter and access control security. As part of that effort, we recommended that TSA evaluate whether its then-current approach to conducting JVAs reasonably assessed vulnerabilities at airports system-wide, and whether an assessment of security vulnerabilities at airports nationwide should be conducted. TSA officials stated in response to our recommendation that the agency’s approach to conducting JVAs appropriately assessed vulnerabilities, but that a future nationwide assessment of all airports’ vulnerability would be appropriate to improve security. Since our 2009 report, TSA has conducted JVAs at 81 commercial airports (approximately 19 percent of the roughly 440 commercial airports nationwide) from fiscal years 2009 through 2015; however, the majority of these airports were either Category X or I airports. TSA officials stated that TSA primarily limits the JVAs it conducts on a routine, triennial basis to 34 Category X through II airports that FAA determined to be high risk based on a variety of factors. In addition to the triennial airports, TSA has selected other airports for JVAs at the direction of DHS or TSA senior leadership or at the request of the FBI. See table 1 for the number and percent of JVAs conducted by airport category. TSA does not include Category III and IV airports in the triennial JVA process. Further, TSA has conducted 5 JVAs at Category III airports and has not conducted any JVAs at Category IV airports; together, these two categories make up approximately 62 percent of commercial airports system-wide. Our analysis of PARIS event data shows that these airports have experienced security events potentially related to perimeter and access control security, which may demonstrate vulnerabilities to airport security applicable across smaller airports system-wide. We found that over 1,670 events, or approximately 9.4 percent of total events that we analyzed over the time period, occurred at Category III and IV airports since fiscal year 2009. These events included, for example, individuals driving cars through or climbing airports’ perimeter fences and aviation workers allowing others to follow them through airport access portals against protocol. The NIPP requires a system-wide—or nationwide—assessment of vulnerability to inform a comprehensive risk assessment and an agency’s risk management approach. The NIPP supplement states that risk assessments may explicitly consider vulnerability in a quantitative or qualitative manner, and must consider and address any interdependencies between how the vulnerabilities and threats were calculated. Also, the vulnerability assessment may be a standalone product or part of a full risk assessment and is to involve the evaluation of specific threats to the asset, system, or network in order to identify areas of weakness that could result in consequences of concern. 
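Where risk is treated quantitatively, the threat, vulnerability, and consequence components described above are often combined into a relative score (for example, as an expected-loss product) to rank scenarios and target limited resources. The short sketch below illustrates only that general approach; the scenario names, probabilities, and consequence values are hypothetical and are not drawn from TSA's TSSRA or JVA methodologies.

    # Illustrative only: a minimal sketch of combining the NIPP's three risk
    # components (threat, vulnerability, consequence) into a relative risk
    # score for ranking attack scenarios. All names and numbers below are
    # hypothetical and do not reflect TSA's actual scoring.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str
        threat: float         # likelihood an attack is attempted (0-1)
        vulnerability: float  # likelihood the attack succeeds, given an attempt (0-1)
        consequence: float    # expected loss if the attack succeeds (e.g., $ millions)

        @property
        def risk(self) -> float:
            # Expected loss: risk = threat x vulnerability x consequence
            return self.threat * self.vulnerability * self.consequence

    scenarios = [
        Scenario("Perimeter breach by vehicle", threat=0.20, vulnerability=0.40, consequence=150.0),
        Scenario("Insider smuggling via access portal", threat=0.35, vulnerability=0.50, consequence=300.0),
        Scenario("Sterile-area intrusion on foot", threat=0.25, vulnerability=0.15, consequence=80.0),
    ]

    # Rank scenarios by relative risk to inform where limited resources go first.
    for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
        print(f"{s.name}: risk score = {s.risk:.1f}")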
At the time of our 2004 and 2009 reports, TSA officials told us that a future nationwide vulnerability assessment would improve overall domestic aviation security. While TSA has since expanded the number of airports at which it has conducted JVAs (increasing from 13 percent through fiscal year 2008 to about 19 percent through fiscal year 2015), TSA officials stated that they have not conducted analyses to determine whether the JVAs are reasonably representative to allow for some system-wide judgment of commercial airports’ vulnerability. Therefore, they cannot ensure that the JVAs represent a system-wide assessment and provide a complete picture of vulnerabilities at all airports, including for those airports categorized as III or IV. TSA officials stated that they are limited in the number of JVAs they conduct because of resource constraints. Specifically, officials stated that JVAs are resource intensive—typically requiring 30 days of advance preparation, about one week for a team of 2 to 5 staff to conduct the JVA, and 60 days to finalize the written report. Further, TSA officials stated that they have limited resources to conduct JVAs above and beyond the 34 triennial airports. As a result, TSA officials said the agency has conducted JVAs at 4 to 6 additional airports per year beyond the triennial airports they identified as high risk and, therefore, has not been able to conduct JVAs at all airports system-wide. FBI officials stated that they defer to TSA on whether to increase the number of JVAs, and have previously been able to manage increases in the number conducted without a significant strain on FBI’s resources. In 2010, TSA developed an airport self-vulnerability assessment tool; however, TSA’s policy is to provide the tool to airports already selected for a JVA in order to inform their assessment, and TSA has not provided it to all airports as a means to inform TSA’s selection of airports for JVAs or to assess vulnerability in lieu of a JVA. Further, in 2011, TSA developed and deployed the Airport Security Self-Evaluation Tool (ASSET) to provide airports with a tool to evaluate their current level of security and compare their activities to specific security measures identified by TSA. However, TSA officials stated that while ASSET was made available to airports on TSA’s external website’s communication board, the tool did not have widespread acceptance or use, possibly due to its technical nature since it requires use of specific software. TSA has subsequently stopped pursuing its industry-wide use. In addition to the JVAs and self-assessment tools, TSA officials stated that the regulatory compliance inspections that TSA inspectors conduct at airports at least annually augment their vulnerability assessments at individual airports. However, compliance inspections of an individual airport’s adherence to federal regulations, while helpful in potentially identifying those airports that would benefit from further vulnerability assessment, do not constitute a system-wide vulnerability assessment. Furthermore, TSA did not include the results of compliance inspections in the Risk Assessment of Airport Security or in the JVA reports. By assessing vulnerability of airports system-wide, TSA could better ensure that it has comprehensively assessed risks to commercial airports’ perimeter and access control security. The events that occurred at Category III and IV airports may not have garnered the same media attention, or produced the same consequences, as those at larger airports. 
However, they are part of what TSA characterizes as a system of interdependent airport hubs and spokes in which the security of all is affected by the security of the weakest one. Consequently, TSA officials stated that the interdependent nature of the system necessitates that TSA protect the overall system as well as individual assets or airports. While we recognize that conducting JVAs at all or a statistically representative sample of the approximately 440 commercial airports in the United States may not be feasible given budget and resource constraints, other approaches to assessing vulnerability may allow TSA to assess vulnerability at airports system-wide. For example, outside of the 34 triennial airports, TSA could select a sample that would reflect a broader representation of airports, including Category III and IV airports, or TSA could provide airports with self-vulnerability assessment tools, the results of which TSA could collect and analyze to inform its understanding of system-wide vulnerabilities. TSA does not analyze its security event data to monitor security events at airports for those specifically related to perimeter and access control security. TSA officials stated that PARIS—TSA’s system of record for security events—is a data repository, among other things. As such, TSA officials stated they cannot easily analyze PARIS data without broadly searching for events potentially related to perimeter and access control and then further analyzing the content of the narratives that make up the majority of the information provided in PARIS. According to officials, this type of query and content analysis of PARIS data would be a laborious and time-consuming process because the events and the associated descriptive narratives are unique, and searching for common terminology that would ensure all relevant events were captured would be challenging. For example, a search of “insider” or “employee” would not necessarily return all events that involved an insider. Because of the mostly narrative content, TSA is unable to readily identify and analyze those entries directly related to airport perimeter and access control security within PARIS. TSA officials stated that SIRT—another TSA data system in which security event information is reported—has more built-in analytical capability and could be used for broad analysis of events related to perimeter and access control security. In a 2012 review of selected airports’ reporting of and TSA’s response to security events, DHS’s Office of Inspector General (OIG) recommended that TSA use one comprehensive definition for what constitutes a security breach and develop a comprehensive oversight program to ensure security events are accurately reported and are properly tracked and analyzed for trends. In 2012, in response to the OIG’s recommendations, TSA updated its operations directive for reporting security events and developed SIRT as a temporary additional tool for, among other things, analyzing the root causes of an event. SIRT has built-in capabilities for performance analysis and reporting as well as root cause analysis, and the tool uses the same security event reporting categories and much of the same information as PARIS. TSA officials stated that SIRT has the capability to provide analysis of trends in perimeter and access control security events using more sophisticated built-in search tools. 
However, while TSA has the capability within SIRT to analyze events and TSA weekly SIRT reports include airport perimeter and access control security events as well as checkpoint events, TSA officials stated that TSA has not seen the need to regularly analyze these data for trends—including changes in trends—specifically in perimeter and access control events as TSA does in weekly reports with other security events, such as confiscation of prohibited items at checkpoints. Standards for Internal Control in the Federal Government states that an agency should design its information systems to respond to the entity’s objectives and risks. Agencies should design a process that uses these objectives and risks to identify information requirements that consider both internal and external users. Quality information should be appropriate, current, complete, accurate, accessible, and timely. Agency management may use this information to make informed decisions and evaluate the agency’s performance in achieving key objectives and addressing risks. TSA officials stated that PARIS was designed to be a case management system for events that resulted in regulatory violations, and was not set up to allow for detailed analysis of all types of airport security events within the system. Although TSA revised its incident reporting operations directive in 2012 to include new event categories and developed SIRT as an enhanced analytical tool to address the OIG’s findings, TSA collects much of the same information by requiring field officials to enter the same security event information in both PARIS and SIRT. As of December 2015, TSA officials stated that they are in the process of incorporating SIRT into Airport Information Management (AIM)—a system that assists airports (and other transportation facilities) in managing day-to-day activities and includes a variety of employee and equipment information. TSA officials stated that using AIM as a single point of entry for security event data would be the preferred approach over duplicate data entry into SIRT and PARIS. However, TSA officials stated that the integration of SIRT into AIM is in its early stages and would occur sometime in 2016, and said they are unsure whether AIM will be the single point of entry for security event data. Therefore, it is unclear whether TSA’s future transition of SIRT into AIM would reduce the overlapping efforts of TSA field officials by providing a single point of entry. Regardless of how TSA collects security event data, by using these data for specific analysis of system-wide trends related to perimeter and access control security, such as by expanding existing weekly reports to focus on perimeter and access control security events, TSA would be better positioned to use any results to inform its management of risk and assessments of risk’s components. TSA has implemented a variety of actions since 2009 to oversee and facilitate perimeter and access control security at the nation’s commercial airports, either through new activities or by enhancing ongoing efforts. (For a list of ongoing efforts TSA initiated prior to 2009 to oversee and facilitate airport security, see app. III.) Since we last reported on airport security in September 2009, TSA has taken steps to develop strategic goals and evaluate risks, enhance aviation worker screening efforts, develop airport planning and reference tools, and assess general airport security through the review and feedback of aviation stakeholders and experts. 
According to TSA officials, these actions have reinforced the layers of security already in place to stop a terrorist attack. The following are two actions that, according to TSA officials, have played an important role in facilitating airport perimeter and access control security. Aviation Security Advisory Committee (ASAC) recommendations. In January 2015, in the wake of the December 2014 Atlanta gun-smuggling event allegedly perpetrated by current and former airline workers, the TSA Acting Administrator requested that ASAC—a TSA advisory committee—evaluate employee access control security at commercial airports. In response, ASAC created the Working Group on Airport Access Control (Working Group), composed of various industry experts and supported by officials from TSA and the Homeland Security Studies and Analysis Institute—a federally funded research and development center—to analyze the adequacy of existing airport employee access control security measures and recommend additional measures to improve worker access controls. In April 2015, the Working Group issued a report on potential vulnerabilities related to airport employee access control security and the insider threat, recommending 28 actions to be taken across five areas: (1) security screening and inspection, (2) vetting of employees and security threat assessment, (3) internal controls and auditing of airport-issued credentials, (4) risk-based security for higher-risk populations and intelligence, and (5) security awareness and vigilance. TSA fully concurred with 26 of the 28 recommendations and partially concurred with 2. As of August 2015, TSA officials reported that the agency had implemented—closed—6 of the 26 recommendations they concurred with and had established timeframes for addressing the remaining 20 recommendations, the last of which TSA anticipates implementing in 2018. According to these officials, TSA will need to form working groups to respond to some recommendations and may find that some recommendations are infeasible. In September 2015, ASAC also issued six recommendations for addressing commercial airport perimeter security vulnerabilities. These recommendations cover four areas of action: (1) adopt select industry best practices (e.g., joint assessments of airport perimeter risk), (2) institute an airport security- focused grants program, (3) incorporate risk-based security into airport security requirements, and (4) embed perimeter security awareness training in annual airport security refresher training for aviation workers. TSA fully concurred with these recommendations. As of February 2016, TSA reported that the agency had established timeframes for addressing the recommendations, with the last recommendation to be implemented by the end of 2016. Playbook. In March 2015, TSA refocused “Playbook,” a risk-based program that authorizes FSDs to carry out random, unpredictable combinations of security operations at all areas of an airport to address real-time threats and to deter potential terrorist attacks. Playbook consists of a menu of predefined “plays”—operations that identify specific resources, activities, locations, and targets—that FSDs or TSA headquarters officials can manually or randomly select using a randomization tool. The plays are conducted by teams of TSA and non-TSA personnel. Historically, while TSA headquarters mandated that Playbook operations be conducted at specific airports, generally it allowed FSDs—in coordination with airport operators—to determine which plays to conduct. 
However, since March 2015, in response to the December 2014 Atlanta gun-smuggling event allegedly perpetrated by current and former airline workers, TSA headquarters has directed that a high percentage of Playbook operations focus on the insider threat, primarily through the random screening of workers, property, and vehicles. According to TSA data, from June 1 through June 30, 2015, Playbook operations identified 50 worker-related security events, such as workers attempting to gain unauthorized access to a security-restricted area, workers attempting to gain access to a restricted area with expired credentials, and workers with prohibited items. Since 2009, TSA has also developed plans for assessing and responding to risks, programs to address worker security issues, tools for airports to assess and respond to risks, guidance and reference tools for airports, and general security activities. Table 2 lists additional actions TSA has taken since 2009 in relation to perimeter and access control security. TSA has not updated its September 2012 National Strategy for Airport Perimeter and Access Control Security (Strategy) to reflect actions it has subsequently taken to assess the airport security risk environment, oversee and facilitate airport security, and address Strategy goals and objectives. The Strategy, which TSA developed in response to our 2009 recommendation, defines how the agency seeks to secure the perimeters and controlled areas of the nation’s commercial airports. As previously discussed in this report, TSA has addressed a key objective of its Strategy by developing the 2013 Risk Assessment of Airport Security, which assesses the airport security risk environment based on the 2013 TSSRA, 2011 JVA information, and a 2012 Special Emphasis Assessment. However, it has not updated the Strategy with the results of this assessment, such as vulnerability information from JVAs, results from the Special Emphasis Assessment, and the direct and indirect consequences of various attack scenarios. Further, it has not updated the Strategy with threat information from the July 2014 and 2015 versions of the TSSRA, including assessments of the risk from the insider threat and how TSA plans to address that risk, as well as the results of JVAs conducted since 2013. TSA also has not incorporated information on key airport security activities it has developed or enhanced since 2009. Two such efforts include Playbook and COMSETT, programs TSA officials have stated are key to addressing airport security risk. Additionally, TSA has not updated the Strategy with the status of its efforts to address various goals and objectives. For example, TSA’s second Strategic goal is to “promote the use of innovative and cost effective” actions for reducing risk. TSA has worked with industry representatives to identify a list of airport innovative (best practice) security measures that airports have implemented, as well as their associated costs and operational effects. The agency has also developed tools that allow airports to compare their security levels against those of other domestic commercial airports and to weigh expected costs associated with alternative security activities against expected benefits. However, TSA has yet to incorporate these developments into its Strategy. 
TSA also has not updated the Strategy with the status of its efforts to identify outcome-based performance measures and performance levels—or targets—for each strategic goal, against which progress can be measured, as promised in its Strategy. TSA proposed outcome-based performance measures in the Strategy for some activities, such as assigning vulnerability scores for each airport that receives a JVA, but did not identify performance targets against which progress can be measured. Moreover, TSA has not updated the Strategy with outcome-based performance measures and performance targets for other airport security-related activities, such as Playbook and COMSETT. In addition to not having measures or targets, TSA also does not have a process in place for determining when additional updates to the Strategy are needed. As we have previously reported, effective strategic plans are the foundation for defining what an agency seeks to accomplish and provide an overarching framework for communicating goals and priorities, allocating resources to inform decision making, and ensuring accountability. Strategic plans, with their goals and objectives, are also the first phase in the risk management framework, which, according to the NIPP, is to be a continuing process with iterative steps and feedback loops that share information—such as identified threats and vulnerabilities—within each element of the framework and allows decision makers to track progress and implement actions to improve security over time. Further, our prior work has shown that leading organizations use acquired knowledge and data—such as information from new activities—to report on their performance. The NIPP and other federal guidance also provide that agencies should assess whether their efforts are effective in achieving key security outcomes so as to help drive future investment and resource decisions and adapt and adjust protective efforts as risk changes. In addition, Standards for Internal Control in the Federal Government states that as programs change, management must continually assess and evaluate its internal control to assure that the control activities being used are effective and updated when necessary. TSA officials stated that as of October 2015 the Strategy had not been updated to reflect the most recent Risk Assessment of Airport Security information, new airport security-related activities, the status of goals and objectives, and outcome-based performance measures and finalized performance levels (targets) for each strategic goal. These officials agreed that updating the Strategy with this information could be useful in guiding TSA’s future airport security actions. They also stated that while they have developed output-based performance measures for many airport security-related activities and programs, they have yet to develop outcome-based performance measures and targets for these programs and other activities—including Strategy goals—due to resource and time constraints. Further, TSA officials stated that the agency does not have a process in place for determining when updates to the Strategy are needed. TSA’s Chief Risk Officer also agreed that TSA’s oversight of airport security could benefit from an updated Strategy, and noted that the agency is in the process of developing the Strategic Operational Vision, a 5- to 7-year national strategy that is to address TSA-planned actions for aviation security; the strategy is scheduled to be released in February 2016. 
However, the official could not say to what extent the national strategy will address perimeter and access control issues. TSA officials stated in February 2016 that they agreed the Strategy should be updated, and plan to revise it to reflect actions the agency has taken since 2012 to assess the airport security risk environment, oversee and facilitate airport security, and address Strategy goals and objectives. Officials said they did not yet have milestones or a timeframe for completing the update, however, and had not yet conducted analysis to identify the status of goals and objectives or developed targeted performance levels for relevant programs, among other things. Updating the Strategy to reflect changes in the airport security risk environment as well as new and enhanced activities TSA has taken to facilitate airport security would help TSA to better inform management decisions and focus resources on the highest-priority risks, consistent with its strategic goals. Further, updating the Strategy to identify the extent to which TSA has achieved goals and objectives would also help the agency to better assess its progress and manage limited resources to focus on areas that potentially require more attention and development. Developing outcome-based performance measures and targets, as required by the NIPP, would also allow TSA to assess to what extent it has achieved security goals and objectives so as to help drive future investment and resource decisions as well as adapt and adjust security efforts as risks change. TSA could also use performance measurement information to help it better identify problems or weaknesses in individual programs and activities as well as the factors causing those problems. Furthermore, establishing a process for determining when additional updates to the Strategy are needed would help to ensure that the Strategy contains the most up-to-date and relevant information for guiding TSA decision making related to airport perimeter and access control security. The 11 commercial airports we contacted have taken a variety of technology- and nontechnology-based approaches since 2009 to strengthen perimeter and access control security, and have encountered challenges related to cost and effectiveness in implementing these approaches. According to airport officials, as well as representatives from industry and specialist organizations, there is no single “best” approach to securing airports against intrusion. Rather, what works best for one airport often may not work for others—each airport is unique in its combination of layout, operations, and the security approaches and methods airports employ, according to these officials. For example, size—both acreage and operations—and available resources vary across airports and play a prominent role in determining the type of security approaches and methods an airport operator employs. Other differentiating factors can include environmental surroundings, individual airport characteristics, previous security events, and airport category. To help them assess these factors and choose the best security approach from the multiple security options available to them, airport operators can, among other things, contract with a private consultant or consult with National Safe Skies Alliance, Inc., who may conduct operational testing of aviation security procedures, technologies, or systems on their behalf. Airport operators we contacted characterized their security approaches and methods as either technology- or nontechnology-based. 
Below are examples of the approaches these airport operators have taken since 2009 as well as the associated challenges.

Technology-based approaches to airport security. Airport operators stated they use a range of technology to varying degrees to enforce airport security. The types of technology airports employ can range from badge readers to much more costly and sophisticated multi-faceted systems, such as perimeter intrusion detection systems (PIDS). With respect to access control security, all 11 of the airport operators we contacted use badge readers to control access to security-restricted areas, and some require a personal identification number to gain access to security-restricted areas. One of the airport operators reported guarding against the use of fraudulent credentials by embedding special technology in workers' identification badges to verify their authenticity. Airport operators also reported using a range of technologies to secure perimeters, from closed-circuit television cameras and fence sensors to PIDS. Generally, larger airports reported testing more pilot technologies and using more sophisticated technology, such as biometric readers (e.g., fingerprint and hand geometry scanners), PIDS, anti-piggybacking systems, and mobile surveillance towers, among other things. (See fig. 4 for an image of a mobile surveillance tower at a commercial airport.) However, one smaller airport operator we contacted has deployed sophisticated technology to strengthen its perimeter and access controls and more readily detect unauthorized access to security-restricted areas.

Airport officials cited cost and limitations in system effectiveness as challenges to using technology to enhance airport security. According to airport and industry association officials and representatives from specialist organizations, the cost of installing, maintaining, and upgrading technology can be a significant challenge to implementing even relatively simple technology as well as more sophisticated detection systems. For example, one large airport operator reported spending approximately $40 million to update its security platform with additional cameras, active shooter alarms, and card readers. Officials from three airports said that they would like to implement biometrics as another layer of access control security but are concerned about installation, maintenance, and update costs. One airport operator reported spending at least $1 million to install biometric technology at selected access portals, and another spent approximately $3 million to update its credentialing system. Officials also cited limitations in technology effectiveness as another significant challenge—for example, in certain situations perimeter systems may report too many false positives for effective use. Officials also noted that system performance can vary. For example, perimeter technology may not function effectively without modification in certain environmental conditions. Airport and industry officials also noted that the human factor can play a significant role in the effectiveness of many security technologies because the systems require human monitoring to interpret and respond to alarms—if the systems register too many false alarms, personnel may eventually ignore alarms, even potentially valid ones.

Nontechnology-based approaches to airport security.
Airport and industry officials stressed that technology is not always the best or only option for ensuring airport security, given the individual needs of the airport. Airport officials said they use a variety of nontechnology tools and techniques to secure their perimeters, such as fences, crash barriers, law enforcement patrols, and security buffer zones, among other things (see fig. 5 for fencing used by one airport to secure its perimeter). Four airport operators told us they have law enforcement or contract personnel continuously patrolling their perimeters, while two operators said they maintain three- and ten-foot buffer zones on both sides of their perimeter fence to better detect intruders. Two airport operators with water perimeters said they address the potential threat of boaters breaching their perimeters by implementing security zones ranging from 100 to 300 feet from their perimeters. Airport operators said they also use nontechnology techniques to monitor access to security-restricted areas—for example, airport operators are required to establish "challenge programs" that train aviation workers to identify potential threats, such as individuals without visible badges. Airport operators also reported conducting random worker screening activities to check aviation workers for prohibited items prior to their entering security-restricted areas. In response to the alleged gun-smuggling event by current and former airline workers at Hartsfield-Jackson Atlanta International Airport (i.e., the insider threat), one airport operator recently implemented full worker screening and another operator is in the process of doing so.

Airport officials and representatives from a specialist organization cited cost as a significant challenge to using various nontechnology approaches to enforce airport security. For example, officials at one airport estimated that implementing full worker screening will cost approximately $35 million in the first year and $10 million annually thereafter.

Recent security events have highlighted the vulnerability of commercial airports to weaknesses in perimeter security and insiders who are intent on using their access privileges to commit criminal and potentially terrorist acts. Since 2009, TSA has taken steps to strengthen the security of airport perimeters and access controls through enhanced requirements, oversight, and guidance, and through the development of a risk assessment that focuses on risks to airport security. Ensuring that TSA's Risk Assessment of Airport Security is based on current threat and vulnerability information that reflects the pressing concern of the insider threat, as well as the most recent known security vulnerabilities, would help TSA ensure that its limited resources are appropriately focused on the highest-priority risks. Moreover, sharing this relevant risk information with airport stakeholders would not only enhance their situational awareness but potentially allow them to make more informed decisions regarding airport security. Establishing a process for determining when additional updates to the Risk Assessment of Airport Security are needed and ensuring they are developed would ensure that TSA's mechanism for assessing risks to perimeter and access control security appropriately reflects changes in the risk environment.
As we reported in 2009, given TSA's position that the interconnected commercial airport network is only as strong as its weakest asset, assessing airport security vulnerability across the network is fundamental to determining the actions and resources that are necessary to reasonably protect it. Assessing the vulnerability of airport security system-wide would help TSA ensure that it has comprehensively assessed risks to commercial airports' perimeter and access control security. Given budget and resource constraints, it might not be feasible to assess the vulnerability of the nation's approximately 440 commercial airports individually, but other approaches—such as assessing a sample that reflects a broader representation of airports or providing airports with a self-vulnerability assessment tool—would provide a system-wide perspective on vulnerability while requiring fewer resources. The assessment of relevant data, including event data that may identify system-wide trends in airport security vulnerabilities and potential threats to airport security, is integral to risk-based decision making. Analyzing these data would help TSA to identify trends in perimeter and access control security as well as improve the agency's understanding of risk.

In response to our 2009 recommendation, TSA developed a strategy to guide and unify the agency's efforts to strengthen airport security. However, if the Strategy does not incorporate the most current assessment of airport security-related risk and new activities TSA has taken to facilitate airport security, its value as a decision-making tool may not be fully realized. Updating the Strategy to reflect TSA's progress in addressing relevant goals and objectives would also help TSA to identify areas that potentially require more attention and a greater share of resources. Perhaps most importantly, the development of outcome-based performance measures and targets would better enable TSA to assess the extent to which its activities have been effective and allow it to more effectively adapt security efforts as risks evolve. Establishing a process for identifying when updates to the Strategy are needed and ensuring they are developed would ensure that TSA has the most relevant and current information available for airport security to guide its decision making.

To help ensure TSA's actions in overseeing and facilitating airport security are based on the most recent available risk information that assesses vulnerabilities system-wide and evaluates security events, and that these actions are orchestrated according to a strategic plan that reflects the agency's goals and objectives and its progress in meeting those goals, we recommend that the Administrator of TSA take the following six actions:

1. Update the Risk Assessment of Airport Security to reflect changes to its risk environment, such as those updates reflected in TSSRA and JVA findings, and share results of this risk assessment with stakeholders on an ongoing basis.

2. Establish and implement a process for determining when additional risk assessment updates are needed.

3. Develop and implement a method for conducting a system-wide assessment of airport vulnerability that will provide a more comprehensive understanding of airport perimeter and access control security vulnerabilities.

4. Use security event data for specific analysis of system-wide trends related to perimeter and access control security to better inform risk management decisions.
5. Update the 2012 Strategy for airport security to reflect changes in risk assessments, agency operations, and the status of goals and objectives. Specifically, this update should reflect information from the Risk Assessment of Airport Security, as well as information contained in the most recent TSSRA and JVAs; new airport security-related activities; the status of TSA efforts to address goals and objectives; and finalized outcome-based performance measures and performance levels—or targets—for each relevant activity and strategic goal.

6. Establish and implement a process for determining when additional updates to the Strategy are needed.

We provided a draft of this report to DHS for its review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix IV; these comments include information regarding TSA's planned actions that was not included in the prior sensitive report. TSA also provided technical comments, which we incorporated as appropriate. DHS concurred with all six recommendations in the report and described actions underway or planned to address them.

With regard to the first recommendation that TSA update the Risk Assessment of Airport Security, DHS concurred and stated that in March 2016 the agency established a National Strategy for Airport Perimeter Access Control Working Group (NSAPAC-WG), composed of various TSA offices, to begin updating the Risk Assessment of Airport Security. This update is to include new data from various TSA programs and assessments, including the TSSRA and JVAs, with the goal of sharing nationwide best practices for mitigating airport perimeter and access control security vulnerabilities with airport operators. TSA expects to complete the update by April 30, 2017. This action, if implemented effectively, should address the intent of our recommendation.

With regard to the second recommendation to establish and implement a process for determining when additional risk assessment updates are needed, DHS concurred and stated that TSA plans to initiate updates to the Risk Assessment of Airport Security once every 3 years. TSA stated that this timeframe is needed to allow for an extended schedule of JVAs and other source material, and for analysis of mature data to identify consistencies and changes and to provide that analysis to airport operators. DHS reported that the NSAPAC-WG will be re-established every 3 years to lead these updates, which are to include a review of all newly collected data, assessments of policies implemented since the last risk assessment, and consideration of possible changes to the assessment. The NSAPAC-WG is to complete its review and revision of the assessment within 2 years of the start of the update. This action, if implemented effectively, could address the intent of our recommendation. However, it is not clear to what extent this process would address changing conditions outside the cycle that could require an immediate update or reexamination of risk. We will continue to monitor TSA's efforts.

With regard to the third recommendation that TSA develop and implement a method for conducting a system-wide assessment of airport vulnerability, DHS concurred and stated that TSA has begun to take steps to develop methods that will provide a more comprehensive understanding of airport security vulnerabilities.
Specifically, TSA has asked airport operators to complete a vulnerability assessment checklist that focuses on perimeter and access control security, including the insider threat, and plans to direct its leadership in the field to work with airport operators to review the assessment results and develop and implement risk mitigation plans. In January 2016, TSA also began to implement the Centralized Security Vulnerability Management Process, an agency-wide process for identifying, addressing, and monitoring systemic security vulnerabilities. Additionally, TSA has organized the Compliance Risk Integrated Project Team, composed of various TSA offices, which focuses on identifying and addressing areas of greatest risk across all TSA-regulated parties, including airport operators. This program is to combine data from ongoing regulatory compliance processes—e.g., annual and targeted airport inspections, special emphasis assessments, inspector outreach, and response activities—JVAs, a new Compliance Vulnerability Assessment program, and risk data to derive a Compliance Risk level. The resulting Compliance Risk level is to drive national, regional, and airport/facility deployment of TSA resources to address those areas identified as highest risk. DHS reported that the new Compliance Vulnerability Assessment component of this program is to draw on data from multiple sources, including JVAs, surface transportation baseline assessments, and cargo vulnerability assessments. According to DHS, as of May 2016, TSA had reviewed the security vulnerability assessments performed by airports in accordance with the vulnerability assessment checklist TSA earlier provided to airports, and is sharing the results of that review with airports and other appropriate stakeholders to support the development of risk mitigation plans. TSA plans to complete the entire compliance risk effort by September 30, 2018. This action, if implemented effectively, could address the intent of our recommendation, but without examining the documentation and underlying analysis, it is too early to know whether it will. We will continue to monitor TSA's efforts.

With regard to the fourth recommendation to use security event data for specific analysis of system-wide trends related to perimeter and access control security, DHS concurred and stated that TSA held meetings in April 2016 to examine the analytic capabilities of SIRT to provide system-wide trends related to perimeter and access control security and consider the best use of this information to inform risk-based management decisions. According to DHS, TSA has identified specific SIRT data fields and designed analytical reports that are to be completed by July 31, 2016, and plans to use the results of the analysis to inform risk management decisions in fiscal year 2017. This action, if implemented effectively, should address the intent of our recommendation.

With regard to the fifth recommendation that TSA update its 2012 strategy for airport perimeter and access control security, DHS concurred and stated that TSA began updating the 2012 Strategy in January 2016 and in March 2016 turned the effort over to the newly created NSAPAC-WG. According to TSA, as of May 2016, the NSAPAC-WG has reviewed and compared the 2012 Strategy to the agency's operating environment, canvassed subject matter experts to determine goals and objectives, and begun rewriting the strategy; TSA plans to complete the update by December 31, 2017.
However, because the update to the Strategy cannot be completed until TSA receives finished analysis from the Risk Assessment of Airport Security, TSA plans to release an interim update to the Strategy by June 30, 2016. TSA also reported that it has released an Information Circular that encourages airport operators to conduct airport vulnerability assessments that focus on the insider threat and to use the results of the assessments to implement mitigation measures. TSA also plans to use the final update of the Strategy to introduce new and emerging threats and vulnerabilities that can impact perimeter and access control security, such as unmanned aerial systems and cyber security issues. This action, if implemented effectively, should address the intent of our recommendation.

With regard to the sixth recommendation to establish and implement a process for determining when additional updates to the Strategy are needed, DHS concurred and stated that TSA will implement a process similar to that for the Risk Assessment of Airport Security, in which the NSAPAC-WG will initiate updates to the Strategy once every 3 years. According to DHS, the updates are to include a review of all newly collected data, assessments of policies implemented since the last risk assessment, and consideration of possible changes to the strategy. The NSAPAC-WG is to complete its review and revision of the Strategy within 2 years of the start of the update. This action, if implemented effectively, could address the intent of our recommendation. However, it is not clear to what extent this process would address changing conditions outside the cycle that could require reconsideration of the Strategy. We will continue to monitor TSA's efforts.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix V.

This report addresses the Transportation Security Administration's (TSA) oversight of airport perimeter and access control security. More specifically, our objectives were to examine (1) the extent to which TSA has assessed the components of risk—threat, vulnerability, and consequence—related to commercial airport perimeter and access control security since 2009; (2) the extent to which TSA has taken actions since 2009 to oversee and facilitate airport perimeter and access control security; and (3) the actions selected commercial airports have taken, if any, to strengthen perimeter and access control security since 2009.

For this report, we analyzed TSA's data on general airport security events from the Performance and Results Information System (PARIS)—TSA's system of record for regulatory activities and security events—from fiscal years 2009 (October 2008) through 2015 (September 2015). We selected these timeframes to align with our 2009 report on airport perimeter and access control security and the last full fiscal year of data available at the time of our review.
TSA uses PARIS for maintaining information associated with TSA's regulatory investigations, security events, and enforcement actions across transportation modes, as well as for recording the details of security events involving passenger and property screening. Because TSA changed the security event reporting categories in PARIS and their definitions in October 2012, we analyzed data from fiscal years 2009 through 2012 separately from fiscal years 2013 through 2015. We selected event categories in PARIS that we determined were most likely to contain events related to perimeter and access control security based on TSA's definitions. We further refined the data by removing those events that TSA identified as having occurred at an operational passenger screening checkpoint, which is specifically excluded from TSA's definition of perimeter and access control security. See table 3 for the categories and definitions we selected. These data consisted of the date on which the event occurred, the airport in which it occurred, the event category type as listed in table 3, and details of the event in narrative form, among other things.

We assessed the reliability of the event data by (1) interviewing agency officials about the data sources, the system's controls, and any quality assurance steps performed by officials before data were provided and (2) testing the data for missing data, duplicates, airports not regulated by TSA, values beyond expected ranges, or entries that otherwise appeared to be unusual. We identified a limitation: the data contain events that are not directly related to perimeter and access control security. For example, the "access control–contained security incident" category may include an event in which a police officer patrolling the terminal area observed a contractor's unattended tools that may contain items prohibited in the sterile area. Further, other event categories that we did not include in our analysis may contain events that relate to perimeter and access control security. For example, we did not include event categories in our analysis, such as "disruptive individual," "loss or theft of airport SIDA badge or access media," or "suspicious individual or activity," which may have included events related to perimeter and access control security. We did not analyze the data to screen out unrelated events because that would require an extensive and resource-intensive content analysis of the event narratives to refine the records to include only those events that were specific to perimeter and access control security, and the narratives may not be sufficient to make an appropriate judgment. Therefore, the event data that we report may over- or under-represent the total number of events directly related to perimeter and access control security. However, with this caveat, we found the PARIS events data sufficiently reliable to provide descriptive information on the number of events potentially related to perimeter and access control security over fiscal years 2009 through 2015 and by airport category.
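The data testing described above amounts to a small set of mechanical screening checks on the extracted event records. The sketch below illustrates checks of that general kind in Python; it is not GAO's or TSA's actual analysis code, and the file layout and column names (event_date, airport_code, category) are hypothetical rather than PARIS's actual field names.

# Illustrative screening checks on a hypothetical flat-file extract of event
# records. The column names used here are assumptions for this sketch only.
import pandas as pd

KEY_FIELDS = ["event_date", "airport_code", "category"]

def screen_events(csv_path, regulated_airports):
    df = pd.read_csv(csv_path, parse_dates=["event_date"])

    # Records with missing values in key fields
    missing = df[df[KEY_FIELDS].isna().any(axis=1)]
    print(f"records with missing key fields: {len(missing)}")

    # Exact duplicate records
    duplicates = df[df.duplicated(keep=False)]
    print(f"duplicate records: {len(duplicates)}")

    # Event dates outside fiscal years 2009-2015 (Oct. 1, 2008-Sept. 30, 2015)
    in_period = df["event_date"].between("2008-10-01", "2015-09-30")
    print(f"records outside the review period: {(~in_period).sum()}")

    # Events recorded at airports not on the list of TSA-regulated airports
    at_regulated = df["airport_code"].isin(regulated_airports)
    print(f"records at airports not regulated by TSA: {(~at_regulated).sum()}")

    # Keep only records that pass all checks, for descriptive analysis
    keep = (
        df[KEY_FIELDS].notna().all(axis=1)
        & ~df.duplicated()
        & in_period
        & at_regulated
    )
    return df[keep]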
To determine the extent to which TSA has assessed the components of risk—threat, vulnerability, and consequence—related to commercial airport perimeter and access control security since 2009, we analyzed documentation and data for TSA's risk assessment activities and interviewed TSA officials responsible for conducting these assessment activities. Specifically, we examined the extent to which TSA generally conducted activities intended to assess threat, vulnerability, and consequence at the nation's approximately 440 commercial airports. For all three elements of risk, we reviewed TSA's 2013 Comprehensive Risk Assessment of Perimeter and Access Control Security and TSA's 2013 through 2015 Transportation Sector Security Risk Assessments (TSSRA)—TSA's annual report to Congress on transportation security that establishes risk scores for various attack scenarios within the sector, including domestic aviation. Specifically for vulnerability, in addition to the TSSRAs, we reviewed TSA's use of joint vulnerability assessments (JVA), which TSA conducts with support from the Federal Bureau of Investigation (FBI) every 3 years at certain airports identified as high risk, as well as at other airports at TSA's discretion. We analyzed the number and location of JVAs that TSA conducted from fiscal years 2009 through 2015 to report on the extent to which TSA has conducted a system-wide assessment of vulnerability. We selected these timeframes to align with our 2009 report on airport perimeter and access control security and the last full fiscal year of data available at the time of our report. We interviewed TSA officials responsible for risk management activities, including risk assessments, to clarify the extent to which TSA has assessed risk, and its components of threat, vulnerability, and consequence, in relation to airport perimeter and access control security. These agency officials included representatives from the following TSA headquarters offices: Office of Law Enforcement/Federal Air Marshals Service (FAMS), Office of Inspections, Office of Intelligence and Analysis, Office of Security Capabilities, Office of Security Operations, Office of Security Policy and Industry Engagement, and Office of the Chief Risk Officer. We also interviewed officials from the FBI to discuss their role in assessing threat and vulnerability through the JVA process. We compared information collected through our review of documentation and interviews with agency officials with recommendations on risk assessment and management practices found in the Department of Homeland Security's (DHS) National Infrastructure Protection Plan (NIPP) as well as federal standards for internal controls and our past reports on airport perimeter and access control security.

To determine the extent to which TSA has taken actions since 2009 to oversee and facilitate airport perimeter and access control security, we asked TSA officials to identify agency-led efforts and activities that directly or indirectly impact airport security. For the purposes of this report, we categorized TSA's responses into five main areas of effort: (1) risk planning and assessment, (2) worker security programs, (3) airport security planning and assessment tools, (4) airport guidance and reference materials, and (5) general airport security. To identify the full scope of TSA's oversight of airport security efforts, we interviewed agency officials to identify agency-led efforts and activities that were initiated prior to 2009 and ongoing at the time of our review. Additionally, we interviewed TSA officials responsible for various airport security activities regarding program operations. We also interviewed TSA field officials, airport operator officials, and industry association officials, as described below, regarding selected TSA airport security activities.
Further, we interviewed FBI officials regarding the agency's Air Domain Computer Information Comparison program and reviewed relevant documentation. To evaluate TSA efforts with respect to aviation worker security, we reviewed relevant program information for Playbook, This is My Airport, and the FBI Rap Back Service program. Additionally, we assessed the extent to which TSA's 2012 National Strategy for Airport Perimeter and Access Control Security met NIPP risk management criteria; we also considered the GPRA Modernization Act of 2010 (GPRAMA) requirements and generally accepted strategic planning practices for government agencies. To assess the extent to which the most recent version of the Strategy has been updated, we compared, among other things, the goals and objectives of the Strategy with activities TSA has initiated since 2009. This included analyzing risk management assessments and relevant program documentation, including budget and performance information. We also interviewed relevant TSA headquarters officials regarding the extent to which the Strategy has been informed by ongoing perimeter and access control security efforts.

To describe the actions selected commercial airports have taken, if any, to strengthen airport perimeter and access control security since 2009, we conducted site visits and telephone interviews with airport officials and onsite TSA federal security directors (FSD) or their representatives at selected airports, as well as interviewed industry officials. We conducted site visits at six commercial airports in the United States—Baltimore-Washington International Thurgood Marshall Airport, Chattanooga Metropolitan Airport, Hartsfield-Jackson Atlanta International Airport, Merced Municipal Airport, Monterey Regional Airport, and Norman Y. Mineta San Jose International Airport. During these visits, we observed airport security operations that included various technology- and nontechnology-based approaches intended to strengthen airport security, toured the airports' perimeters, and discussed issues related to perimeter and access control security with onsite FSDs or their representatives and with airport officials. We also conducted telephone interviews with onsite TSA and airport officials from five commercial airports in the United States—Charleston County International Airport and Air Force Base, Dallas-Ft. Worth International Airport, John F. Kennedy International Airport, Logan International Airport, and Miami International Airport. During these interviews, we discussed with officials airport security operations that included airports' approaches intended to strengthen security, unique physical characteristics of the airports, and issues related to perimeter and access control security. We selected these airports for site visits and their officials for telephone interviews based on a variety of factors, including a range in the airport category, public interest as shown through media reports of previous events related to security, unique security characteristics or challenges (such as a water perimeter), and new technology or initiatives implemented by airports related to perimeter and access control security. Because we did not select a generalizable sample of airports, the results of these site visits and interviews cannot be projected to all of the approximately 440 commercial airports in the United States.
However, the site visits and interviews provided us with onsite TSA and airport officials' perspectives on actions taken to strengthen airport perimeter and access control security, including various approaches using both technology- and nontechnology-based methods. Further, we interviewed officials from the American Association of Airport Executives (AAAE), Airports Council International-North America (ACI-NA), the National Safe Skies Alliance, and RTCA, Inc.'s Special Committee on Airport Security Access Control Systems. We selected these two industry associations and two specialist non-profit organizations based on input from TSA officials and airport officials, and because of these associations' and organizations' specialized knowledge and experience with airport security operations. These interviews provided us with additional perspectives on airport security.

We conducted this performance audit from February 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Pursuant to the Aviation and Transportation Security Act (ATSA), as amended, the Transportation Security Administration (TSA) is the federal agency with primary responsibility for ensuring the security of the nation's civil aviation system. Federal regulations governing civil aviation security are primarily codified at Parts 1540 through 1562 of Title 49 of the Code of Federal Regulations (C.F.R.), through which TSA imposes or otherwise enforces security measures and other requirements carried out by airport operators, air carriers, and other civil aviation stakeholders (see tables 4 through 8) as components of the agency's layered approach to security. Airport operators implement security measures relating to perimeter security and access controls primarily in accordance with their respective security programs and any applicable regulations, security directives (SD), or amendments to such security programs, against which TSA also assesses airport operator compliance. This appendix highlights and describes requirements relating to perimeter security and access controls for which airport operators have primary responsibility; it does not, however, include all relevant provisions and requirements.

The Transportation Security Administration (TSA) has numerous ongoing activities that were initiated prior to 2009, which either directly or indirectly regulate, strengthen, or facilitate commercial airport perimeter and access control security. A list of these ongoing efforts—as identified by TSA officials—is presented in table 9. TSA officials cited agency policy recommendations and requirements—such as security directives—and compliance inspections as playing a particularly important role in regulating and facilitating perimeter and access control security at commercial airports, as well as the following general transportation security program that addresses airport perimeter and access control: Visible Intermodal Prevention and Response (VIPR) program. According to TSA officials, the agency implemented the VIPR program in 2005 to protect the nation's transportation systems through targeted deployment of integrated TSA assets.
VIPR teams utilize screening and law enforcement capabilities in coordinated activities to randomly and unpredictably augment security across all modes of transportation, including the aviation sector. VIPR teams are composed of TSA officials—including Federal Air Marshals, transportation security inspectors, behavior detection officers, and explosives specialists—and local law enforcement and airport officials. These teams provide law enforcement and screening capabilities, including randomly screening aviation workers, property, and vehicles, as well as providing a visible presence at access points and the security-restricted areas of airports. According to TSA, during fiscal year 2015, TSA's 31 VIPR teams conducted approximately 7,250 operations nationwide in the aviation environment. In response to the November 2013 shooting at Los Angeles International Airport, in which a TSA screener was killed, TSA redeployed VIPR teams to the aviation sector to establish a baseline 60/40 split of VIPR resources between the aviation and surface transportation sectors. TSA officials stated that, as of December 2015, the agency had maintained this increased VIPR presence at commercial airports.

Appendix IV: Comments from the Department of Homeland Security (DHS)

In addition to the contact named above, Christopher E. Ferencik (Assistant Director), Barbara A. Guffy (Analyst-in-Charge), Ana Ivelisse Aviles, Chuck Bausell, Katherine M. Davis, Michele C. Fejfar, Eric D. Hauswirth, Susan Hsu, Thomas F. Lombardi, Elizabeth D. Luke, Ruben Montes de Oca, Faye R. Morrison, Heidi J. Nielson, and Maria C. Staunton made key contributions to this report.
Incidents of aviation workers using access privileges to smuggle weapons and drugs into security-restricted areas and onto planes have heightened awareness about security at commercial airports. TSA, along with airport operators, has responsibility for securing the nation's approximately 440 commercial airports. GAO was asked to review TSA's oversight of airport perimeter and access control security since GAO last reported on the topic in 2009. This report examines, for airport security, (1) the extent to which TSA has assessed the components of risk and (2) the extent to which TSA has taken actions to oversee and facilitate security, among other objectives. GAO examined TSA documents related to risk assessment and security activities; analyzed relevant TSA security event data from fiscal years 2009 through 2015; and obtained information from TSA and industry association officials as well as from a nongeneralizable sample of 11 airports, selected based on factors such as size.

The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) has made progress in assessing the threat, vulnerability, and consequence components of risk to airport perimeter and access control security (airport security) since GAO last reported on the topic in 2009, such as developing its Comprehensive Risk Assessment of Perimeter and Access Control Security (Risk Assessment of Airport Security) in May 2013. However, TSA has not updated this assessment to reflect changes in the airport security risk environment, such as TSA's subsequent determination of risk from the insider threat—the potential of rogue aviation workers exploiting their credentials, access, and knowledge of security procedures throughout the airport for personal gain or to inflict damage. Updating the Risk Assessment of Airport Security with information that reflects this current threat, among other things, would better ensure that TSA bases its risk management decisions on current information and focuses its limited resources on the highest-priority risks to airport security. Further, TSA has not comprehensively assessed the vulnerability—one of the three components of risk—of TSA-regulated (i.e., commercial) airports system-wide through its joint vulnerability assessment (JVA) process, which it conducts with the Federal Bureau of Investigation (FBI), or another process. From fiscal years 2009 through 2015, TSA conducted JVAs at 81 (about 19 percent) of the 437 commercial airports nationwide. TSA officials stated that they have not conducted JVAs at all airports system-wide because of resource constraints. While conducting JVAs at all commercial airports may not be feasible given budget and resource constraints, other approaches, such as providing all commercial airports with a self-vulnerability assessment tool, may allow TSA to assess vulnerability at airports system-wide.

Since 2009, TSA has taken various actions to oversee and facilitate airport security; however, it has not updated its national strategy for airport security to reflect changes in its Risk Assessment of Airport Security and other security-related actions. TSA has taken various steps to oversee and facilitate airport security by, among other things, developing strategic goals and evaluating risks. For example, in 2012 TSA developed its National Strategy for Airport Perimeter and Access Control Security (Strategy), which defines how TSA seeks to secure the perimeters and security-restricted areas of the nation's commercial airports.
However, TSA has not updated its Strategy to reflect actions it has subsequently taken, including results of the 2013 Risk Assessment and new and enhanced security activities, among other things. Updating the Strategy to reflect changes in the airport security risk environment and new and enhanced activities TSA has taken to facilitate airport security would help TSA to better inform management decisions and focus resources on the highest-priority risks, consistent with its strategic goals.

This is a public version of a sensitive report that GAO issued in March 2016. Information that TSA deems "Sensitive Security Information" has been removed.

GAO is making six recommendations, including that TSA update its Risk Assessment of Airport Security, develop and implement a method for conducting a system-wide assessment of airport vulnerability, and update its National Strategy for Airport Perimeter and Access Control Security. DHS concurred with the recommendations and identified planned actions to address them.
USPS’s financial condition and outlook continue to be challenging despite recent congressional action that relieved USPS of $4 billion in mandated payments to prefund postal retiree health benefits by September 30, 2009. Preliminary results from the end of fiscal year 2009 and USPS’s outlook include: In fiscal year 2009, mail volume declined about 28 billion pieces, or about 14 percent, from the prior fiscal year, when volume was about 203 billion pieces; revenue declined from about $75 billion to about $68 billion. A looming cash shortfall necessitated last-minute congressional action to reduce USPS’s mandated payments to prefund retiree health benefits from $5.4 billion to $1.4 billion. In the absence of this congressional action, USPS was on track to lose about $7 billion. USPS and its auditors are currently considering whether the $4 billion in relief will be booked in fiscal year 2009 or fiscal year 2010. Regardless of the outcome, USPS will have a large net loss for the third consecutive fiscal year and one of its largest losses in decades (see fig. 1). USPS debt at the end of fiscal year 2009 increased by the annual statutory limit of $3 billion, bringing outstanding debt to $10.2 billion. If debt continues to increase by $3 billion annually, USPS will reach its total statutory debt limit of $15 billion in fiscal year 2011. Looking forward, USPS has projected annual deficits exceeding $7 billion in fiscal years 2010 and 2011, and continuing large cash shortfalls. As we previously reported, USPS’s cost-cutting efforts and rate increases have not fully offset the impact of huge declines in mail volume (a decline of about 28 billion pieces in fiscal year 2009) and other factors—notably semi-annual cost-of-living allowances (COLA) for employees covered by union contracts. Compensation and benefits constitute close to 80 percent of USPS costs—a percentage that has remained similar over the years despite major advances in technology and automating postal operations. These costs declined by 1.3 percent in the first 11 months of fiscal year 2009 (the most recent data available) as compared to the same time period in fiscal year 2008, in contrast to other costs such as transportation, supplies and services, and depreciation, which together declined 8.2 percent. Over this same period, total revenue declined by 8.6 percent, including declines of 9.1 percent for market-dominant products and about 4.0 percent for competitive products. (See app. I for a summary of market- dominant and competitive products.) About 88 percent of USPS revenue was generated from market-dominant products and services, with competitive products and services generating about 12 percent of rcent of revenues (see fig. 2). revenues (see fig. 2). PAEA and implementing Postal Regulatory Commission (PRC) regulations provided USPS with greater flexibility to set prices, test new postal products, and retain earnings so that it can finance needed capital investments and repay its debt. PAEA abolished the former ratemaking structure that involved a lengthy, costly, and litigious process. Under the new structure, USPS has broad latitude to announce rate changes that are implemented in a streamlined process unless PRC determines these rates would violate legal requirements. Key requirements and flexibilities provided in the law include: A price cap based on the Consumer Price Index generally applies to market-dominant classes of mail, such as First-Class Mail and Standard Mail. 
This means that in general, USPS has the flexibility to increase some individual rates either above or below the rate of inflation as long as the average rate increase for each class of mail does not exceed the cap (a simplified illustration of this calculation follows this list). USPS can request that PRC approve a rate increase that exceeds the price cap on the basis of extraordinary or unexpected circumstances (postal stakeholders refer to this as an "exigent" rate increase). PRC must determine whether such an increase would be reasonable, equitable, and necessary "to maintain and continue developing postal services of the kind and quality adapted to the needs of the United States."

Worksharing discounts for market-dominant products are generally limited to the costs avoided by USPS as a result of specified mailer activities.

Each competitive product must generate sufficient revenues to cover its costs. In addition, competitive products must collectively cover what PRC determines to be an appropriate share of USPS's overhead costs. PRC has determined this share to be 5.5 percent of USPS's overhead costs. Within these constraints, USPS was given broad pricing flexibility for its competitive products, which are not subject to a price cap. USPS can also establish volume discounts for competitive products as well as enter into contract rates with individual mailers.

PAEA generally restricted USPS to offering postal products and services by prohibiting it from initiating new nonpostal products and services. USPS was required to discontinue existing nonpostal products—such as passport photo services and photocopying services—except for those that PRC determined should be continued. Subsequently, PRC determined that most existing USPS nonpostal products should be continued.
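To illustrate how a cap of this kind operates, the sketch below walks through a simplified calculation in which the average rate increase for a class is weighted by each product's share of class revenue. The rates, weights, and cap value shown are hypothetical, and the PRC's actual cap methodology is more detailed; the sketch is illustrative only.

# Hypothetical illustration of a CPI-based price cap on a class of mail.
# All rates, weights, and the cap value are invented for this example; the
# PRC's actual cap calculation is more detailed.

def weighted_average_increase(products):
    """Average percentage rate increase for a class, weighted by revenue share."""
    total_weight = sum(p["weight"] for p in products.values())
    return sum(
        (p["weight"] / total_weight) * (p["new_rate"] / p["old_rate"] - 1.0)
        for p in products.values()
    )

cpi_cap = 0.038  # assume a 3.8 percent annual cap (illustrative only)

first_class = {
    # one rate rises faster than inflation, another more slowly
    "single-piece letter": {"old_rate": 0.42, "new_rate": 0.44, "weight": 0.5},
    "presorted letter": {"old_rate": 0.36, "new_rate": 0.37, "weight": 0.5},
}

average = weighted_average_increase(first_class)
print(f"class-average increase: {average:.1%}")  # about 3.8 percent
print("within cap" if average <= cpi_cap else "exceeds cap")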
In the short time since PAEA was enacted, with the exception of annual rate increases, revenue-generation actions have generally achieved limited results compared to USPS's deficits. We commend USPS for taking action to use its pricing flexibility to address the pressing need for additional revenue. Although these actions generated some revenues, their positive impacts were overwhelmed by the recession—with its cutbacks in consumer spending and corporate advertising—and ongoing diversion of mail to electronic alternatives. Further, the potential of some actions was limited because they applied to types of mail that generate only a small fraction of USPS revenues. Other actions, such as targeted sales for some types of mail, were implemented this year with little advance notice, which may have limited mailer response. Key USPS revenue-generation actions since PAEA was enacted are summarized below.

Rate Increases for Market-Dominant Mail: Under the ratemaking system established by PAEA, USPS annually increased rates in 2008 and 2009 for market-dominant classes of mail at virtually the maximum allowable amount under the price cap. To put this into context, historically, rate increases have been a key action that USPS has taken to remain financially viable.

Volume-Based Incentives for Specific Types of Market-Dominant Mail: USPS has recently implemented three targeted rate incentives to stimulate additional mail volume and take advantage of its excess operational capacity. First, a 2009 "summer sale" for Standard Mail offered lower rates for volumes that exceeded specific thresholds, with the goal of increasing mail volume during a typically slow period. Second, an ongoing "fall sale" for First-Class Mail aimed at commercial mailers is providing lower rates for volume over specific thresholds. Third, an ongoing Saturation Mail incentive program is also providing lower rates for volume over specific thresholds.

Negotiated Service Agreements (NSA) for Market-Dominant Products: According to USPS data, its seven NSAs for market-dominant products collectively did not generate any net revenue in fiscal years 2007 and 2008 combined. These NSAs generally offered mailers lower rates for volumes that exceeded specific thresholds. Mailers also agreed to actions to reduce some USPS costs, such as the substitution of electronic notices in lieu of USPS returning undeliverable advertising mail.

Rate Changes and Contract Rates for Competitive Products: Under the ratemaking system established by PAEA, USPS annually increased rates in 2008 and 2009 for competitive products such as Priority Mail and Express Mail. USPS also made product and pricing changes to enhance their competitiveness, such as a new small flat-rate box for Priority Mail and the introduction of zone-based rates for Express Mail. USPS has introduced volume discounts for Express Mail, Priority Mail, and bulk Parcel Post, as well as lower rates for electronic postage used for some competitive products such as Express Mail and Priority Mail. In addition, USPS has entered into close to 90 contracts with mailers of competitive products that included Priority Mail, Express Mail, bulk Parcel Post, Parcel Return Service, and various types of bulk international mail. These contracts are generally volume based and have provisions intended to lower USPS's mail handling costs. USPS does not publicly report results for its individual contracts because it considers this information to be proprietary.

Looking forward, USPS has opportunities to continue pursuing the flexibilities provided by PAEA to help generate additional revenue from postal products and services. For example, USPS is continuing to pursue its "Click-N-Ship" initiative that allows customers to print out mailing labels with postage, as well as flexible pricing for Express Mail, Priority Mail, and bulk Parcel Post. USPS is also promoting voting by mail to stimulate additional First-Class Mail volume. However, results from USPS revenue-generation efforts will continue to be constrained by the economic climate and by changing use of the mail.

USPS has asked Congress to change the restrictions established by PAEA so that it could offer new nonpostal products and services such as banking and insurance. However, USPS has not presented a business plan that details what markets it might enter, its prospects for profitability, and what specific legislative changes would be needed. Allowing USPS to compete more broadly with the private sector would raise risks and concerns. As with USPS's nonpostal ventures before PAEA was enacted, new nonpostal ventures could lose money; and even if they were to make money, issues related to unfair competition would need to be considered.

On the other hand, increasing postal rates may provide a short-term revenue boost but would risk depressing mail volume and revenues in the long term, in part by accelerating diversion of payments, communications, and advertising to electronic alternatives. Recognizing this, the Postmaster General recently announced that there will not be an "exigent" price increase in 2010 for market-dominant products such as First-Class Mail and Standard Mail.
He explained: "While increasing prices might have generated revenue for the Postal Service in the short term, the long-term effect could drive additional mail out of the system." Similarly, increasing rates for competitive products such as Express Mail and Priority Mail may provide a short-term revenue boost but risk long-term losses in mail volume, revenues, and USPS competitiveness. Further, the short-term impact of increasing competitive rates would likely be limited because competitive products and services generate about 12 percent of USPS revenue. USPS has not announced whether it will increase rates for competitive products in 2010.

Whether USPS should be allowed to engage in nonpostal activities should be carefully considered, taking into account its poor past performance in this area as well as the associated risks and fair competition issues. We have previously reported that:

USPS lost nearly $85 million in fiscal years 1995, 1996, and 1997 on 19 new products, including electronic commerce services, electronic money transfers, and a remittance processing business, among others.

In 2001, we reported that none of USPS's electronic commerce initiatives were profitable and that USPS's management of these initiatives—such as an electronic bill payment service that was eventually discontinued—was fragmented, with inconsistent implementation and incomplete financial information.

We testified during the debate on postal reform on some longstanding questions about whether USPS should enter into nonpostal initiatives and about the appropriate role of a federal entity competing with private firms, particularly since USPS has a statutory monopoly on letter mail as well as other differences in legal status vis-à-vis its potential competitors, such as exemptions from taxes. Questions include:

Should USPS be allowed to compete in areas where there are already private-sector providers, and if so, on what terms?

What laws should be applied equally to USPS and its competitors, such as antitrust and consumer protection laws?

What transparency and accountability mechanisms would be needed for any new nonpostal products and services to prevent unfair competition and inappropriate cross-subsidization from postal products and services?

Should USPS be subject to the same regulatory entities and regulations as its competitors if it could compete in banking, insurance, and retail services? Would the PRC have an oversight role for any new nonpostal activities?

If USPS used its existing retail presence of 37,000 facilities to offer new nonpostal products and services—such as leasing or subleasing excess capacity in its facilities—would this be an unfair competitive advantage?

How would USPS finance its nonpostal activities, considering its difficult financial condition? Would USPS be allowed to borrow at Treasury rates more favorable than those available to other businesses?

In conclusion, when we recently added USPS's financial condition to our high-risk list, we stated that USPS urgently needs to restructure to achieve short-term and long-term financial viability. USPS has not been able to cut costs fast enough or generate sufficient revenue to offset the accelerated decline in mail volume and revenue. USPS restructuring will require aligning its costs with revenues, generating sufficient earnings to finance capital investment, and managing its debt.
Although USPS has taken some action to use its pricing and product flexibility under PAEA, results to date have been limited and will be constrained by the economic climate and changing use of the mail. Mail volume has typically returned after past recessions, but much of the recent volume decline may not return. Nevertheless, USPS has opportunities to generate new revenues from postal products and services that appear more promising than venturing into new, risky nonpostal areas, while also making significant reductions in its workforce and network costs.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have.

For further information regarding this statement, please contact Phillip Herr at (202) 512-2834 or herrp@gao.gov. Individuals who made key contributions to this statement include Shirley Abel, Teresa Anderson, Gerald P. Barnes, Colin Fallon, Kenneth E. John, Hannah Laufe, Daniel Paepke, and Crystal Wesco.

Appendix I: Summary of Market-Dominant and Competitive Products and Services

Market-dominant products and services:
Domestic and international single-piece mail (e.g., bill payments and letters) and domestic bulk mail (e.g., bills and advertising)
Mainly bulk advertising and direct mail solicitations
Mainly magazines and local newspapers
Single-piece Parcel Post (e.g., packages and thick envelopes with gifts and merchandise)
Media Mail (e.g., books, CDs, and DVDs)
Library mail (e.g., items on loan from or mailed between academic institutions, public libraries, and museums)
Bound printed matter (e.g., permanently bound sheets of advertising, or directories such as catalogs and phone books)
A variety of services, such as delivery receipt services (e.g., Delivery Confirmation, Signature Confirmation), Certified Mail and Registered Mail, address list services (e.g., services to update and correct business mailing lists), and caller service (business mail pickup at a USPS facility)

Competitive products and services:
Guaranteed overnight delivery to most locations for time-sensitive letters, documents, or merchandise
2-3 day service to most domestic locations that is often used to expedite delivery
Bulk Parcel Post parcel mailings entered at USPS facilities that are generally close to the destination of the mail
Expedited delivery of items to foreign countries, with guaranteed date-certain service to some locations
Delivery of items to foreign countries that generally has faster service standards than International First-Class Mail
Bulk mailings sent to other countries (e.g., bills, statements, advertising, and magazines)
Business retrieval of returned parcels from USPS facilities
A variety of services, such as Premium Forwarding Service (reshipping mail from a primary residential address and some P.O. boxes to a temporary address using Priority Mail) and international delivery receipt services, such as Registered Mail, return receipt, and restricted delivery

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Postal Service's (USPS) financial condition and outlook deteriorated significantly during fiscal year 2009. USPS was not able to cut costs fast enough to offset declining mail volume and revenues resulting from the economic downturn and changing mail use. Facing an unprecedented cash shortfall, USPS stated that it would have insufficient cash on hand to make its mandated $5.4 billion payment to prefund postal retiree health benefits that was due by the end of the fiscal year. In July 2009, GAO added USPS's financial condition to the list of high-risk areas needing attention by Congress and the executive branch to achieve broad-based transformation. GAO stated that USPS urgently needs to restructure to address its current and long-term financial viability. GAO also stated that USPS needs to use its flexibility to generate revenue through new or enhanced products. This testimony will (1) update USPS's financial condition and outlook, (2) describe changes made by the Postal Accountability and Enhancement Act (PAEA) of 2006 that provided USPS with greater flexibility to generate revenues, (3) outline USPS's revenue-generation actions and results using this flexibility, and (4) discuss options for USPS to generate increased revenues in the future. This testimony is based on GAO's past and ongoing work. USPS's financial condition for fiscal year 2009 and its financial outlook continue to be challenging: (1) In fiscal year 2009, mail volume declined about 28 billion pieces, or about 14 percent, from the prior fiscal year, when volume was about 203 billion pieces; revenue declined from about $75 billion to about $68 billion. (2) A looming cash shortfall necessitated last-minute congressional action to reduce USPS's mandated payments to prefund retiree health benefits by $4 billion. In the absence of congressional action, USPS was on track to lose about $7 billion. (3) USPS debt increased at the end of fiscal year 2009 by the annual statutory limit of $3 billion, bringing outstanding debt to $10.2 billion. At this rate, USPS will reach its total $15 billion statutory debt limit in fiscal year 2011. (4) USPS projects annual deficits of over $7 billion in fiscal years 2010 and 2011, and continuing large cash shortfalls. PAEA and implementing regulations gave USPS more flexibility to set prices, test new postal products, and retain earnings. USPS has broad latitude to set rates that take effect unless the Postal Regulatory Commission finds the rates would violate legal requirements, such as a price cap that generally limits rate increases for most mail to the rate of inflation. Except for annual rate increases, USPS revenue-generation actions since PAEA was enacted have generally achieved limited results compared to USPS's deficits. To its credit, USPS has taken actions to use its pricing flexibility to address the pressing need for additional revenue. These actions generated some revenues, although their positive impacts were overwhelmed by the recession--with its cutbacks in consumer spending and corporate advertising--and ongoing diversion of mail to electronic alternatives. Looking forward, USPS has opportunities to continue pursuing the flexibilities provided by PAEA to help generate additional revenue from postal products and services. However, results will continue to be constrained by the economic climate and by changing use of the mail. Mail volume has typically returned after past recessions, but much of the recent volume decline may not return.
Increasing postal rates may provide a short-term revenue boost but would risk depressing mail volume and revenues in the long term, in part by accelerating diversion of mail to electronic alternatives. USPS has asked Congress to change the restrictions established by PAEA so that it could offer new nonpostal products and services such as banking and insurance. Allowing USPS to compete more broadly with the private sector could result in financial losses, and fair competition issues would need to be considered. Thus, in addition to its revenue-generation initiatives, USPS will need to continue making significant reductions in its workforce and network costs. When we recently added USPS's financial condition to our high-risk list, we said that restructuring will require USPS to align its costs with revenues, generate sufficient earnings to finance capital investment, and manage its debt.
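The debt projection above can be checked with simple arithmetic. The following is a minimal sketch, assuming USPS continues to borrow the full $3 billion statutory annual increase in each of fiscal years 2010 and 2011 (an assumption, since actual borrowing could differ):

$$
\underbrace{\$10.2\text{ billion}}_{\text{debt, end of FY2009}} + \underbrace{\$3\text{ billion}}_{\text{FY2010}} = \$13.2\text{ billion},
\qquad
\$13.2\text{ billion} + \underbrace{\$3\text{ billion}}_{\text{FY2011}} = \$16.2\text{ billion} > \$15\text{ billion cap}.
$$

Under that assumption, USPS would exhaust its remaining borrowing authority during fiscal year 2011, consistent with the statement above that USPS will reach its $15 billion statutory debt limit in that year.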
The individual military services and a wide array of DOD and non-DOD agencies award contracts to support contingency operations. Within a service or agency, numerous contracting officers, with varying degrees of knowledge about how contractors and the military operate in deployed locations, can award contracts that support contingency operations. According to DOD estimates, in 2005 several hundred contractor firms provided U.S. forces with a range of services at deployed locations. The customer (e.g., a military unit) for these contractor-provided services is responsible for identifying and validating requirements to be addressed by the contractor as well as evaluating the contractor’s performance and ensuring that contractor-provided services are used in an economical and efficient manner. In addition, DOD has established specific policies on how contracts, including those that support contingency operations, should be administered and managed. Oversight of contracts—which can refer to contract administration functions, quality assurance surveillance, corrective action, property administration, and past performance evaluation—ultimately rests with the contracting officer, who has the responsibility for ensuring that contractors meet the requirements as set forth in the contract. However, because some contracting officers are not located at the deployed location, they appoint contract oversight personnel who represent the contracting officer at the deployed location and are responsible for monitoring contractor performance. The way contracts and contractors are monitored at a deployed location is largely a function of the size and scope of the contract. Some contracting officers have opted to have personnel from the Defense Contract Management Agency monitor a contractor’s performance and management systems to ensure that the cost, product performance, and delivery schedules comply with the terms and conditions of the contract. Defense Contract Management Agency officials delegate daily oversight responsibilities to individuals drawn from units receiving support from these contractors to act as contracting officer’s representatives for specific services being provided. For other contracts, contracting officers usually directly appoint contracting officer’s representatives or contracting officer’s technical representatives to monitor contractor performance at the deployed locations. These individuals are typically drawn from units receiving contractor-provided services, are not normally contracting specialists, and perform contract monitoring as an additional duty. They cannot direct the contractor by making commitments or changes that affect price, quantity, quality, delivery, or other terms and conditions of the contract. Instead, they act as the eyes and ears of the contracting officer and serve as the liaison between the contractor and the contracting officer. The FAR requires contract administration offices to perform all actions necessary to verify whether contracted services conform to contract quality requirements and to maintain records of these actions. The Defense Federal Acquisition Regulation Supplement (DFARS) adds a requirement for DOD agencies to conduct quality audits to ensure the quality of services meets contractual requirements.
Oversight begins with trained personnel being nominated for and assigned oversight responsibilities, and then conducting oversight actions throughout the contract performance period to ensure the government receives the services required by the contract. In addition to the FAR and DFARS, a DOD best practices guide stresses proper documentation. The Guidebook for Performance-Based Services Acquisition in the Department of Defense states that an assessment of contractor performance should be documented, whether acceptable or unacceptable, as it is conducted, and that this official record may be considered past performance information. A wide selection of contract types is available to the government and contractors to provide needed flexibility in acquiring supplies and services. The contract types are grouped into two broad categories: (1) fixed price and (2) cost reimbursement. The specific contract types range from firm-fixed-price, in which the contractor has full responsibility for the performance cost and any resulting profit or loss, to cost-plus-fixed-fee, in which the contractor has minimal responsibility for the performance costs and the negotiated fee (profit) is fixed. In between are the various incentive contracts, in which the contractor’s responsibility for the profit or fee incentives offered is tailored to the uncertainties involved in contract performance. One such contract type that provides incentives on the basis of performance is cost-plus-award-fee. A cost-plus-award-fee contract is a cost reimbursement contract that provides a fee (base amount plus an award amount) sufficient to motivate the contractor to excel in areas such as quality and timeliness. The amount of the award fee is based on the government’s evaluation of the contractor’s performance in terms of the contract criteria. Another contract type is indefinite-delivery/indefinite-quantity, which provides for an indefinite quantity of supplies or services, within stated limits, during the contract period; the government places orders for individual requirements. As shown in table 1, most of the contracts we reviewed were cost-plus-fixed-fee type contracts. Two of the contracts were cost-plus-award-fee contracts. We looked at specific contracts that provide a variety of services. While some of these contracts have ended, DOD continues to acquire these services through other contracts. For example, the linguist contract ended in June 2008, but another contract valued at $4.6 billion was awarded to provide linguist services in Iraq for 5 years. The base operations support and security services contracts ended in March 2008, but two new bridge contracts for these services were awarded. The bridge contracts were for 1 year each and provided for continued operations and security services while bid protests were being decided. For six of the seven contracts we reviewed, actual costs exceeded the initially estimated contract costs, primarily because of added requirements to support ongoing operations in Iraq and Afghanistan. The actual costs for the other contract we reviewed did not exceed the estimated contract costs. The cost increases occurred primarily because, as operations in Iraq and Afghanistan expanded, there were increased demands for services already established under the contracts and, in some cases, new requirements were added to the contracts.
Other factors that contributed to individual contract cost growth among the contracts we reviewed included short-term contract extensions, the government’s inability to provide promised equipment, changes in host country labor laws, and paying for work to be performed multiple times. For six of the contracts we reviewed, the cost of each contract exceeded the originally estimated contract cost, primarily because of increases in contract requirements from ongoing operations in Iraq and Afghanistan. Costs for these six contracts—three of which were extended—increased from an initial estimate of $783 million to an approximate actual total cost of $3.8 billion. In four of these cases, the individual contract’s actual cost exceeded the estimated cost by at least 300 percent. For example, the total cost of the base operations support contract exceeded the estimated contract cost by $122.4 million, or 481 percent. In another example, the estimated cost for the equipment maintenance contract in Qatar was $52.7 million for a 3-month base period and 10 option years. However, the total cost of the contract as of March 2008—which was during option year 8—was $471 million, or 794 percent more than originally estimated for the entire contract. For the seventh contract, we found that the actual contract costs did not exceed the originally estimated costs. Table 2 shows how total actual contract costs, including the cost of any extensions, compared to the original cost estimate. Although several factors increased the contract costs, the primary factor was additional requirements associated with ongoing operations in Iraq and Afghanistan. Expanding operations in Iraq and Afghanistan increased the demand for services already established under each of the seven contracts we reviewed. In addition, new requirements were added to some of the contracts. The following examples illustrate additional contract requirements due to ongoing operations in Iraq and Afghanistan and their impact on contract costs. In April 1999, the Army awarded a contract for linguist translation and interpretation services. According to the Army, the initial requirement was for about 180 linguists worldwide at an estimated cost of $19 million for 1 base year and 4 option years. Since the award of this contract, the linguist requirement grew and the Army awarded other contracts to provide linguist services. For example, we reviewed an indefinite-delivery/indefinite-quantity contract awarded in September 2004—an interim 6-month contract with two 3-month options to continue providing linguist services worldwide—with an estimated maximum cost of $400 million. The total actual cost for the first year of services for this contract was about $409.6 million. Linguist requirements under the interim contract were increased multiple times, which increased contract costs. For example, in February 2007 the linguist requirement supporting operations in Iraq and Afghanistan grew from 8,899 to 10,714 in response to the surge in the number of military forces deployed to these areas of operation. At this same time, the worldwide linguist requirement grew from 9,313 to 11,154. To accommodate the increasing requirements and the need to continue providing the services, the interim contract was modified to increase the maximum costs allowable and to extend the performance period. As of April 2008, the interim contract had been extended five times and the total cost of the contract was $2.2 billion.
At that time, the requirements to support exercises in the United States and operations in Afghanistan and Guantanamo Bay were being provided under new contracts while the requirements to support operations in Iraq were still being provided under the interim contract. A new indefinite-delivery/indefinite-quantity contract for linguist services in Iraq took effect in June 2008 with a maximum cost for all orders under the contract of $4.6 billion for 5 years. In August 2000, the Army awarded this contract for maintenance and supply services for Army Prepositioned Stocks-5 (APS-5) in Qatar. In addition to performing routine maintenance on the prepositioned stocks, the contractor was required to support contingency operations by receiving, repairing, maintaining, and temporarily storing equipment from other sources until it was needed. The contract award represented the base year requirements of certain contract line items to be performed for 3 months in 2000 at a total contract amount of $568,166. The contract had 10 single-year options available for full contract performance, and the contractor’s total estimated cost for the base plus 10 option years was $52.7 million. At the end of the seventh option year, which was in November 2007, the total cost of the contract was $428.9 million, or $376.2 million more than originally estimated for the entire contract. According to the contracting officer, requirements within the scope of the contract increased in support of the global war on terror to include supporting operations in Iraq and Afghanistan, performing operations in Kuwait, repairing equipment, and supporting additional reimbursable customers, such as the 550th Signal Company, Area Support Group-Qatar, and Army Tank Automotive and Armaments Command’s tire assembly repair program. For example, in 2002 contractor resources were deployed to Kuwait to meet the requirement for immediate download and urgent maintenance of equipment flowing into Southwest Asia in support of operations in Iraq. Approximately $195.6 million was funded on the APS-5 contract for operations in Kuwait between 2002 and 2005. In another example, in January 2006 a requirement to produce tire wheel assemblies was added to the contract. The scope of this requirement was to provide a package of ready-to-use, preconfigured tires to reduce the workload at forward maintenance locations. As of March 2008, the total funded for the tire operation was $6.4 million. Moreover, at various times throughout the life of the contract, requirements were added for the resetting of Army Prepositioned Stocks. For example, in the third, fifth, sixth, and seventh option years, funding placed on the contract for the reset of equipment totaled $35 million, $9 million, $39 million, and $23 million, respectively. In October 2004, the Army issued this task order for equipment maintenance and supply services in Kuwait under an umbrella indefinite-delivery/indefinite-quantity contract for Global Maintenance and Supply Services. The contractor was required to provide maintenance, inspect and test equipment, operate a wash rack for agricultural cleaning, and perform various other maintenance functions depending on developing missions. The contractor estimated a total cost for a 10-month base period and four option years of $218.2 million. At the end of the second option year in September 2007, the total cost of the task order after modifications was about $581.5 million, $363.2 million more than the original estimate for the entire task order.
According to the contracting officer, the magnitude of the requirements under the task order increased significantly after the task order was issued. This increase included growth in the quantity of equipment repaired and the number of customers served, as well as new requirements for resetting and issuing Army prepositioned stock and for operating tire assembly and repair and HMMWV refurbishment programs. For example, in May 2006, a major HMMWV refurbishment effort valued at approximately $33 million was added to the task order. According to contracting officials, the task order could be used to expeditiously provide the required HMMWV refurbishment capability. Likewise, in September 2005 a requirement was added to the task order for tire assembly and repair. As of March 2008, the total funding for the tire assembly and repair operation was approximately $16.6 million. In addition, according to the contracting officer, requirements for the resetting of Army prepositioned stocks were added within the scope of the task order. For example, in option years one and two, funding for the reset of equipment totaled approximately $54.2 million and $50.1 million, respectively. In February 2003 the Army awarded this contract to provide a full range of base support activities including public works; logistics; medical; food; and morale, welfare, and recreation services in support of an installation in Qatar. The contractor estimated a total cost of $25.4 million for the 9-month base period plus 4 option years. The total cost of the contract was approximately $147.8 million, $122.4 million more than the original cost estimate. According to contracting officials, this growth in requirements was due to changes in the planned use for the installation and an increase in major tenants such as the United States Central Command Forward Headquarters and Special Operations Command Central. For example, the installation increased its logistics support of a nearby Air Force base and supported the rest and relaxation program for military personnel deployed to Iraq and Afghanistan, providing morale, welfare, and recreation services and quality-of-life support to more than 300 soldiers per week. To meet the increased demands, additional contractor personnel were needed. For example, five Medical Supply Clerks were added to the medical services requirement and four employees were added to meet the change in requirements of the Public Works department. The contractor’s estimated total costs for these additional personnel were $95,706 and $887,120, respectively. In addition, the services provided under the contract grew as new requirements were added. For example, in September 2004 a new requirement for an installation fire department was added. According to the contractor’s cost estimate, the total cost for option years one through four (the requirement was added during option year one) to meet the requirement for fire department services was $10.7 million. In February 2003 the Army awarded this contract for base security services at Camp As Sayliyah, Qatar. The contractor was to intercept, deter, and prevent unauthorized personnel and instruments of damage and destruction from entering the installation. The contractor was also to conduct surveillance and counter-surveillance of the installation’s perimeter and vicinity from designated observation towers and posts. The contractor estimated a total cost of $80.3 million for the 9-month base period plus 4 option years.
The total cost of the contract was about $105.8 million, or $25.6 million more than originally estimated. According to the contracting officer, as was the case for the base operations support contract, changes in the planned use for the installation and an increase in major tenants such as the United States Central Command Forward Headquarters and Special Operations Command Central resulted in increased contract requirements. In some instances, additional personnel were needed to meet the requirements of the contract. For example, four guards and four screeners were added at a cost of $255,267 for option year one. In another example, in option year two, the required coverage at one guard tower was increased to 24 hours a day. Funding in the amount of $145,327 was provided to meet this requirement for the remainder of the option year. The contractor’s estimated cost for meeting this requirement in the remaining 2 option years was $690,880. In another example, in option year one a requirement was added for personnel to operate a mobile vehicle and cargo inspection system. This system consisted of a truck-mounted, nonintrusive gamma ray imaging unit that x-rays the contents of trucks, containers, cargo, and passenger vehicles entering the base to determine the possible presence of various types of contraband. A total of $359,685 was provided to meet this requirement for the remainder of the option year. In May 2002 the Army awarded a contract that provided total logistics support for the Stryker vehicles fielded to two brigade combat teams. In September 2005 the Army modified the contract to add a requirement for the repair of battle-damaged Stryker vehicles in Qatar. Our review focused on the battle damage repair requirements performed in Qatar and the associated modifications. The initial requirement was for the repair of 11 battle-damaged vehicles at a cost of approximately $6.4 million. As of April 2008, the total cost of the battle damage repair facility in Qatar was approximately $95.1 million. According to officials at the Army Tank Automotive and Armaments Command, when the logistics support contract was modified to add the Qatar battle damage repair facility requirements, the Army and the contractor jointly developed and negotiated the requirements and cost estimates. As more Stryker vehicles sustained battle damage, additional modifications were added. For example, only a few days after this initial requirement was added to the contract, a modification was issued that increased the requirement by 15 vehicles, bringing the total number of battle-damaged vehicles to be repaired to 26. With this increased requirement, approximately $4.6 million in funding was added to the contract. According to Army officials, over time the number of vehicles that required repair increased as attacks on United States forces intensified and more Stryker brigades rotated in and out of Iraq and Afghanistan. The battle damage repair requirements are currently stated in terms of the number of vehicles that can be repaired per month. For example, in February 2006 the repair requirement increased from 2 vehicles every 45 days to 4 vehicles per month, and in July 2007 the requirement increased again to 6 vehicles per month. In February 2005 the Air Force awarded this contract for maintenance support of the Predator unmanned aircraft to support scheduled flying hours for a base period of 1 year with 2 option years.
According to program officials, the contractor was required to provide organizational maintenance services such as base support of systems, weapons loading, launching, routine day-to-day flight maintenance, routine inspections, scheduled and unscheduled maintenance, and maintenance of supply and support packages. The estimated base and option year one contract cost was $49.7 million. At the end of option year one, which included an unanticipated 7-week extension, the total cost of the contract was approximately $49.3 million. While the total cost of the contract, including the cost of the extension, did not exceed the total estimated cost for the base and option year one, contract requirements changed in support of operations in Iraq and Afghanistan and the effect these changes had on the cost of the contract varied. For example, according to program officials, the contractor established support operations in Afghanistan in March 2005 and in Iraq three months later. In July 2005, contractor support in Iraq was increased to provide additional Predator surveillance at a cost of $2.5 million. Also, in June 2006 the contractor support in Afghanistan was moved to Iraq, resulting in a $2.3 million decrease in contract cost. Other factors also decreased contract costs and as a result, the total cost of the contract was less than initially estimated. For example, contract labor rates—which were negotiated and accepted after the contract was awarded—were lower than the rates used to calculate the estimated contract costs, reducing the contract cost by approximately $1.8 million. Additionally, in August 2005 the cost of the contract was decreased by approximately $567,000 due to a 6-week delay in the start of the contract. Other factors that contributed to individual contract cost growth among the contracts we reviewed included (1) short-term contract extensions, (2) the government’s inability to provide promised equipment, (3) changes in host country labor laws, and (4) having to pay for work to be performed multiple times because it did not meet required standards. First, we found that in three of the contracts, short-term contract extensions increased costs because the contractor signed short-term leases, which were more expensive than longer-term leases. The contractors felt it was too risky to obtain long-term leases for such things as vehicles and housing because there was no guarantee that the contract would be extended again. Each of these three contracts was extended for less than 1 year. In each instance, the extensions were to allow for the continuation of contractor services during protests of newly awarded contracts. For example, in April 2007 the linguist contract requirements were being performed under a 3-month extension due to protests of newly awarded linguist contracts. According to the linguist contractor, the short-term extensions diminished its ability to leverage leasing because a short-term lease commitment is more expensive than a longer, 1-year lease commitment. For example, the monthly cost for one contractor to lease trucks under a 6-month lease was $2,437, whereas the monthly cost under a 1-year lease was $1,700—a 30 percent savings. According to the contractor, short-term lease commitments also limit the contractor’s ability to shop around for better prices because most vendors want a longer commitment.
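As a rough check on the lease comparison above, the approximately 30 percent savings follows directly from the two monthly rates reported; this is a minimal worked calculation using only those figures (other lease terms are not considered):

$$
\frac{\$2{,}437 - \$1{,}700}{\$2{,}437} = \frac{\$737}{\$2{,}437} \approx 0.30,
$$

that is, the monthly rate under the 1-year lease was about 30 percent lower than under the 6-month lease.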
Additionally, short-term extensions drain contractor resources and increase overhead costs because the contractor has to prepare cost proposals, review funding, and perform other administrative tasks every 90 or 120 days. While the contractors could enter into leases for a period longer than the specified contract period of performance, they would assume the risk for the cost of the excess months. In addition, in October 2007 the base operations and security services contract requirements were being performed under 6-month contract extensions. According to both the base operations contractor and the security services contractor, it was difficult to find housing that was available for a 6-month lease in Qatar due to the booming economy, and any lease term of fewer than 12 months was costly. For example, according to the security services contractor, the same 12-month housing lease that cost about $1,650 in 2007 cost about $4,100 in 2008. The officials added that, when available, a 6-month lease for the same housing averaged around $4,700 to $5,000. Second, for the linguist contract, additional costs were incurred when the government was unable to provide the equipment or services that were to be government-furnished pursuant to the contract. The contract stated that contractor personnel providing support to the military in contingency operations may be required to wear protective equipment as determined by the supported commander. When required by the commander, the government was to provide the contractor with all military-unique individual equipment. According to contracting officials, due to the large deployments of soldiers requiring protective equipment, there was an insufficient supply of equipment remaining for contractors. Contracting officials told us that when the government does not supply the equipment as provided for under the contract, the contractor is authorized to procure and be reimbursed for the cost of the equipment and the associated general and administrative expenses. When the contractor is paid for the equipment, it becomes government property. According to the contracting officer’s representative, the contractor was able to purchase the equipment at military surplus stores at a cost to the government of approximately $600,000. In addition, contracting officials for this same contract told us that the government was to provide transportation for the contract manager; however, the government did not provide this transportation. As a result, the contractor leased a vehicle to provide this transportation and the government reimbursed the contractor and paid for the associated overhead expenses. Third, changes in the host country labor law resulted in additional security services contract costs. According to the contractor, a change in Qatar’s labor law required that (1) employees not work more than 10 hours in 1 day, including overtime, and (2) employees be given at least a 1-hour break after working for 5 hours. As a result, additional employees were required to provide 24-hour security coverage. The cost of providing this additional manpower in option year two was approximately $752,000. The contractor’s estimated cost for meeting this requirement in the remaining years (option years three and four) was approximately $2.5 million. The contractor also told us that a second change in Qatar’s labor law required workers to be paid for 1 day off a week.
To comply with this change, employees were retroactively paid for the weekly day off from the effective date of the law change until their contract ended. The Army added $1.3 million in funding to the contract to assist with the retroactive pay for the paid day off. Fourth, according to contracting officials, under the two cost-reimbursable equipment maintenance contracts we reviewed, the government must continue to pay for additional work performed on equipment rejected for failure to meet the required maintenance standard. When equipment was presented to the government and did not pass quality assurance inspection, it was returned to the contractor for additional maintenance until it met the required standard. Contracting officials explained that under the cost-plus-fixed-fee maintenance provisions of the contracts, the contractor was reimbursed for all maintenance labor hours incurred, including labor hours associated with maintenance performed after the equipment was rejected because it did not meet specified maintenance standards. This resulted in additional costs to the government. As we reported in January 2008, our analysis of Army data for a task order under one of these contracts in Kuwait found that since May 2005, the contractor worked a total of about 188,000 hours to repair equipment after the first failed government inspection, at an approximate cost to the government of $4.2 million. We were unable to calculate the total cost of the rework performed under the second equipment maintenance contract because, according to officials, information entered into the maintenance database that tracks equipment status and inspection results does not distinguish between the contractor’s internal quality control inspections and government inspections prior to acceptance. DOD’s oversight of some of the contracts we reviewed has been inadequate because of a shortage of qualified oversight and contract administration personnel and because it did not maintain some contract files in accordance with applicable policy and guidance. We have previously reported that inadequate numbers of trained contract management and oversight staff have led to contracting challenges. We found that for five of the seven contracts we reviewed, DOD did not have adequate numbers of qualified personnel at deployed locations to effectively manage and oversee the contracts. Additionally, we found that for four of the contracts we reviewed, the contracting offices either did not maintain complete contract files documenting contract administration and oversight actions taken or did not follow quality assurance guidance. For the other two contracts we reviewed, authorized oversight positions were filled with personnel who could properly oversee the contracts. Having the right people with the right skills to oversee contractor performance is critical to ensuring that DOD receives the best value for the billions of dollars spent each year on contractor-provided services supporting forces deployed in southwest Asia and elsewhere. However, an inadequate number of personnel to oversee and manage contracts is a long-standing problem that continues to hinder DOD’s management and oversight of contractors in deployed locations.
In 2004, we reported that DOD did not always have sufficient contract oversight personnel in place to manage and oversee its logistics support contracts such as LOGCAP and recommended that DOD develop teams of subject matter experts to make periodic visits to deployed locations to judge, among other things, whether its logistics support contracts were being used efficiently. DOD concurred with—but did not implement—this recommendation. In addition, in 2005 we reported in our High-Risk Series that inadequate staffing contributed to contract management challenges in Iraq. In 2006, we reported that oversight personnel told us that DOD does not have adequate personnel at deployed locations to effectively oversee and manage contractors. DOD concurred with our assessment and noted that it had been directed by Congress to undertake a review of the health of the acquisition workforce, including oversight personnel, and to assess the department’s ability to meet the oversight mission. To date, DOD has completed a competency analysis of its workforce but has not determined how many oversight personnel will be needed to provide adequate oversight for contingency contracting. Our review of the staff authorized to provide contract oversight and management revealed similar vacancies in some critical oversight and administration positions for five of the seven contracts, as illustrated by the following examples. The APS-5 contract did not have an administrative contracting officer for almost a year. Oversight of contracts ultimately rests with the contracting officer, who has the responsibility for ensuring that contractors meet the requirements set forth in the contract. However, most contracting officers are not located at the deployed location. As a result, contracting officers often appoint administrative contracting officers to provide day-to-day oversight and management of the contractor at the deployed location. The administrative contracting officer is a certified contracting officer with specialized training and experience. Administrative contracting officers may be responsible for many duties including ensuring contractor compliance with contract quality assurance requirements, approving the contractor’s use of subcontractors, reviewing the contractor’s management systems, reviewing and monitoring the contractor’s purchasing system, and ensuring that government personnel involved with contract management have the proper training and experience. According to the contracting officer, while the administrative contracting officer’s position was vacant, she acted as the administrative contracting officer; however, she was located in the United States and the place of performance for this contract was in Qatar. The APS-5 contract also lacked a property administrator for more than a year. According to a DOD manual, the responsibilities of the property administrator include administering the contract clauses related to government property in the possession of the contractor, developing and applying a property systems analysis program to assess the effectiveness of contractor government property management systems, and evaluating the contractor’s property management system to ensure that it does not create an unacceptable risk of loss, damage, or destruction of property. While some property administrator duties are often delegated to the administrative contracting officer, this contracting office was also without an administrative contracting officer for several months.
As such, important property administration duties were not being performed, including the proper accounting for government-owned, contractor-acquired equipment. As of April 2008, the contract administration office responsible for administering the base operations support and the base security contracts in Qatar had filled only 12 of its 18 authorized positions. The 6 vacant positions included a performance evaluation specialist, 3 contracting specialists, 1 cost analyst, and 1 procurement analyst. Four of the positions had been vacant for 7 months or more, while 2 had been vacant for 4 and 6 months, despite the fact that the Army had designated both as key positions. According to position descriptions provided by the Army, the performance evaluation specialist is a technical quality expert who advises the commander on quality issues. Moreover, the performance evaluation specialist is responsible for the Army’s quality assurance program for the two contracts in Qatar. This includes developing a quality assurance plan, monitoring contractor performance, training junior quality assurance personnel, analyzing quality data for trends, and providing input on the contractor’s performance for the award fee board. This position requires a certified quality assurance professional. While some of these duties were performed by the administrative contracting officer, other duties need specialized skills that administrative contracting officers generally do not have. Contract specialists perform a wide variety of pre- and post-award tasks encompassing complex acquisition planning, contract type selection, contract formation and execution, cost or price analysis, contract negotiation, and contract administration including reviewing monthly contractor invoices. According to the contracting officer’s representative, he was responsible for providing the technical assessment of the contractor’s performance and reviewing contractor invoices, a responsibility for which he said that he was not trained. He also said that the invoices required closer scrutiny than he was able to give them and he often did not know if the invoices included valid expenses or not. In addition, the contracting officer’s representative had oversight responsibilities for five additional contracts and his primary assignment as the base’s Provost Marshal did not always allow him time to complete his contract oversight responsibilities. The procurement analyst, among other things, is responsible for developing cost/pricing data, proposals, and counter-proposals for use in negotiations; analyzing contractor proposals to determine reasonableness; determining appropriateness and reasonableness of proposed labor and overhead rates; and developing data for use in pricing trend analyses. What made these vacancies even more critical was that during this time the contracting office awarded two 1-year contracts to continue providing the base security and base operations services. According to the contracting officer, it was difficult to find qualified candidates to fill some of the vacancies, and in the fall of 2007 the Army rejected a number of applicants because they did not have the right skills. The contracting officer for the Global Maintenance and Supply Services in Kuwait—Task Order 0001 and the APS-5 contract said that her office was understaffed, which made it difficult to keep up with some contract administrative requirements. For example, she said that more staff would allow her office to properly handle the deobligation of funds against contracts.
In January 2008, we reported that (1) the contract management oversight team was inadequately staffed to effectively oversee the Global Maintenance and Supply Services in Kuwait—Task Order 0001, (2) the 401st Army Field Support Battalion was concerned about its ability to administer cost-plus-award-fee provisions, and (3) the battalion was not meeting Army Quality Program requirements due in part to a lack of oversight and contract management staff. Specifically, we reported that there were not enough trained oversight personnel to effectively oversee and manage the task order. We also reported that as of April 2007 four oversight personnel positions were vacant, including two military quality assurance inspectors and two civilian positions—a quality assurance specialist and a property administrator. Because the property administrator position was vacant, proper accounting for some government-owned equipment was not performed. The Army agreed with our recommendation that it take steps to fill the vacant oversight positions, and Army Sustainment Command officials told us that steps were being taken to fill the vacant oversight positions with qualified personnel. According to the officials, 16 military personnel were assigned to the battalion to help provide contract oversight in maintenance, supply, transportation, and operations—8 of whom would be assigned to maintenance. In addition, the officials stated that the quality assurance specialist and property administrator positions had both been announced numerous times and several offers had been declined. The property administrator position was filled in March 2008; however, as of June 2008 the quality assurance specialist position was still vacant. For the linguist contract, officials responsible for the contract said (1) there were not enough contracting officer’s technical representatives to effectively oversee the contract and (2) the representatives spent more time ensuring the contractor met its responsibilities concerning employees’ pay, uniforms, and other things than they did performing the full range of contract oversight actions. According to contracting officials, in February 2007 there were 7 contracting officer’s technical representatives providing oversight for about 8,300 linguists in 120 locations across Iraq and Afghanistan. In one case, a single oversight person was responsible for linguists stationed at more than 40 different locations spread throughout the theater of operations. The officials also said that one theater commander restricted travel within the area of operations during part of the contract period. This travel restriction limited the ability of oversight personnel to perform adequate contract oversight. In addition, oversight officials stated that when they did have the opportunity to visit a forward operating location, they often spent their time focusing on contractor personnel issues such as ensuring that the contractor paid the foreign national linguists on time and as agreed to in their contracts. Oversight officials also cited the following difficulties in performing contract oversight: (1) determining what support the government is supposed to provide to the contractor, (2) getting deployed units to provide support such as subsistence and transportation to the assigned linguists, and (3) the inexperience of unit commanders in working with contractors.
In March 2008, after awarding four new contracts for linguist services, the Army increased the number of alternate contracting officer’s representatives in Iraq and Afghanistan from 7 to 14 in an effort to improve oversight. For the other two contracts we reviewed, authorized oversight positions were filled. For the Stryker contract, the Program Manager-Stryker Brigade Combat Team provided overall contract management and the Defense Contract Management Agency provided contract administration and oversight services for the battle damage repair effort in Qatar. The Defense Contract Management Agency had a designated administrative contracting officer in Kuwait, who also served as the quality assurance evaluator. The quality assurance evaluator traveled to Qatar and performed final inspection of repaired vehicles prior to accepting them for the government. He also performed periodic in-process inspections during his visits to Qatar, as his schedule allowed. Oversight for the Predator contract was performed by the quality assurance group within the Air Combat Command Program Management Squadron. According to Air Force officials, the Predator quality assurance team consisted of a superintendent quality assurance evaluator and 16 additional quality assurance evaluators. One full-time evaluator was located in Iraq while the others were located at Creech Air Force Base, Nevada. The quality assurance evaluators worked full time to ensure that the contractor’s maintenance of the Predator met contract specifications. According to Air Force officials, based on a risk analysis, one evaluator was sufficient to provide oversight in Iraq. The quality assurance evaluators planned their oversight inspections using a monthly contract surveillance audit plan provided by the quality assurance department. At the end of each month, the evaluators in Iraq and at Creech prepared a report that described the results of site audits, technical inspections, any deficiencies identified, the status of corrective action requests, other action items, and an overall summary of the business relationship with the contractor. We found that contracting offices and oversight activities did not always follow policy and guidance for maintaining contract files or established quality assurance principles. According to the FAR, unless otherwise specified, the contract administration office shall maintain suitable records reflecting the nature of quality assurance actions as part of the performance records of the contract. The regulation states that organization of the contract files must be sufficient to ensure the files are readily accessible to principal users and, if needed, a locator system should be established to ensure the ability to locate promptly any contract files. In addition, DFARS procedures, guidance, and information state that the basis for all award fee determinations should be documented in the contract file. However, for three of the contracts we reviewed—including two award fee contracts—the contracting officers could not provide documents supporting contract administration and oversight actions taken. Specifically, for the base operations support, security services, and APS-5 contracts, we asked the contracting offices to provide documentation from the contract files related to past oversight actions, including any records of corrective actions.
Contracting officials said that they could not identify records of oversight actions taken because corrective action requests and other such documentation of contractor performance either were not maintained in the contract files or were maintained in such a manner that the current contracting officer could not locate them and was unaware of their existence. As a result, incoming contracting officers and contract administration personnel said they were unable to identify whether there were recurring contractor performance issues. Some of the contracting office personnel with whom we spoke stated that previous contracting office personnel had not properly documented and maintained all contract actions; however, they could not explain why, given that this occurred prior to their assignments. For the base operations support and security services contracts, we also asked for documents related to the Army’s decision concerning award fees to the contractors; however, the contracting office personnel were unsure whether or how quality assurance evaluations were previously analyzed and used to assess the contractor’s performance for purposes of determining the award fee it received. According to DOD’s guidebook for performance-based service acquisitions, an assessment of contractor performance should be documented, whether acceptable or unacceptable, as it is conducted and this official record may be considered past performance information. As we reported in January 2008, the Army did not always document unacceptable performance for the Global Maintenance and Supply Services in Kuwait—Task Order 0001. We reported that the Army did not always document deficiencies identified during quality assurance inspections despite the requirement to do so in the battalion’s quality and contract management procedures. Instead, quality assurance inspectors allowed the contractor to fix some deficiencies without documenting them in an attempt to prevent a delay in getting the equipment up to standard to pass inspection. We found a similar situation with the APS-5 contract for equipment maintenance in Qatar. We also found that the regulation governing the Army quality program stated that management of a comprehensive quality program requires subject matter practitioners with quality expertise. However, according to oversight officials, assigned contract oversight personnel for the linguist contract were unable to judge the performance of the contractor employees because they were generally unable to speak the languages of the contractor employees they were responsible for overseeing. The officials stated that this prevented the government from assessing linguist quality and identifying ways to improve contractor performance. We asked how the Army could ensure the linguists were properly translating and interpreting information if the quality assurance personnel could not speak the language in question. Agency officials responded that they thoroughly reviewed and validated the contractor’s methodology for determining if the linguists spoke the language and met the proficiency standards. They further stated that if they had people available who could speak the different languages needed, they would not need contract linguists. Similar to our findings, the Army Inspector General reported in October 2007 that shortages of contracting officers, quality assurance personnel, and technically proficient contracting officer’s representatives were noticeable at all levels. 
Without adequate levels of qualified oversight personnel, complete and organized contract files, and consistent implementation of quality assurance principles, DOD’s ability to perform the various tasks needed to monitor contractor performance may be impaired. Additionally, until DOD is able to obtain reasonable assurance that contractors are meeting their contract requirements efficiently and effectively, it will be unable to make fully informed decisions related to award fees as well as additional contract awards. Our selection of contracts did not allow us to project our findings across the universe of DOD contracts for services that support contingency operations. However, given that we identified inadequate oversight and administration staff levels for five of the seven contracts, and in four of the contracts we identified a failure to follow guidance for contract file maintenance or quality assurance principles, we believe the potential for these weaknesses exists in other DOD contracts. As we previously stated, some of the contracts we reviewed have ended; however, DOD continues to acquire those services through new contracts that are managed by the same contract oversight and administration offices and processes. As such, it is likely the weaknesses we identified continue to exist in the new contracts. While we could not determine the cost effect of inadequate oversight, as we have previously reported, inadequate oversight may have negative cost implications. Unless DOD can determine that inadequate oversight and insufficient staff are not a problem on other contracts for services to support contingency operations, the potential for waste exists DOD-wide. DOD uses contractors to support contingency operations for several reasons, including the need to compensate for a decrease in the size of the force and a lack of expertise within the military services. For the seven contracts we reviewed, DOD decided to use a contractor rather than DOD personnel because sufficient numbers of military personnel and DOD civilians were not available or the available personnel did not have the required skills. For five of the seven contracts, DOD lacked sufficient personnel to meet increased requirements for services to support operations in Iraq and Afghanistan. For example, one contract we reviewed was for organizational-level maintenance of the Predator unmanned aerial system. In fiscal year 2002, Congress provided the Air Force $1.6 billion to acquire 60 additional unmanned Predator aircraft; however, according to Air Force documents, it did not have the additional 1,409 personnel needed to maintain these new assets. As a result, the Air Force decided to use contractors to support the additional aircraft. In another example, the contracting officer for a contract that provides maintenance of prepositioned Army equipment and supply services in Qatar told us that these services are contracted out because there were insufficient military personnel to maintain the equipment. According to the official, while maintenance personnel maintain their unit’s equipment, they are not available to maintain all prepositioned equipment in a location such as Qatar. We also reviewed a similar equipment maintenance and supply services contract in Kuwait.
According to the contracting officer, who is responsible for both the Qatar and Kuwait contracts, contractors are used to provide the services in Kuwait because no military personnel were available to meet the requirements during the required time frame and the maintenance effort had previously only been performed by contractors. Additionally, contracting office officials for the security services and base operations support contracts in Qatar told us that contractors provide these services because there are not enough military personnel available to perform the work. For the two other contracts we reviewed, DOD did not have the personnel with specific skill sets to meet the missions. For example, regarding the contract that provides linguist interpretation and translation services for deployed units, Army officials told us that the Army does not have enough military personnel who can speak the various required languages. In February 2007, the contract requirement was for over 11,000 linguists in over 40 different languages and dialects. According to Army officials, years ago the military did not anticipate such a large requirement for Arabic speakers. As a result, it phased out many interpreter military occupational specialties, thereby creating the shortfall. The officials said the requirements for language skills change over time and it is very difficult to forecast what language skills and what number of personnel with those skills will be needed in the future. Similarly, our review of a contract that provided total logistics support for the Stryker program found that these services were contracted out because DOD did not have people with the specific skills to perform the work. According to Army officials, the development, production, and fielding of the Stryker vehicles were done concurrently, and as a result total logistics support had to be contracted out because at that time no organic capability had been established within the military to maintain the vehicles. After the contract was in place, the Army identified a need for the rapid repair of battle-damaged Stryker vehicles in order to restore combat capability. This requirement was added to the existing logistics support contract. According to Army officials, the decision was made to contract for the repair of battle-damaged Army Stryker vehicles because DOD did not have people with the specific skills to perform this type of repair. Moreover, the officials stated that the military will never have an organic capability to repair battle-damaged vehicles because any extensive structural damage typically requires specific welding experience. In May 2007 we reported that DOD and service officials attributed the increased use of contractors for support services to several factors, including (1) increased operations and maintenance requirements from the global war on terror and other contingencies, which DOD has met without an increase in active duty and civilian personnel; (2) federal government policy, which is to rely on the private sector for needed commercial services that are not inherently governmental in nature; and (3) DOD initiatives, such as its competitive sourcing and utility privatization programs. We also reported that officials stated the increased use of contractor support to help meet expanded mission support work has certain benefits.
For example, they said the use of contractors allows uniformed personnel to be available for combat missions, obtaining contractor support in some instances can be faster than hiring government workers, it is generally easier to terminate or not renew a contract than to lay off government employees when operations return to normal, and contractors can provide support capabilities that are in short supply in the active and reserve components, thus reducing the frequency and duration of deployments for certain uniformed personnel. Furthermore, according to other GAO, DOD, and RAND reports, the department also uses contractors because of its need to deploy weapon systems before they are fully developed, and the increasingly complex nature of DOD weapon systems. For example, in a 2005 report that examined the Army’s use of contractors on the battlefield, RAND reported that DOD’s decision to field equipment still in development delays the date at which maintenance work can be performed in-house and extends the time the Army needs contractor personnel because it has not had the time to develop any internal capability. Additionally, in October 2007 DOD reported that the increasing technical complexity of DOD weapons systems and equipment requires a level of specialized technical expertise of limited scope, which DOD does not believe can be cost-effectively serviced and supported by a military force capability, resulting in the use of contractors. While contractors provide valuable support to contingency operations, we have frequently reported that long-standing DOD contract management and oversight problems, including DOD’s failure to follow contract management and oversight policy and guidance, increase the opportunity for waste and make it more difficult for DOD to ensure that contractors are meeting contract requirements efficiently, effectively, and at a reasonable price. Lack of effective oversight over the large number of contracts and contractors raises the potential for mismanagement of millions of dollars of these obligations. As we previously stated, some of the contracts we reviewed have ended, however, DOD continues to acquire these services through new contracts that are managed by the same contract oversight and administration offices and processes. As such, it is likely the weaknesses we identified continue to exist in the new contracts. Our selection of contracts did not allow us to project our findings across the universe of DOD contracts for services that support contingency operations. However, given that we identified inadequate oversight and administration staff levels for five of the seven contracts, and in four of the contracts we identified a failure to follow guidance for contract file maintenance or quality assurance principles, we believe the potential for these weaknesses exists in other DOD contracts supporting contingency operations. To ensure that DOD is able to exercise effective oversight over the contracts we reviewed, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: develop a plan to adequately staff oversight positions with qualified personnel, take steps to determine why guidance for maintaining contract files is not consistently being followed and implement a corrective action plan, and ensure that quality assurance principles are consistently implemented. 
We also recommend that the Secretary of Defense direct each of the service secretaries to conduct a review of the contract administration functions that support contingency operations contracts to determine the prevalence of inadequate contract oversight and administration staffing levels and the extent to which guidance for maintaining contract files and quality assurance principles are not being consistently followed and take corrective actions as necessary. In commenting on a draft of this report, DOD concurred with each of our recommendations and stated that the Army was well aware of the problems we identified. In response to our recommendation that the Army develop a plan to adequately staff oversight positions for the contracts we reviewed with qualified personnel, DOD stated that the Army established the Gansler Commission to review lessons learned in recent operations and provide recommendations to improve effectiveness, efficiency, and transparency for future military operations. The Gansler Commission recommended that the Army contracting workforce be increased by 1,400 personnel. DOD stated that the Army established three new contracting commands that should enhance the focus on contractor oversight and that concept plans to support an increase in contract personnel were being staffed. While the Army’s actions should be viewed as positive steps, increasing the workforce and establishing three new contracting commands will not address, in the near term, the Army’s inadequate oversight personnel on the specific contracts we reviewed. We continue to believe that the Army should ensure that currently authorized oversight positions are filled with qualified personnel. If the concept plans include provisions for filling currently vacant authorized oversight positions with qualified personnel, then the Army’s actions should address our recommendation. In response to our recommendation that the Army take steps to determine why guidance for maintaining contract files is not consistently being followed and implement a corrective action plan, DOD stated that contract files are reviewed for compliance and completeness during all Army Procurement Management Reviews of Army contracting activities and that the Army found that a checklist should be developed. We believe that developing a checklist may be beneficial for identifying information that should be in contract files. However, this may not address the issue of why existing guidance for contract file maintenance, which already identifies what should be included in the files, is not being followed. In response to our recommendation that the Army ensure that quality assurance principles are consistently implemented, DOD stated that it has stressed the requirement to prepare quality assurance surveillance plans for all service contracts greater than $2,500 to ensure systematic quality assurance methods are used. While having a quality assurance surveillance plan can be beneficial to consistent implementation of quality assurance principles, most of the contracts we reviewed had a quality assurance surveillance plan, yet quality assurance principles were not consistently implemented. For example, the Global Maintenance and Supply Services Contract in Kuwait had a quality assurance surveillance plan that required documentation of contractor performance. However, as we reported, the Army did not always document unacceptable contractor performance. 
Because of our concern that the problems we identified may exist in other contingency contracts, we recommended that the service secretaries conduct a review of contract administration functions that support contingency operations contracts to determine the prevalence of inadequate oversight and administration staffing levels and the extent to which guidance for maintaining contract files and quality assurance principles is not being consistently followed and take corrective actions as necessary. In response, DOD stated that it has taken several initiatives to position itself for future operations, including increasing staffing dedicated specifically to contracting in expeditionary operations. While these actions may enhance future contracting for expeditionary operations, they will not address potential problems with active contracts. Additionally, authorized oversight positions in deployed locations need to be filled with qualified personnel to provide contractor oversight. We believe existing active contracts still need to be reviewed to address the problems we identified. DOD’s comments are reprinted in appendix II. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff has any questions regarding this report, please contact me at (202) 512-8365 or solisw@gao.gov. Key contributors to the report are listed in appendix III. To conduct our work, we selected and conducted a case study review of a nonprobability sample of 7 Department of Defense (DOD) contracts for services that support deployed forces. Since a complete list of contracts was not available, we developed a list from which to select our case studies in two steps. First, we developed criteria for such a list of contracts (including task orders) awarded by DOD and its components that included the following: the contract supports deployed forces; Operations and Maintenance (O&M) funds are used to pay for the contract services; the principal place of performance is within the United States Central Command’s Area of Operation (i.e., 50 percent or greater); the contract is to maintain a weapons system(s) and/or provide support, including base support, but not for reconstruction and commodities; the award date of the contract is after October 2002; the contract was still in effect as of December 12, 2006; and the contractor is U.S. based. We provided this list of criteria to DOD which provided us with a list of 34 contracts, some of which did not meet the criteria. Second, we generated a short list of 8 contracts to supplement those provided by DOD based on our research and experience from prior work. We selected our nonprobability sample of 7 contracts from these two lists combined. The selected contracts provided various services such as base operations support, security, vehicle maintenance, and linguist services for case study review. Factors that influenced the case study selection included the extent of work we may have done on a contract during previous GAO reviews, type of contract service provided, location where the contractor’s work was performed, and contract dollar amount. 
Our selection of contracts does not allow us to project our findings across the universe of DOD contracts for services that support deployed forces. To determine why selected contracts supporting deployed forces experienced cost growth, we reviewed available contract requirements and funding documents and interviewed contracting office officials. When available, we compared the initially estimated annual contract costs with the actual annual contract costs to determine if the annual contract costs were different from those initially anticipated. If there was a difference between annual contract costs and the initially estimated contract costs, we reviewed contract modification documents, contractor proposals, and other contract documents, and spoke with contracting office and contractor representatives to determine what led to the change in cost. We also spoke with representatives of the contractor to obtain their views related to changing contract requirements and the impact the changes had on contract costs. To determine the extent to which DOD provided oversight of contracts that support contingency operations, we reviewed a variety of quality assurance and contract management regulations and guidance, including the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement, the Army Quality Program regulation, and DOD’s Guidebook for Performance-Based Services Acquisition in the Department of Defense. We met with contracting and quality assurance officials, and reviewed oversight and surveillance plans and inspection records. In addition, we spoke with representatives of the contractor and reviewed data provided by the contractor. We also observed physical inspections of the services provided for two contracts and toured operation areas for two other contracts. We spoke with oversight and contracting office officials to discuss the extent to which the contract management and oversight teams were adequately staffed to perform administration and oversight activities. While guidance was not available on the appropriate number of personnel needed to monitor contractors in a deployed location, we relied on the judgments and views of contracting office and contract oversight personnel as to the adequacy of staffing. To determine why the department uses contractors to support contingency operations, we interviewed contracting office officials and reviewed available documentation related to the decision to use contractors instead of military or DOD civilian personnel for the contracts. We also reviewed prior GAO work and DOD studies to determine if the bases of the decisions for the seven contracts we reviewed were consistent with those used to make past decisions to contract for services across DOD. We did not, however, compare the cost of contractors versus military personnel or make policy judgments as to whether the use of contractors is desirable. We conducted this performance audit from November 2006 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Carole Coffey, Assistant Director; Sarah Baker, Renee Brown, Larry Junek, Ronald La Due Lake, Katherine Lenane, Susan Mason, Connie W. Sawyer, Jr., and Karen Thornton made key contributions to this report.
The Department of Defense (DOD) uses contractors to meet many of its logistical and operational support needs. With the global war on terrorism, there has been a significant increase in deployment of contractor personnel to areas such as Iraq and Afghanistan. In its fiscal year 2007 report, the House Appropriations Committee directed GAO to examine the link between the growth in DOD's operation and maintenance costs and DOD's increased reliance on service contracts. GAO determined (1) the extent to which costs for selected contracts increased and the factors causing the increases, (2) the extent to which DOD provided oversight for selected contracts, and (3) the reasons for DOD's use of contractors to support contingency operations. To address these objectives, GAO reviewed a nonprobability sample of seven DOD contracts for services that provide vital support to contingency operations in Iraq and Afghanistan. GAO reviewed contract requirements, funding documents and DOD guidance for these contracts and interviewed DOD and contractor personnel. Costs for six of the seven contracts GAO reviewed increased from an initial estimate of $783 million to about $3.8 billion, and one consistent and primary factor driving the growth was increased requirements associated with continued military operations in Iraq and Afghanistan. For example, the Army awarded a $218.2 million task order for equipment maintenance and supply services in Kuwait in October 2004. Since then, approximately $154 million of additional work was added to this task order for vehicle refurbishment, tire assembly and repair, and resetting of prepositioned equipment. Other factors that increased individual contract costs include the use of short-term contract extensions and the government's inability to provide contractually required equipment and services. For example, in three of the contracts GAO reviewed, short-term contract extensions (3 to 6 months) increased costs because the contractor felt it was too risky to obtain long-term leases for vehicles and housing. The actual cost of one contract we reviewed did not exceed the estimated cost for reasons such as lower than projected labor rates. GAO has frequently reported that inadequate staffing contributed to contract management challenges. For some contracts GAO reviewed, DOD's oversight was inadequate because it had a shortage of qualified personnel and it did not maintain some contract files in accordance with applicable guidance. For five contracts, DOD had inadequate management and oversight personnel. In one case, the office responsible for overseeing two contracts was short 6 of 18 key positions, all of which needed specialized training and certifications. In addition, for two other contracts, proper accounting of government owned equipment was not performed because the property administrator position was vacant. Second, DOD did not always follow guidance for maintaining contract files or its quality assurance principles. For four contracts, complete contract files documenting administration and oversight actions taken were not kept and incoming personnel were unable to determine how contract management and oversight had been performed and if the contractor had performed satisfactorily prior to their arrival. In addition, oversight was not always performed by qualified personnel. For example, quality assurance officials for the linguist contract were unable to speak the language so they could not judge the quality of the contractor's work. 
Without adequate levels of qualified oversight personnel, proper maintenance of contract files, and consistent implementation of quality assurance principles, DOD may not be able to determine whether contractors are meeting their contract requirements, which raises the potential for waste. DOD used contractors to support contingency operations for several reasons, including the need to compensate for a decrease in force size and a lack of capability within the military services. For example, an Army contract for linguist services had a requirement for more than 11,000 linguists because DOD did not have the needed linguists. According to Army officials, the Army phased out many interpreter positions years ago and did not anticipate a large need for Arabic speakers.
NRC’s implementation of a risk-informed, performance-based regulatory approach for commercial nuclear power plants is complex and will take many years to complete. It requires basic changes to the regulations and NRC’s processes to ensure the safe operation of these plants. NRC faces a number of challenges in developing and implementing this approach. For example, because of the complexity of this change, the agency needs a strategy to guide its development and implementation. We recommended such a strategy in March 1999. We suggested that a clearly defined strategy would help guide the regulatory transformation if it described the regulatory activities NRC planned to change to a risk-informed approach, the actions needed to accomplish this transformation, and the schedule and resources needed to make these changes. NRC initially agreed that it needed a comprehensive strategy, but it has not developed one. As one NRC Commissioner said in March 2000, “we really are . . . inventing this as we go along given how much things are changing, it’s very hard to plan even 4 months from now, let alone years from now.” NRC did develop the Risk-Informed Regulation Implementation Plan, which includes guidelines to identify, set priorities for, and implement risk-informed changes to regulatory processes. The plan also identifies specific tasks and projected milestones. However, the Risk-Informed Regulation Implementation Plan is not as comprehensive as it needs to be because it does not identify performance measures, the items that are critical to achieving its objectives, activities that cut across its major offices, resources, or the relationships among the more than 40 separate activities (25 of which pertain to nuclear plants). For example, risk-informing NRC’s regulations will be a formidable task because they are interrelated. Amending one regulation can potentially affect other regulations governing other aspects of nuclear plant operations. NRC found this to be the case when it identified over 20 regulations that would need to be made consistent as it developed a risk-informed approach for one regulation. NRC expects that its efforts to change its regulations applicable to nuclear power plants to focus more on relative risk will take 5 to 8 years. NRC has compounded the complexity of moving to a new regulatory approach by deciding that compliance with such an approach will be voluntary. As a result, NRC will be regulating with two different systems—one for those utilities that choose to comply with a risk-informed approach and another for those that choose to stay with the existing regulatory approach. It is not clear how this dual system will be implemented. One part of the new risk-informed approach that has been implemented is a new safety oversight process for nuclear power plants. It was implemented in April 2000, and since then NRC’s challenge has been to demonstrate that the new approach meets its goal of maintaining the same level of safety as the old approach, while being more predictable and consistent. The nuclear industry, states, public interest groups, and NRC staff have raised questions about various aspects of the process. For example, the industry has expressed concern about some of the performance indicators selected. Some NRC staff are concerned that the process does not track all inspection issues and that NRC will not have the information available should the public later demand accountability from the agency. 
Furthermore, it is very difficult under the new process to assess those activities that cut across all aspects of plant operations—problem identification and resolution, human performance, and a safety-conscious work environment. In June 2001, NRC staff expect to report to the Commission on the first year of implementation of the new process and recommend changes, where warranted. NRC is facing a number of difficulties inherent in applying a risk-informed regulatory approach for nuclear material licensees. The sheer number of licensees—almost 21,000—and the diversity of the activities they conduct—converting uranium, decommissioning nuclear plants, transporting radioactive materials, and using radioactive material for industrial, medical, or academic purposes—increase the complexity of developing a risk-informed approach that would adequately cover all types of licensees. For example, the diversity of licensees results in varying levels of analytical sophistication; different experience in using risk-informed methods, such as risk assessments and other methods; and uneven knowledge about the analytical methods that would be useful to them. Because material licensees will be using different risk-informed methods, NRC has grouped them by the type of material used and the regulatory requirements for that material. For example, licensees that manufacture casks to store spent reactor fuel could be required to use formal analytical methods, such as a risk assessment. Other licensees, such as those that use nuclear material in industrial and medical applications, would not be expected to conduct risk assessments. In these cases, NRC staff said that they would use other methods to determine those aspects of the licensees’ operations that have significant risk, using an approach that considers the hazards (type, form, and quantity of material) and the barriers or physical and administrative controls that prevent or reduce exposure to these hazards. Another challenge associated with applying a risk-informed approach to material licensees is how NRC will implement a new risk-informed safety and safeguards oversight process for fuel cycle facilities. Unlike commercial nuclear power plants, which have a number of design similarities, most of the 10 facilities that prepare fuel for nuclear reactors perform separate and unique functions. For example, one facility converts uranium to a gas for use in the enrichment process, two facilities enrich or increase the amount of uranium-235 in the gas, and five facilities fabricate the uranium into fuel for commercial nuclear power plants. These facilities possess large quantities of materials that are potentially hazardous (i.e., explosive, radioactive, toxic, and/or combustible) to workers. The facilities’ diverse activities make it particularly challenging for NRC to design a “one size fits all” safety oversight process and to develop indicators and thresholds of performance. In its recently proposed new risk-informed safety oversight process for material licensees, NRC has yet to resolve such issues as the structure of the problem identification, resolution, and corrective action program; the mechanics of the risk-significance determination process; and the regulatory responses that NRC would take when changes in performance occur. NRC had planned to pilot test the new fuel cycle facility safety oversight process in fiscal year 2001, but staff told us that this schedule could slip. 
NRC also faces challenges in redefining its role in a changing regulatory environment. As the number of agreement states increases beyond the existing 32, NRC must continue to ensure the adequacy and consistency of the states’ programs as well as its own effectiveness and efficiency in overseeing licensees that are not regulated by the agreement states. NRC has been working with the Conference of Radiation Control Program Directors (primarily state officials) and the Organization of Agreement States to address these challenges. However, NRC has yet to address the following questions: (1) Would NRC continue to need staff in all four of its regional offices as the number of agreement states increases? (2) What are the appropriate number, type, and skills for headquarters staff? and (3) What should NRC’s role be in the future? Later this month, a NRC/state working group expects to provide the Commission with its recommended options for the materials program of the future. NRC wants to be in a position to plan for needed changes because in 2003, it anticipates that 35 states will have agreements with NRC and that the states will oversee more than 85 percent of all material licensees. Another challenge NRC faces is to demonstrate that it is meeting one of its performance goals under the Government Performance and Results Act— increasing public confidence in NRC as an effective regulator. There are three reasons why this will be difficult. First, to ensure its independence, NRC cannot promote nuclear power, and it must walk a fine line when communicating with the public. Second, NRC has not defined the “public” that it wants to target in achieving this goal. Third, NRC has not established a baseline to measure the “increase” in its performance goal. In March 2000, the Commission rejected a staff proposal to conduct a survey to establish a baseline. Instead, in October 2000, NRC began an 18-month pilot effort to use feedback forms at the conclusion of public meetings. Twice a year, NRC expects to evaluate the information received on the forms to enhance its public outreach efforts. The feedback forms that NRC currently plans to use will provide information on the extent to which the public was aware of the meeting and the clarity, completeness, and thoroughness of the information provided by NRC at the meetings. Over time, the information from the forms may show that the public better understands the issues of concern or interest for a particular plant. It is not clear, however, how this information will show that public confidence in NRC as a regulator has increased. This performance measure is particularly important to bolster public confidence as the industry decides whether to submit a license application for one or more new nuclear power plants. The public has a long history with the traditional regulatory approach and may not fully understand the reasons for implementing a risk-informed approach and the relationship of that approach to maintaining plant safety. In a highly technical and complex industry, NRC is facing the loss of a significant percentage of its senior managers and technical staff. For example, in fiscal year 2001, about 16 percent of NRC staff are eligible to retire, and by the end of fiscal year 2005, about 33 percent will be eligible. The problem is more acute at the individual office level. For example, within the Office of Nuclear Reactor Regulation, about 42 percent of the technical staff and 77 percent of senior executive service staff are eligible for retirement. 
During this period of potentially very high attrition, NRC will need to rely on that staff to address the nuclear industry’s increasing demands to extend the operating licenses of existing plants and transfer the ownership of others. Likewise, in the Office of Nuclear Regulatory Research, 49 percent of the staff are eligible to retire at the same time that the nuclear industry is considering building new plants. Because that Office plays a key role in reviewing any new plants, if it loses some of its highly skilled, well-recognized research specialists to retirement, NRC will be challenged to make decisions about new plants in a timely way, particularly if a plant has an untested design. In its fiscal year 2000 performance plan, NRC identified the need to maintain core competencies and staff as an issue that could affect its ability to achieve its performance goals. NRC noted that maintaining the correct balance of knowledge, skills, and abilities is critical to accomplishing its mission and is affected by various factors. These factors include the tight labor market for experienced professionals, the workload as projected by the nuclear industry to transfer and extend the licenses of existing plants, and the declining university enrollment in nuclear engineering studies and other fields related to nuclear safety. In October 2000, NRC’s Chairman requested that the staff develop a plan to assess the scientific, engineering, and technical core competencies that NRC needs and propose specific strategies to ensure that the agency maintains that competency. The Chairman noted that maintaining technical competency may be the biggest challenge confronting NRC. In January 2001, NRC staff provided a suggested action plan for maintaining core competencies to the Commission. The staff proposed to begin the 5-year effort in February 2001 at an estimated cost of $2.4 million, including the costs to purchase software that will be used to identify the knowledge and skills needed by NRC. To assess how existing human capital approaches support an agency’s mission, goals, and other organizational needs, we developed a human capital framework, which identified a number of elements and underlying values that are common to high-performing organizations. NRC’s 5-year plan appears to generally include the human capital elements that we suggested. In this regard, NRC has taken the initiative and identified options to attract new employees with critical skills, developed training programs to meet its changing needs, and identified legislative options to help resolve its aging staff issue. The options include allowing NRC to rehire retired staff without jeopardizing their pension payments and to provide salaries comparable to those paid in the private sector. In addition, for nuclear reactor and nuclear material safety, NRC expects to implement an intern program in fiscal year 2002 to attract and retain individuals with scientific, engineering, and other technical competencies. It has established a tuition assistance program, relocation bonuses, and other inducements to encourage qualified individuals not only to accept but also to continue their employment with the agency. NRC staff say that the agency is doing the best that it can with the tools available to hire and retain staff. Continued oversight of NRC’s multiyear effort is needed to ensure that it is being properly implemented and is effective in achieving its goals. Mr. Chairman and Members of the Subcommittee, this concludes our statement. 
We would be pleased to respond to any questions you may have.
This testimony discusses the challenges facing the Nuclear Regulatory Commission (NRC) as it moves from its traditional regulatory approach to a risk-informed, performance-based approach. GAO found that NRC's implementation of a risk-informed approach for commercial nuclear power plants is a complex, multiyear undertaking that requires basic changes to the regulations and processes NRC uses to ensure the safe operation of these plants. NRC needs to overcome several inherent difficulties as it seeks to apply a risk-informed regulatory approach to the nuclear material licensees, particularly in light of the large number of licensees and the diversity of activities they conduct. NRC will have to demonstrate that it is meeting its mandate (under the Government Performance and Results Act) of increasing public confidence in NRC as an effective regulator. NRC also faces challenges in human capital management, such as replacing a large percentage of its technical staff and senior managers who are eligible to retire. NRC has developed a five-year plan to identify and maintain the core competencies it needs and has identified legislative options to help resolve its aging staff problem.
Virtual currencies are financial innovations that have grown in number and popularity in recent years. While there is no statutory definition for virtual currency, the term refers to a digital representation of value that is not government-issued legal tender. Unlike U.S. dollars and other government-issued currencies, virtual currencies do not necessarily have a physical coin or bill associated with their circulation. While virtual currencies can function as a unit of account, store of value, and medium of exchange, they are not widely used or accepted. Some virtual currencies can only be used within virtual economies (for example, within online role-playing games) and may not be readily exchanged for government-issued currencies such as U.S. dollars, euro, or yen. Other virtual currencies may be used to purchase goods and services in the real economy and can be converted into government-issued currencies through virtual currency exchanges. In previous work, we described the latter type of virtual currencies as “open flow.” Open-flow virtual currencies have received considerable attention from federal financial regulatory and law enforcement agencies, in part because these currencies interact with the real economy and because depository institutions (for example, banks and credit unions) may have business relationships with companies that exchange virtual currencies for government-issued currencies. Throughout the remainder of this report, we use the term virtual currencies to mean open-flow virtual currencies, unless otherwise stated. Virtual currency systems, which include protocols for conducting transactions in addition to digital representations of value, can either be centralized or decentralized. Centralized virtual currency systems have a single administering authority that issues the currency and has the authority to withdraw the currency from circulation. In addition, the administrating authority issues rules for use of the currency and maintains a central payment ledger. In contrast, decentralized virtual currency systems have no central administering authority. Validation and certification of transactions are performed by users of the system and therefore do not require a third party to perform intermediation activities. A prominent example of a decentralized virtual currency system is bitcoin. Bitcoin was developed in 2009 by an unidentified programmer or programmers using the name Satoshi Nakamoto. According to industry stakeholders, bitcoin is the most widely circulated decentralized virtual currency. The bitcoin computer protocol permits the storage of unique digital representations of value (bitcoins) and facilitates the assignment of bitcoins from one user to another through a peer-to-peer, Internet-based network. Each bitcoin is divisible to eight decimal places, enabling their use in any kind of transaction regardless of the value. Users’ bitcoin balances are associated with bitcoin addresses (long strings of numbers and letters) that use principles of cryptography to help safeguard against inappropriate tampering with bitcoin transactions and balances. When users transfer bitcoins, the recipient provides their bitcoin address to the sender, and the sender authorizes the transaction with their private key (essentially a secret code that proves the sender’s control over their bitcoin address). Bitcoin transactions are irrevocable and do not require the sender or receiver to disclose their identities to each other or a third party. 
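To make the mechanics described above concrete, the following is a minimal, simplified sketch of how a key pair can stand in for an identity: an address is derived by hashing a public key, and a transfer is authorized by signing it with the corresponding private key. The sketch assumes Python with the third-party ecdsa package; the address derivation, message format, and function names are illustrative simplifications rather than bitcoin’s actual encoding, transaction format, or network rules.

```python
# Simplified illustration of key-based payment authorization.
# Assumes the third-party "ecdsa" package (pip install ecdsa); this is a
# conceptual sketch, not bitcoin's actual transaction or address format.
import hashlib
from ecdsa import SigningKey, SECP256k1, BadSignatureError

def make_keypair():
    """Generate a private/public key pair on the curve bitcoin uses (secp256k1)."""
    private_key = SigningKey.generate(curve=SECP256k1)
    return private_key, private_key.get_verifying_key()

def address_from_public_key(public_key) -> str:
    """Derive a pseudo-address by hashing the public key (real bitcoin uses
    SHA-256 followed by RIPEMD-160 and Base58Check encoding, omitted here)."""
    return hashlib.sha256(public_key.to_string()).hexdigest()[:40]

def sign_transfer(private_key, sender_addr: str, recipient_addr: str, amount_btc: float) -> bytes:
    """The sender authorizes the transfer by signing its contents with the private key."""
    message = f"{sender_addr}->{recipient_addr}:{amount_btc:.8f}".encode()
    return private_key.sign(message)

def verify_transfer(public_key, sender_addr: str, recipient_addr: str, amount_btc: float, signature: bytes) -> bool:
    """Anyone can check the signature against the public key, without knowing who holds it."""
    message = f"{sender_addr}->{recipient_addr}:{amount_btc:.8f}".encode()
    try:
        return public_key.verify(signature, message)
    except BadSignatureError:
        return False

if __name__ == "__main__":
    sender_sk, sender_pk = make_keypair()
    _, recipient_pk = make_keypair()
    sender = address_from_public_key(sender_pk)
    recipient = address_from_public_key(recipient_pk)
    sig = sign_transfer(sender_sk, sender, recipient, 0.00000001)  # one hundred-millionth of a bitcoin
    print("transfer valid:", verify_transfer(sender_pk, sender, recipient, 0.00000001, sig))
```

Because verification requires only the message, the signature, and the public key, anyone can confirm that the holder of the sending address authorized the transfer without learning that holder’s real-world identity.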
However, each transaction is registered in a public ledger called the “blockchain,” which maintains the associated bitcoin addresses and the transaction dates, times, and amounts. Users can define how much additional information they require of each other to conduct a transaction. According to industry observers, examples of technologies used to increase the privacy of participants in virtual currency transactions include (1) anonymizing networks, which use a distributed network of computers to conceal the real Internet address of users, such as The Onion Router (TOR); (2) “tumblers” such as BitcoinBath and BitLaundry that combine payments from multiple users to obstruct identification through the blockchain; and (3) alternative virtual currencies such as Zerocoin and Anoncoin that aim to make transactions fully anonymous. In addition, researchers have developed methods to determine the identities of parties involved in some bitcoin transactions by analyzing clusters of transactions between specific addresses. New bitcoins enter circulation through “mining,” in which users’ computers are rewarded with newly created bitcoins for solving the math problems that verify the validity of bitcoin transactions. By design, there will be a maximum of 21 million bitcoins in circulation once all bitcoins have been mined, which is projected to occur in the year 2140. Once all bitcoins have been mined, miners will be rewarded for verifying transactions through transaction fees rather than new bitcoins. Businesses that accept bitcoins as payment can accept them directly or use third-party payment processors that take payments in bitcoins from buyers and provide businesses the payments in the form of a traditional currency or a combination of bitcoins and traditional currency. Figure 1 shows various ways that individuals can obtain and spend bitcoins. Due to data limitations, it is difficult to calculate the velocity, or the rate at which bitcoins are spent, and the number of transactions between unique users in a given time period. While bitcoin is the most widely used virtual currency, numerous others have been created. For example, dozens of decentralized virtual currencies, such as Litecoin, Auroracoin, Peercoin, and Dogecoin, are based on the bitcoin protocol. As with the bitcoin market, the size of the market for these virtual currencies is unclear. However, as of March 31, 2014, the total reported value of each of these currencies was less than $400 million (ranging from about $33 million for Dogecoin to about $346 million for Litecoin). Other virtual currencies that have been created are not based on the bitcoin protocol. One of the more prominent examples is XRP, which is used within a decentralized payment system called Ripple. Ripple allows users to make peer-to-peer transfers in any currency. A key function of XRP is to facilitate the conversion from one currency to another. For example, if a direct conversion between Mexican pesos and Thai baht is not available, the pesos can be exchanged for XRP, and then the XRP for baht. As of March 31, 2014, the total value of XRP was $878 million. Virtual currencies have drawn attention from federal agencies with responsibilities for protecting the U.S. financial system and its participants and investigating financial crimes. These include, but are not limited to, CFPB, CFTC, DHS, DOJ, SEC, Treasury, and the prudential banking regulators. 
The prudential banking regulators are the FDIC, Federal Reserve, NCUA, and OCC. Within Treasury, FinCEN has a particular interest in the emergence of virtual currencies because of concerns about the use of these currencies for money laundering and FinCEN’s role in combating such activity. Additionally, because virtual currencies (like government-issued currencies) can play a role in a range of financial and other crimes, including cross-border criminal activity, key components of DOJ and DHS have an interest in how virtual currencies are used. Relevant DOJ components include the Criminal Division (which oversees the Computer Crime and Intellectual Property Section and the Asset Forfeiture and Money Laundering Section), the FBI, and the Offices of the U.S. Attorneys (U.S. Attorneys). Relevant DHS components include the Secret Service and ICE-HSI. Money laundering is the process of disguising or concealing the source of funds acquired illicitly to make the acquisition appear legitimate. While federal agencies’ responsibilities with respect to virtual currency are still being clarified, some virtual currency activities and products have implications for the responsibilities of federal financial regulatory and law enforcement agencies. Virtual currencies have presented these agencies with emerging challenges as they carry out their different responsibilities. These challenges stem partly from certain characteristics of virtual currency systems, such as the higher degree of anonymity they provide compared with traditional payment systems and the ease with which they can be accessed globally to make payments and transfer funds across borders. Although virtual currencies are not government-issued and do not currently pass through U.S. banks, some activities and products that involve virtual currencies have implications for the responsibilities of federal financial regulatory and law enforcement agencies. These activities and products encompass both legitimate and illegitimate uses of virtual currencies. Examples of legitimate uses include buying virtual currencies and registered virtual-currency-denominated investment products. Examples of illegitimate uses include money laundering and purchasing illegal goods and services using virtual currencies. FinCEN administers BSA and its implementing regulations. The goal of BSA is to prevent financial institutions from being used as intermediaries for the transfer or deposit of money derived from criminal activity and to provide a paper trail to assist law enforcement agencies in their money laundering investigations. To the extent that entities engaged in money transmission conduct virtual currency transactions with U.S. customers or become customers of a U.S. financial institution, FinCEN has responsibilities for helping ensure that these entities comply with BSA and anti-money-laundering regulations. Under 31 C.F.R. § 1010.100(ff)(1)-(7), money services businesses are generally defined as any of the following: (1) currency dealer or exchanger, (2) check casher, (3) issuer or seller of traveler’s checks or money orders, (4) provider or seller of prepaid access, (5) money transmitter, and (6) the U.S. Postal Service. FinCEN’s regulations define a money transmitter as a person that provides money transmission services, or any other person engaged in the transfer of funds. 31 C.F.R. § 1010.100(ff)(5)(i). 
The term “money transmission services” means the “acceptance of currency, funds, or other value that substitutes for currency to another location or person by any means.” Money services businesses are also required to monitor transactions and file reports on large currency transactions and suspicious activities. In addition, certain financial institutions must establish a written customer identification program that includes procedures for obtaining minimum identification information from customers who open an account, such as date of birth, a government identification number, and physical address. Further, financial institutions must file currency transaction reports on customer cash transactions exceeding $10,000 that include information about the account owner’s identity and occupation. FinCEN also supports the investigative and prosecutive efforts of multiple federal and state law enforcement agencies through its administration of the financial transaction reporting and recordkeeping requirements mandated or authorized under BSA. In addition, FinCEN has the authority to take enforcement actions, such as assessing civil money penalties, against financial institutions, including money services businesses, that violate BSA requirements. The prudential banking regulators—FDIC, Federal Reserve, NCUA, and OCC—provide oversight of depository institutions’ compliance with BSA and anti-money-laundering requirements. Therefore, these regulators are responsible for providing guidance and oversight to help ensure that depository institutions that have opened accounts for virtual currency exchanges or other money services businesses have adequate anti-money-laundering controls for those accounts. In April 2005, FinCEN and the prudential banking regulators issued joint guidance to banking organizations (depository institutions and bank holding companies) to clarify BSA requirements with respect to money services businesses and to set forth the minimum steps that banking organizations should take when providing banking services to these businesses. As part of safety and soundness or targeted BSA compliance examinations of depository institutions, the prudential banking regulators assess compliance with BSA and related anti-money-laundering requirements using procedures that are consistent with their overall risk-focused examination approach. In examining depository institutions for BSA compliance, the regulators review whether depository institutions (1) have developed anti-money-laundering programs and procedures to detect and report unusual or suspicious activities possibly related to money laundering; and (2) comply with the technical recordkeeping and reporting requirements of BSA. While most cases of BSA noncompliance are corrected within the examination framework, regulators can take a range of supervisory actions, including formal enforcement actions, against the entities they supervise for violations of BSA and anti-money-laundering requirements. These formal enforcement actions can include imposing civil money penalties and initiating cease-and-desist proceedings. CFPB is an independent entity within the Federal Reserve that has broad consumer protection responsibilities over an array of consumer financial products and services, including taking deposits and transferring money. 
CFPB is responsible for enforcing federal consumer protection laws, and it is the primary consumer protection supervisor over many of the institutions that offer consumer financial products and services. CFPB also has authority to issue and revise regulations that implement federal consumer financial protection laws, including the Electronic Fund Transfer Act and title X of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). CFPB officials stated that they are reviewing how these responsibilities are implicated by consumer use (or potential consumer use) of virtual currencies. Other relevant CFPB responsibilities concerning virtual currencies include accepting and handling consumer complaints, promoting financial education, researching consumer behavior, and monitoring financial markets for new risks to consumers. For example, under authorities provided by the Dodd-Frank Act, CFPB maintains a Consumer Complaint Database and helps monitor and assess risks to consumers in the offering or provision of consumer financial products or services. CFPB also issues consumer advisories to promote clarity, transparency, and fairness in consumer financial markets. SEC regulates the securities markets—including participants such as securities exchanges, broker-dealers, investment companies, and investment advisers—and takes enforcement actions against individuals and companies for violations of federal securities laws. SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. Virtual currencies may have implications for a number of SEC responsibilities. For example, SEC has enforcement authority for violations of federal securities laws prohibiting fraud by any person in the purchase, offer, or sale of securities. SEC enforcement extends to virtual-currency-related securities transactions. Additionally, when companies offer and sell securities (including virtual-currency-related securities), they are subject to SEC requirements to either register the offering with SEC or qualify for a registration exemption. SEC reviews registration statements to ensure that potential investors receive adequate information about the issuer, the security, and the offering. Further, if a registered national securities exchange wanted to list a virtual-currency-related security, it could only do so if the listing complied with the exchange’s existing rules or the exchange had filed a proposed rule change with SEC to permit the listing. Virtual currencies may also have implications for other SEC responsibilities, as the following examples illustrate: SEC has examination authority for entities it regulates, including registered broker-dealers, to ensure compliance with federal securities laws, SEC rules and regulations, and BSA requirements. According to SEC officials, if a broker-dealer were to accept payments in virtual currencies from customers, this could raise potential anti-money-laundering issues that the broker-dealer would have to account for. SEC also regulates and has examination authority over investment advisers subject to its jurisdiction. Under the Investment Advisers Act of 1940, investment advisers are fiduciaries. To the extent that an investment adviser recommends virtual currencies or virtual-currency-related securities, the investment adviser’s federal fiduciary duty would govern this conduct. 
If registered broker-dealers held virtual currencies for their own account or an account of a customer, SEC would have to determine how to treat the virtual currencies for purposes of its broker-dealer financial responsibility rules, including the net capital rule. CFTC has the authority to regulate financial derivative products and their markets, including commodity futures and options. In addition, CFTC investigates and prosecutes alleged violations of the Commodity Exchange Act and related regulations. CFTC’s mission is to protect market users and the public from fraud, manipulation, abusive practices, and systemic risk related to derivatives subject to the Commodity Exchange Act. CFTC’s responsibilities with respect to virtual currencies depend partly on whether bitcoin or other virtual currencies meet the definition of a commodity under the Commodity Exchange Act. CFTC officials said the agency would not make a formal determination on this issue until market circumstances require one. According to CFTC, such circumstances could include virtual-currency derivatives emerging or being offered in the United States or CFTC becoming aware of the existence of fraud or manipulative schemes involving virtual currencies. The officials said that if derivatives backed by or denominated in virtual currencies that CFTC determines to be commodities were to emerge, CFTC’s regulatory authorities would apply to those derivatives just as they would to any other derivative product subject to CFTC’s jurisdiction. To carry out its regulatory responsibilities, CFTC would, among other things, evaluate the derivatives to ensure they were not susceptible to manipulation, review applications for new exchanges wishing to offer such derivatives, and examine exchanges offering these derivatives to ensure compliance with the applicable commodity exchange laws. Similar to SEC, CFTC has examination authority for BSA compliance—in this case directed at futures commission merchants and other futures market intermediaries—and acceptance of virtual currency payments by these entities could raise BSA compliance concerns. CFTC would also have to make determinations about the capital treatment of virtual currencies if these entities held virtual currencies for their own account or an account of a customer. Law enforcement agencies, including but not limited to DHS and DOJ component agencies and offices, have responsibilities to investigate a variety of federal crimes that may involve the use of virtual currencies and to support the prosecution of those who commit these crimes. Like traditional currencies, virtual currencies can facilitate a range of criminal activities, including fraud schemes and the sale of illicit goods and services, that may fall under the purview of federal law enforcement agencies. The emergence of virtual currencies has had particular significance for financial crimes. According to DOJ officials, the main law enforcement interests with respect to virtual currencies are to (1) deter and prosecute criminals who use virtual currency systems to launder money (that is, move or hide money that either facilitates or is derived from criminal or terrorist activities); and (2) investigate and prosecute virtual currency services that themselves violate money transmission and money laundering laws. A number of DOJ and DHS components, including the FBI, ICE-HSI, and Secret Service, investigate financial crimes as part of their broader responsibilities. 
In addition, DOJ’s Asset Forfeiture and Money Laundering Section prosecutes money laundering violations, and DOJ and DHS manage the seizure and forfeiture of assets that represent the proceeds of, or were used to facilitate, federal crimes. Key laws that may apply to the use of virtual currencies in financial crimes include BSA, as amended by Title III of the USA PATRIOT Act, and anti-money-laundering statutes. Additionally, because virtual currencies operate over the Internet, they have implications for agency components that investigate and prosecute computer crimes (also called cybercrimes). For example, DOJ’s Computer Crime and Intellectual Property Section stated that virtual currencies can be attractive to entities that seek to facilitate or conduct computer crimes over the Internet, such as computer-based fraud and identity theft. The section’s responsibilities include improving legal processes for obtaining electronic evidence and working with other law enforcement agencies in improving the technological and operational means for gathering and analyzing electronic evidence. The FBI, Secret Service, and ICE-HSI also investigate computer crimes. The emergence of virtual currencies presents challenges to federal agencies responsible for financial regulation, law enforcement, and consumer and investor protection. These challenges stem partly from certain characteristics of virtual currencies, such as the higher degree of anonymity they provide and the ease with which they can be sent across borders. In addition, the growing popularity of virtual currencies has highlighted both risks and benefits for agencies to consider in carrying out their responsibilities. As previously noted, some virtual currency systems may provide a higher degree of anonymity than traditional payment systems because they do not require the disclosure of personally identifiable information (that is, information that can be used to locate or identify an individual, such as names or Social Security numbers) to transfer funds from one party to another. When transferring funds in the amount of $3,000 or more between the bank accounts of two individuals, the banks involved are required by FinCEN regulations to obtain and keep the names and other information of the individuals, as well as information on the transaction itself. The customer identification information collected by the banks helps create a paper trail of financial transactions that law enforcement agencies can use to detect illegal activity, such as money laundering or terrorist financing, and to identify and apprehend criminals. However, in a transfer between two individuals using bitcoins (or a similar type of decentralized virtual currency), no personally identifiable information is necessarily disclosed to either party or to a third-party intermediary. As a result, virtual currencies may be attractive to parties seeking to protect personally identifiable information, maintain financial privacy, buy or sell illicit goods and services, or move or conceal money obtained by illegal means. Further, virtual currency exchangers or administrators may be used to facilitate money laundering if they do not collect identifying information from customers and retain other transaction information. 
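To illustrate the recordkeeping contrast described above, the sketch below applies the dollar thresholds discussed in this report—$3,000 for funds-transfer recordkeeping and $10,000 for currency transaction reports on cash transactions. It is a hypothetical illustration only: the record fields, constants, and function names are assumptions, not FinCEN’s actual reporting formats.

```python
# Hypothetical illustration of the records a bank keeps for a funds transfer,
# using the thresholds described in this report. Field names, constants, and
# function names are assumptions for illustration, not actual FinCEN formats.
from dataclasses import dataclass, asdict

FUNDS_TRANSFER_RECORD_THRESHOLD_USD = 3_000     # obtain and keep sender/recipient information
CASH_TRANSACTION_REPORT_THRESHOLD_USD = 10_000  # currency transaction report for cash transactions

@dataclass
class FundsTransfer:
    sender_name: str
    recipient_name: str
    amount_usd: float

def record_to_retain(transfer: FundsTransfer) -> dict:
    """Return the customer and transaction information the bank must keep,
    or an empty dict when the transfer falls below the recordkeeping threshold."""
    if transfer.amount_usd >= FUNDS_TRANSFER_RECORD_THRESHOLD_USD:
        return asdict(transfer)  # names plus transaction details form the paper trail
    return {}

def requires_currency_transaction_report(cash_amount_usd: float) -> bool:
    """Cash transactions exceeding $10,000 trigger a currency transaction report."""
    return cash_amount_usd > CASH_TRANSACTION_REPORT_THRESHOLD_USD

if __name__ == "__main__":
    transfer = FundsTransfer("A. Sender", "B. Recipient", 5_000.00)
    print(record_to_retain(transfer))                        # identities retained for a $5,000 transfer
    print(requires_currency_transaction_report(12_500.00))   # True: report required for a $12,500 cash transaction
```

A peer-to-peer bitcoin transfer of the same value, by contrast, records only addresses and amounts on the blockchain, and no intermediary collects the parties’ names unless an exchanger or administrator chooses to do so.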
For these reasons, law enforcement and federal financial regulatory agencies have indicated that virtual currencies can create challenges for agencies in detecting unlawful actions and the entities that carry them out. For example, the FBI has noted that because bitcoin does not have a centralized entity to monitor and report suspicious activity and process legal requests such as subpoenas, law enforcement agencies face difficulty in detecting suspicious transactions using bitcoins and identifying parties involved in these transactions.

Because they operate over the Internet, virtual currencies can be used globally to make payments and funds transfers across borders. In addition, according to agency officials, many of the entities that exchange traditional currencies for virtual currencies (or vice versa) are located outside of the United States. If these exchangers have customers located in the United States, they must comply with BSA and anti-money-laundering requirements. Due to the cross-jurisdictional nature of virtual currency systems, federal financial regulatory and law enforcement agencies face challenges in enforcing these requirements and investigating and prosecuting transnational crimes that may involve virtual currencies. For example, law enforcement may have to rely upon cooperation from international partners to conduct investigations, make arrests, and seize criminal assets. Additionally, violators, victims, and witnesses may reside outside of the United States, and relevant customer and transaction records may be held by entities in different jurisdictions, making it difficult for law enforcement and financial regulators to access them. Further, virtual currency exchangers or administrators may operate out of countries that have weak legal and regulatory regimes or that are less willing to cooperate with U.S. law enforcement.

Virtual currency industry stakeholders have noted that virtual currencies present both risks and benefits that federal agencies need to consider in regulating entities that may be associated with virtual-currency-related activities. As previously noted, the risks include the attractiveness of virtual currencies to those who may want to launder money or purchase illicit goods and services. Another emerging set of risks involves consumer and investor protection—in particular, whether consumers and investors understand the potential drawbacks of buying, holding, and using virtual currencies or investing in virtual-currency-based securities. Consumers may not be aware of certain characteristics and risks of virtual currencies, including the following:

Lack of bank involvement. Virtual currency exchanges and wallet providers are not banks. If they go out of business, there may be no specific protections like deposit insurance to cover consumer losses.

Stated limits on financial recourse. Some virtual currency wallet providers purport to disclaim responsibility for consumer losses associated with unauthorized wallet access. In contrast, credit and debit card networks state that consumers have no liability for fraudulent use of accounts.

Volatile prices. The prices of virtual currencies can change quickly and dramatically (as shown previously in fig. 2).

Additionally, an SEC official told us that virtual-currency-based securities may be attracting individuals who are younger and less experienced than typical investors.
The official expressed concern that younger investors may lack the sophistication to properly assess the risks of such investments and the financial resources to recover from losses on the investments, including losses resulting from fraud schemes.

While virtual currencies present risks to consumers and investors, they also provide several potential benefits to consumers and businesses:

Cost and speed. Decentralized virtual currency systems may, in some circumstances, provide lower transaction costs and be faster than traditional funds transfer systems because the transactions do not need to go through a third-party intermediary. The irrevocable feature of virtual currency payments may also contribute to lower transaction costs by eliminating the costs of consumer chargebacks. Industry stakeholders have noted that cost and time savings may be especially significant for international remittances (personal funds immigrants send to their home countries), which sometimes involve sizeable fees and can take several days. In addition, industry stakeholders have indicated that the potentially lower costs of virtual currency transactions—for example, relative to credit and debit cards—may facilitate the use of micropayments (very small financial transactions) as a way of selling items such as online news articles, music, and smartphone applications.

Financial privacy. To the extent that bitcoin (or other virtual currency) addresses are not publicly associated with a specific individual, peer-to-peer virtual currency transactions can provide a greater degree of financial privacy than transactions using traditional payment systems, because no personally identifiable information is exchanged.

Access. Because virtual currencies can be accessed anywhere over the Internet, they are a potential way to provide basic financial services to populations without access to traditional financial institutions, such as rural populations in developing countries. However, this potential benefit hinges on access to the Internet, which these populations may not have, and may be offset by the lack of protections against losses noted previously.

Federal agency officials have acknowledged the need to consider both the risks and benefits of virtual currencies in carrying out their responsibilities. For example, the Director of FinCEN has testified that the emergence of virtual currencies has prompted consideration of vulnerabilities that these currencies create in the financial system and how illicit actors will take advantage of them. However, she also noted that innovation is an important part of the economy and that FinCEN needs to have regulation that mitigates concerns about illicit actors while minimizing regulatory burden. Similarly, the former Acting Assistant Attorney General for DOJ's Criminal Division has testified that law enforcement needs to be vigilant about the criminal misuse of virtual currency systems while recognizing that there are many legitimate users of those services. Balancing concerns about the illicit use of virtual currencies against the potential benefits of these technological innovations will likely be an ongoing challenge for federal agencies.

Federal financial regulators and law enforcement agencies have taken a number of actions related to the emergence of virtual currencies, including providing regulatory guidance, assessing anti-money-laundering compliance, and investigating crimes and violations that have been facilitated by the use of virtual currencies.
However, interagency working groups addressing virtual currencies have not focused on consumer protection and have generally not included CFPB.

FinCEN has taken a number of actions in recent years to establish and clarify requirements for participants in virtual currency systems. For example, in July 2011, FinCEN finalized a rule that modified the definitions of certain money services businesses. Among other things, the rule states that persons who accept and transmit currency, funds, or "other value that substitutes for currency," are considered to be money transmitters. Additionally, in March 2013, FinCEN issued guidance that clarified the applicability of BSA regulations to participants in certain virtual currency systems. The FinCEN guidance classified virtual currency exchangers and administrators as money services businesses and, more specifically, as money transmitters. The guidance also specified that virtual currency users are not money services businesses. As a result, the guidance clarified that virtual currency exchangers and administrators must follow requirements to register with FinCEN as money transmitters; institute risk assessment procedures and anti-money-laundering program control measures; and implement certain recordkeeping, reporting, and transaction monitoring requirements, unless an exception to these requirements applies. According to FinCEN officials, as of December 2013, approximately 40 virtual currency exchangers or administrators had registered with FinCEN.

In 2014, in response to questions from industry stakeholders, FinCEN issued administrative rulings to clarify the types of participants to which the March 2013 guidance applies. In January 2014, FinCEN issued rulings stating that the way in which a virtual currency is obtained is not material, but the way in which a person or corporation uses the virtual currency is. As a result, the rulings specify that two kinds of users are not considered money transmitters subject to FinCEN's regulations: miners who use and convert virtual currencies exclusively for their own purposes and companies that invest in virtual currencies exclusively as an investment for their own account. However, the rulings specify that these two kinds of users may no longer be exempt from FinCEN's money transmitter requirements if they conduct their activities as a business service for others. The rulings also note that transfers of virtual currencies from these types of users to third parties should be closely scrutinized because they may constitute money transmission. In April 2014, FinCEN issued another administrative ruling, which states that companies that rent computer systems for mining virtual currencies are not considered money transmitters subject to FinCEN's regulations.

FinCEN has also taken additional steps to help ensure that companies required to register as money services businesses under FinCEN's March 2013 virtual currency guidance have done so. According to FinCEN officials, FinCEN has responded to letters from companies seeking clarification about their requirements. Also, officials told us that FinCEN has proactively informed other companies that they should register as money services businesses.

As part of their oversight activities, NCUA and SEC have addressed situations involving virtual currencies, and other federal financial regulators have had internal discussions regarding virtual currencies.
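Before turning to those oversight examples, it may help to restate the classification logic in FinCEN's March 2013 guidance and 2014 administrative rulings described above: exchangers and administrators are money transmitters; users are not; and miners acting solely for their own purposes and companies investing solely for their own account are not, unless they conduct those activities as a business service for others. The following is a minimal, illustrative sketch of that logic; the category names and function are hypothetical simplifications, not FinCEN's terminology, and are no substitute for the guidance and rulings themselves.

```python
# Illustrative simplification of the participant categories discussed above.
# Not legal guidance; edge cases require reading FinCEN's 2013 guidance and
# 2014 administrative rulings directly.

def is_money_transmitter(role: str, as_business_service_for_others: bool = False) -> bool:
    """Rough classification of a virtual currency participant.

    role: one of "exchanger", "administrator", "user",
          "miner_own_use", "investor_own_account".
    as_business_service_for_others: True if the activity is performed as a
          business service for third parties, which can remove the exemption
          for miners and investors under the January 2014 rulings.
    """
    if role in ("exchanger", "administrator"):
        return True   # money services businesses, specifically money transmitters
    if role == "user":
        return False  # users are not money services businesses
    if role in ("miner_own_use", "investor_own_account"):
        # Exempt only while acting exclusively for their own purposes.
        return as_business_service_for_others
    raise ValueError(f"unknown role: {role}")

# Example: a mining company that begins transmitting value for customers
# would no longer fall under the own-use exemption.
assert is_money_transmitter("exchanger")
assert not is_money_transmitter("miner_own_use")
assert is_money_transmitter("miner_own_use", as_business_service_for_others=True)
```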
NCUA has had two supervisory situations in which credit unions were involved with activity related to virtual currencies. These situations emerged after reviews of credit unions found that their anti-money-laundering and antifraud measures needed to be revised in light of activity involving virtual currency exchanges.

In 2013, NCUA issued a preliminary warning letter to a federal credit union that provided account services to money services businesses that also served as bitcoin exchanges. The warning letter was based on various conditions that NCUA determined could undermine the credit union's stability. For example, the credit union did not have adequate anti-money-laundering controls in place for its money services business accounts. Further, the letter stated that the credit union should not have served money services businesses that were not part of the credit union's strategic plan, and that serving these businesses was not consistent with the credit union's charter, which called for serving the local community. The warning letter required the credit union to immediately cease all transactions with these money services business accounts and establish an appropriate BSA and anti-money-laundering infrastructure. As a result, the credit union ceased such activity and strengthened its BSA and anti-money-laundering compliance program.

In 2012, NCUA provided support to a state regulator's review of a credit union's commercial customer. The state regulator found that this commercial customer was a payment processor—that is, a payment network that allows any business or person to send, request, and accept money—that had customers that were bitcoin exchanges. According to NCUA, the state regulator worked with the credit union to ensure that its BSA compliance program was adequate to monitor and address the risks associated with payment processors that serve bitcoin exchanges. The state regulator also worked to ensure that the payment processor's risk management practices included sufficient antifraud and anti-money-laundering measures. The payment processor subsequently suspended all accounts that served virtual currency exchanges.

In addition, SEC has taken enforcement action against an individual and entity that are alleged to have defrauded investors through a bitcoin-denominated Ponzi scheme. SEC has also issued investor alerts, has begun to review a registration statement from an entity that wants to offer virtual-currency-related securities, and is monitoring for potential securities law violations related to virtual currencies. SEC's investor alerts, issued in 2013 and 2014, addressed fraud and other investment risks related to virtual currencies. In addition, in March 2014, the Financial Industry Regulatory Authority, a self-regulatory organization for the securities industry, issued an investor alert about the risks of buying, using, and speculating in virtual currencies and the potential for related scams. See http://www.finra.org/Investors/ProtectYourself/InvestorAlerts/FraudsAndScams/P456458. Also, in April 2014, the North American Securities Administrators Association issued an investor advisory on virtual currencies, related investment risks, and the types of investments that might involve virtual currencies. See http://www.nasaa.org/30631/informed-investor-advisory-virtual-currency.
SEC staff have begun to review a registration statement from a company that wants to conduct a public offering of virtual-currency- related securities and has received notice of a company offering a private virtual-currency-related security, relying upon an exemption from registration. In July 2013, the Winklevoss Bitcoin Trust filed a registration statement for an initial public offering of its securities. The Trust is structured similarly to an exchange-traded fund and will hold bitcoins as its only assets. The Trust filed amended registration statements in October 2013 and February 2014, but the registration statement remains pending as of April 14, 2014, meaning that the Trust is not yet permitted to sell its securities in a public offering. Also, in October 2013, Bitcoin Investment Trust, a bitcoin-denominated pooled investment fund affiliated with SecondMarket, Inc. and available only to accredited investors, filed a notice with SEC indicating that it had sold securities in an exempt offering in reliance on Rule 506(c) of the Securities Act. Rule 506(c) allows an issuer to raise an unlimited amount of money, but imposes restrictions on who can invest in the offering and requires the issuer to take reasonable steps to verify that those investing are accredited investors. SEC staff are also monitoring the Internet and other sources, such as referrals from other agencies, for potential securities law violations involving bitcoin and other virtual currencies. Further, all of the federal financial regulatory agencies we interviewed have had internal discussions on how virtual currencies work and what implications the emergence of virtual currencies might have for their responsibilities. While agencies generally told us that their conversations have been informal and ad hoc, some efforts have been more organized: In 2013, the Federal Reserve took several steps to share information on virtual currencies among the Board of Governors and the 12 Federal Reserve Banks. Among other things, the Board of Governors’ BSA and anti-money-laundering specialist conference included a session focused on FinCEN’s virtual currency guidance and recent law enforcement actions. The Board of Governors also circulated general information about virtual currencies within the Federal Reserve System to use in answering questions from media and the public about virtual currencies and federal financial regulatory actions to date. In 2013, SEC formed an internal Digital Currency Working Group, which aims to foster information sharing internally and externally. According to SEC, the working group consists of approximately 50 members from among SEC’s divisions and offices. In 2012, FinCEN held three internal information-sharing events on virtual currencies. These events covered issues including how virtual currencies compare to traditional currencies and risks related to emerging payment systems such as virtual currencies. Law enforcement agencies have taken actions against parties involved in the illicit use of virtual currencies to facilitate crimes. These parties have included administrators and users of centralized virtual currency systems designed to facilitate money laundering or other crimes, parties who have used virtual currencies to buy or sell illicit goods and services online, and virtual currency exchanges and online payment processors operating without the proper licenses. 
In 2013 and 2014, law enforcement agencies took actions against Silk Road, a black market website that allegedly accepted bitcoin as the sole payment method for the purchase of illegal goods and services. The website contained over 13,000 listings for controlled substances as well as listings for malicious software programs, pirated media content, fake passports, and computer hacking services (see fig. 3). The FBI; Drug Enforcement Administration (DEA); IRS; ICE-HSI; the Bureau of Alcohol, Tobacco, Firearms, and Explosives; the Secret Service; the U.S. Marshals Service; and Treasury's Office of Foreign Assets Control investigated the case together, along with officials from New York as well as Australia, Iceland, Ireland, and France. In September and October 2013, law enforcement shut down the Silk Road website and seized approximately 174,000 bitcoins, which the FBI reported were worth approximately $34 million at the time of seizure. In February 2014, DOJ indicted Silk Road's alleged owner and operator on charges including narcotics conspiracy, engaging in a continuing criminal enterprise, conspiracy to commit computer hacking, and money laundering conspiracy.

In May 2013, law enforcement agencies seized the accounts of a U.S.-based subsidiary of Mt. Gox, a now-defunct Tokyo-based virtual currency exchange with users from multiple countries including the United States, on the basis that the subsidiary was operating as an unlicensed money services business. The seizure included U.S. bank accounts of Mt. Gox that were held by a private bank and Dwolla, an online payment processor that allegedly allowed users to buy and sell bitcoins on Mt. Gox. According to ICE-HSI, Mt. Gox had moved funds into numerous online black markets, the bulk of which were associated with the illicit purchase of drugs, firearms, and child pornography. At the direction of the U.S. Attorney's office, ICE-HSI ordered Dwolla to stop all payments to Mt. Gox and seized $5.1 million from the Mt. Gox subsidiary's U.S. accounts.

Also in May 2013, law enforcement agencies shut down Liberty Reserve, a centralized virtual currency system that was allegedly designed and frequently used to facilitate money laundering and had its own virtual currency. Secret Service, ICE-HSI, and IRS investigated the case together, along with officials from 16 other countries. To shut down the site, FinCEN identified Liberty Reserve as a financial institution of primary money laundering concern under section 311 of the USA PATRIOT Act, effectively cutting it off from the U.S. financial system. DOJ then charged Liberty Reserve with operating an unlicensed money transmission business and with money laundering for facilitating the movement of more than $6 billion in illicit proceeds. As of April 2014, this investigation had produced $40 million in seizures and had resulted in the arrests of five individuals.

In April 2013, law enforcement agencies filed a civil asset forfeiture complaint against Tcash Ads Inc., an online payment processor that allegedly enabled users to make purchases anonymously from virtual currency exchanges, for operating an unlicensed money services business. Additionally, law enforcement agencies seized the bank accounts of Tcash Ads Inc. The Secret Service worked on the case with FinCEN and DOJ's Asset Forfeiture and Money Laundering Section.
From October 2010 through November 2012, law enforcement agencies convicted three organizers of a worldwide conspiracy to use a network of virus-controlled computers that deployed e-mail spam designed to manipulate stock prices. The organizers paid the spammers $1.4 million for their illegal services via the centralized virtual currency e-Gold and wire transfers. Charges included conspiring to further securities fraud using spam, conspiring to transmit spam through unauthorized access to computers, and four counts of transmission of spam by unauthorized computers. Law enforcement agencies have also taken other actions to help support investigations involving the illicit use of virtual currencies, including the following examples. The FBI has produced numerous criminal intelligence products addressing virtual currencies. These intelligence products have generally focused on cases involving the illicit use of virtual currencies, ways in which virtual currencies have been or could be used to facilitate crimes, and the related challenges for law enforcement. The FBI shares these products with foreign, state, and local law enforcement partners as appropriate. Through standing bilateral agreements governing the exchange of law enforcement information, ICE-HSI is arranging meetings with various international partners to exchange intelligence and garner operational support on virtual currency issues. ICE-HSI also developed the Illicit Digital Economy Program, which aims to target the use of virtual currencies for money-laundering purposes by defining and organizing the primary facets of the digital economy, building internal capacity, training and developing agents and analysts, engaging other agencies, and promoting public-private partnerships. Federal agency efforts to collaborate on virtual currency issues have involved creating a working group specifically focused on virtual currency, leveraging existing interagency mechanisms, and sharing information through informal interagency channels. For example, in 2012, the FBI formed the Virtual Currency Emerging Threats Working Group (VCET), an interagency working group that includes other DOJ components, FinCEN, ICE-HSI, SEC, Secret Service, Treasury, and other relevant federal partners. The purpose of VCET is to leverage members’ expertise to address new virtual currency trends, address potential implications for law enforcement and the U.S. intelligence community, and mitigate the cross-programmatic threats arising from illicit actors’ use of virtual currency systems. The VCET meets about once every 3 months. Federal agencies have also begun to discuss virtual currency issues in existing interagency working groups that address broader topics such as money laundering, electronic crimes, and the digital economy, as follows: The BSA Advisory Group—which is chaired by FinCEN and includes the prudential banking regulators, Treasury, federal and state law enforcement and regulatory agencies, and industry representatives— has addressed virtual currency issues in a number of ways. In May 2013, FinCEN provided a briefing on bitcoin, and in December 2013 three stakeholders from the virtual currency industry gave presentations on their business models and regulatory challenges. In addition, the BSA Advisory Group invited a representative of the virtual currency industry to join the group in 2014. 
The Federal Financial Institutions Examination Council (FFIEC) Bank Secrecy Act/Anti-Money-Laundering Working Group—which is currently chaired by OCC and includes the prudential banking regulators and CFPB—is in the process of revising the current (2010) FFIEC BSA/Anti-Money Laundering Examination Manual. The revisions related to virtual currencies may include information on FinCEN's March 2013 guidance and regulatory expectations that depository institutions should undertake a risk assessment with a particular focus on the money laundering risks posed by new products and services.

The Secret Service-sponsored Electronic Crimes Task Forces (ECTF) includes 35 Secret Service field offices; federal law enforcement agencies such as ICE-HSI; and members of the private sector, academia, and state and local law enforcement. This group's mission is to prevent, detect, and investigate electronic crimes, including those involving virtual currency. This group has conducted computer forensics and other investigative activity on various virtual currencies and made arrests of individuals who have used virtual currencies as part of their criminal activities. This group has also held quarterly meetings on virtual currencies to discuss legal and regulatory issues and trends in crimes involving virtual currencies.

The Digital Economy Task Force was established in 2013 by Thomson Reuters (a multinational media and information firm) and the International Centre for Missing & Exploited Children. This task force includes members from both the public and private sectors. Task force members from the federal government include representatives from the FBI, ICE-HSI, Secret Service, the Department of State, and the United States Agency for International Development. This group published a report in March 2014 on the benefits and challenges of the digital economy. Among other things, the report recommended continuing private and public research into the digital economy and illegal activities, investing in law enforcement training, rethinking investigative techniques, fostering cooperation between agencies, and promoting a national and global dialogue on policy related to virtual currencies.

A number of other existing interagency working groups have discussed or addressed virtual currency issues to some extent. See appendix II for more information on these groups.

Federal agencies have also started to collaborate outside of these working groups to help improve their knowledge of issues related to the emergence of virtual currencies and share pertinent information with various agencies. FinCEN and SEC have hosted meetings with industry representatives and consultants to discuss how virtual currency systems such as bitcoin and Ripple work and what legal, regulatory, technology, and law enforcement issues they present. These agencies have invited officials from other federal agencies to these sessions. FinCEN consulted with financial regulators and law enforcement agencies as it was formulating its March 2013 guidance on virtual currencies. These agencies included CFPB, CFTC, DEA, FBI, ICE-HSI, IRS, the prudential banking regulators, SEC, and the Secret Service. SEC notified CFTC of its review of the Winklevoss Bitcoin Trust registration statement. FinCEN issued a Networking Bulletin on cryptocurrencies in March 2013 to provide details to law enforcement agencies and assist them in following money moving between virtual currency channels and the traditional U.S. financial system.
Among other things, the bulletin addressed the role of entities that facilitate the purchase and exchange of virtual currencies and the types of records these entities maintain that could be useful to investigative officials. Also, the Networking Bulletin elicited information from its recipients, which in turn helped FinCEN issue additional analytical products of a tactical nature to inform law enforcement operations. FinCEN has also shared this information with several regulatory and foreign financial intelligence unit partners.

CFPB officials said they had recently conferred on virtual currency issues with a number of domestic and international regulators, including the Federal Reserve Bank of San Francisco, the Federal Trade Commission, NCUA, OCC, Treasury, New York State's Department of Financial Services, and the European Banking Authority. In addition, the officials said they had met with industry participants on these issues and conferred with interested academic and consumer group stakeholders, as well as law firms, consultancies, and industry associations.

Although there are numerous interagency collaborative efforts that have addressed virtual currency issues in some manner, interagency working groups have not focused on consumer protection issues. Rather, as previously discussed, these efforts have focused on BSA and anti-money-laundering controls and investigations of crimes in which virtual currencies have been used. In addition, CFPB's involvement in interagency working groups that address virtual currencies has been limited. GAO's key practices on collaboration state that it is important to include relevant participants in interagency collaborative efforts in order to ensure, among other things, that these participants contribute knowledge, skills, and abilities to the outcomes of the effort. In addition, these key practices state that once an interagency group has been established, it is important to reach out to potential participants who may have a shared interest in order to ensure that opportunities for achieving outcomes are not missed. CFPB might be a relevant participant in a broader set of collaborative efforts on virtual currencies because virtual currency systems provide a new way of making financial transactions, and CFPB's responsibilities include ensuring that consumers have timely and understandable information to make responsible decisions about financial transactions. Further, CFPB's strategic goals include helping consumers understand the costs, risks, and tradeoffs of financial decisions and surfacing financial trends and emergent risks relevant to consumers.

Although interagency working groups addressing virtual currencies have not focused on consumer protection issues, recent events have highlighted the risks individuals face in buying and holding these currencies. For example, several notable thefts of bitcoins by computer hackers have occurred in the past few years, including the theft of more than 35,000 bitcoins from a virtual wallet provider in April 2013 and 24,000 bitcoins from a bitcoin exchange in September 2012. More recently, in February 2014, Mt. Gox filed for bankruptcy, stating that a security breach resulted in the loss of 850,000 bitcoins, the vast majority of which belonged to its customers. These bitcoins were worth more than $460 million when Mt. Gox filed for bankruptcy. Mt. Gox subsequently reported that it had found 200,000 of these bitcoins in an unused virtual wallet.
Certain parties have taken actions to inform consumers about the potential risks associated with virtual currencies, but these actions have occurred outside of federal interagency efforts and have not included CFPB. In April 2014, the Conference of State Bank Supervisors and the North American Securities Administrators Association issued joint model consumer guidance to assist state regulatory agencies in educating consumers about virtual currencies and the risks of purchasing, exchanging, and investing in virtual currencies. Additionally, from February through April 2014, a number of states issued consumer alerts about virtual currencies. On the international front, the European Banking Authority issued a warning to consumers in December 2013 about the risks involved in buying or holding virtual currencies.

Federal interagency working groups addressing virtual currency issues have not focused on consumer protection, and CFPB has generally not participated in these groups, for a number of potential reasons. For example, the extent to which individuals using virtual currencies are speculative investors or ordinary consumers is unclear, and CFPB has received few consumer complaints about these currencies. CFPB's complaint intake system is not specifically geared towards virtual currency complaints; however, in February 2014, CFPB ran a query of its Consumer Complaint Database to determine the number of complaints that had mentioned virtual currency or bitcoin and found that only 14 out of about 290,000 complaints met that condition. In addition, incidents involving the use of virtual currencies for illicit purposes have made money laundering and other law enforcement issues primary concerns, and existing interagency working groups are primarily composed of agencies that share responsibilities for these matters. However, emerging consumer risks indicate that interagency collaborative efforts may need to place greater emphasis on consumer protection issues in order to address the full range of challenges posed by virtual currencies. Additionally, without CFPB's participation, interagency working groups are not fully leveraging the expertise of the lead consumer financial protection agency, and CFPB may not be receiving information that it could use to assess the risks that virtual currencies pose to consumers.

Consumer risks associated with virtual currencies are emerging, as evidenced by the loss or theft of bitcoins from exchanges and virtual wallet providers and by consumer warnings issued by nonfederal and non-U.S. entities. However, federal interagency working groups addressing virtual currencies have thus far not emphasized consumer-protection issues, and participation by the federal government's lead consumer financial protection agency, CFPB, has been limited. Therefore, these efforts may not be consistent with key practices that can benefit interagency collaboration, such as including all relevant participants to ensure that their knowledge, skills, and abilities contribute to the outcomes of the effort. As a result, future interagency efforts may not be in a position to address consumer risks associated with virtual currencies in the most timely and effective manner.
To help ensure that federal interagency collaboration on virtual currencies addresses emerging consumer protection issues, we recommend that the Director of CFPB (1) identify which interagency working groups could help CFPB maintain awareness of these issues or would benefit from CFPB’s participation; and (2) decide, in coordination with the agencies already participating in these efforts, which ones CFPB should participate in. We provided a draft of this report to CFPB, CFTC, DOJ, DHS, FDIC, the Federal Reserve, NCUA, OCC, SEC, and Treasury for review and comment. CFPB and NCUA provided written comments, which are reprinted in appendixes III and IV. In addition, CFPB, CFTC, DHS, DOJ, the Federal Reserve, NCUA, OCC, SEC, and Treasury provided technical comments, which we incorporated into the report where appropriate. In its letter, CFPB concurred with our recommendation to identify and participate in pertinent interagency working groups addressing virtual currencies. CFPB stated that, to date, these groups have primarily focused on BSA concerns, anti-money-laundering controls, and the investigation of crimes involving virtual currencies. CFPB said that, as a result, its participation in these working groups has been limited. CFPB also stated that as consumer protection concerns have increased in recent months, its own work on virtual currencies and the work of other financial regulators in this area could benefit from a collaborative approach. In its letter, NCUA said that the report provides a clear discussion of the risks related to virtual currencies as well as a survey of current efforts in the regulatory community to address the related policy issues. NCUA also expressed support for increasing emphasis on consumer protection issues pertaining to virtual currencies. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to CFPB, CFTC, DOJ, DHS, FDIC, the Federal Reserve, NCUA, OCC, SEC, Treasury, interested congressional committees and members, and others. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix shows how bitcoins enter into circulation through “mining,” how transactions are conducted, and how miners verify transactions (see fig. 4). In this appendix, we present some of the interagency working groups (including task forces and other interagency collaborative bodies) that have discussed virtual currency issues, and in some cases, taken specific actions. This list is based on information we obtained from the federal financial regulatory and law enforcement agencies we met with and is not intended to be an exhaustive list. Lawrance L. Evans, Jr. (202) 512-8678 or evansl@gao.gov. In addition to the contact named above, Steve Westley (Assistant Director), Bethany Benitez, Chloe Brown, Anna Chung, Tonita Gillich, José R. Peña, and Robert Pollard made key contributions to this report. Also contributing to this report were Jennifer Schwartz, Jena Sinkfield, Ardith Spence, Andrew Stavisky, and Sarah Veale.
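Appendix I, described above, explains how bitcoins enter circulation through mining and how miners verify transactions (see fig. 4). As a supplement to that figure, the following is a minimal, dependency-free sketch of the proof-of-work idea behind mining; the difficulty target, block structure, and values shown are illustrative assumptions, not Bitcoin's actual parameters.

```python
# Minimal proof-of-work sketch of the "mining" idea referenced in appendix I.
# Real Bitcoin mining uses double SHA-256 over a structured block header and a
# far higher difficulty; the values here are illustrative assumptions only.
import hashlib
import json

DIFFICULTY_PREFIX = "0000"  # assumed difficulty: hash must start with four zeros

def block_hash(block: dict, nonce: int) -> str:
    payload = json.dumps(block, sort_keys=True) + str(nonce)
    return hashlib.sha256(payload.encode()).hexdigest()

def mine(block: dict) -> tuple[int, str]:
    """Search for a nonce whose hash meets the difficulty target."""
    nonce = 0
    while True:
        digest = block_hash(block, nonce)
        if digest.startswith(DIFFICULTY_PREFIX):
            return nonce, digest
        nonce += 1

def verify(block: dict, nonce: int, claimed_hash: str) -> bool:
    """Anyone can cheaply re-run the hash to confirm the miner's work."""
    return block_hash(block, nonce) == claimed_hash and claimed_hash.startswith(DIFFICULTY_PREFIX)

if __name__ == "__main__":
    block = {
        "previous_hash": "00" * 32,  # placeholder for the prior block's hash
        "transactions": [{"from": "addr1", "to": "addr2", "amount_btc": 0.5}],
    }
    nonce, digest = mine(block)
    print(f"nonce={nonce} hash={digest}")
    print("verified:", verify(block, nonce, digest))
```

Finding a qualifying nonce is computationally costly, but checking one is cheap, which is what allows other participants to verify transactions without relying on a central intermediary.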
Virtual currencies—digital representations of value that are not government-issued—have grown in popularity in recent years. Some virtual currencies can be used to buy real goods and services and exchanged for dollars or other currencies. One example of these is bitcoin, which was developed in 2009. Bitcoin and similar virtual currency systems operate over the Internet and use computer protocols and encryption to conduct and verify transactions. While these virtual currency systems offer some benefits, they also pose risks. For example, they have been associated with illicit activity and security breaches, raising possible regulatory, law enforcement, and consumer protection issues. GAO was asked to examine federal policy and interagency collaboration issues concerning virtual currencies. This report discusses (1) federal financial regulatory and law enforcement agency responsibilities related to the use of virtual currencies and associated challenges and (2) actions and collaborative efforts the agencies have undertaken regarding virtual currencies. To address these objectives, GAO reviewed federal laws and regulations, academic and industry research, and agency documents; and interviewed federal agency officials, researchers, and industry groups. Virtual currencies are financial innovations that pose emerging challenges to federal financial regulatory and law enforcement agencies in carrying out their responsibilities, as the following examples illustrate: Virtual currency systems may provide greater anonymity than traditional payment systems and sometimes lack a central intermediary to maintain transaction information. As a result, financial regulators and law enforcement agencies may find it difficult to detect money laundering and other crimes involving virtual currencies. Many virtual currency systems can be accessed globally to make payments and transfer funds across borders. Consequently, law enforcement agencies investigating and prosecuting crimes that involve virtual currencies may have to rely upon cooperation from international partners who may operate under different regulatory and legal regimes. The emergence of virtual currencies has raised a number of consumer and investor protection issues. These include the reported loss of consumer funds maintained by bitcoin exchanges, volatility in bitcoin prices, and the development of virtual-currency-based investment products. For example, in February 2014, a Tokyo-based bitcoin exchange called Mt. Gox filed for bankruptcy after reporting that it had lost more than $460 million. Federal financial regulatory and law enforcement agencies have taken a number of actions regarding virtual currencies. In March 2013, the Department of the Treasury's Financial Crimes Enforcement Network (FinCEN) issued guidance that clarified which participants in virtual currency systems are subject to anti-money-laundering requirements and required virtual currency exchanges to register with FinCEN. Additionally, financial regulators have taken some actions regarding anti-money-laundering compliance and investor protection. For example, in July 2013, the Securities and Exchange Commission (SEC) charged an individual and his company with defrauding investors through a bitcoin-based investment scheme. Further, law enforcement agencies have taken actions against parties alleged to have used virtual currencies to facilitate money laundering or other crimes. 
For example, in October 2013, multiple agencies worked together to shut down Silk Road, an online marketplace where users paid for illegal goods and services with bitcoins. Federal agencies also have begun to collaborate on virtual currency issues through informal discussions and interagency working groups primarily concerned with money laundering and other law enforcement matters. However, these working groups have not focused on emerging consumer protection issues, and the Consumer Financial Protection Bureau (CFPB)—whose responsibilities include providing consumers with information to make responsible decisions about financial transactions—has generally not participated in these groups. Therefore, interagency efforts related to virtual currencies may not be consistent with key practices that can benefit interagency collaboration, such as including all relevant participants to ensure they contribute to the outcomes of the effort. As a result, future interagency efforts may not be in a position to address consumer risks associated with virtual currencies in the most timely and effective manner. GAO recommends that CFPB take steps to identify and participate in pertinent interagency working groups addressing virtual currencies, in coordination with other participating agencies. CFPB concurred with this recommendation.
Internal control is not one event, but a series of activities that occur throughout an entity’s operations and on an ongoing basis. Internal control should be recognized as an integral part of each system that management uses to regulate and guide its operations rather than as a separate system within an agency. In this sense, internal control is management control that is built into the entity as a part of its infrastructure to help managers run the entity and achieve their goals on an ongoing basis. Section 3512 (c), (d) of Title 31, U.S. Code, commonly known as the Federal Managers’ Financial Integrity Act of 1982 (FMFIA), requires agencies to establish and maintain internal control. The agency head must annually evaluate and report on the control and financial systems that protect the integrity of federal programs. The requirements of FMFIA serve as an umbrella under which other reviews, evaluations, and audits should be coordinated and considered to support management’s assertion about the effectiveness of internal control over operations, financial reporting, and compliance with laws and regulations. Office of Management and Budget (OMB) Circular No. A-123, Management’s Responsibility for Internal Control, provides the implementing guidance for FMFIA, and sets out the specific requirements for assessing and reporting on internal controls consistent with the internal control standards issued by the Comptroller General of the United States. The circular defines management’s responsibilities related to internal control and the process for assessing internal control effectiveness, and provides specific requirements for conducting management’s assessment of the effectiveness of internal control over financial reporting. The circular requires management to annually provide assurances on internal control in its performance and accountability report, and for each of the 24 Chief Financial Officers Act agencies to include a separate assurance on internal control over financial reporting, along with a report on identified material weaknesses and corrective actions. The circular also emphasizes the need for integrated and coordinated internal control assessments that synchronize all internal control-related activities. FMFIA requires GAO to issue standards for internal control in the federal government. The Standards for Internal Control in the Federal Government (i.e., internal control standards) provides the overall framework for establishing and maintaining effective internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. As summarized in the internal control standards, internal control in the government is defined by the following five elements, which also provide the basis against which internal controls are to be evaluated: Control environment: Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. Risk assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Control activities: Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. 
Information and communications: Information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. A key objective in our annual audits of IRS’s financial statements is to obtain reasonable assurance that IRS maintained effective internal controls with respect to financial reporting, including safeguarding of assets, and compliance with laws and regulations. While we use all five elements of internal control as a basis for evaluating the effectiveness of IRS’s internal controls, our ongoing evaluations and tests have focused heavily on control activities to identify internal control weaknesses and offer recommendations for corrective action. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. In other words, they are the activities conducted in the everyday course of business that are intended to accomplish a control objective, such as ensuring IRS employees successfully complete background checks prior to being granted access to taxpayer information and receipts. As such, control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achievement of effective results. To accomplish our objectives, we evaluated the effectiveness of corrective actions IRS implemented during fiscal year 2008 in response to open recommendations as part of our fiscal years 2008 and 2007 financial audits. To determine the current status of the recommendations, we (1) obtained IRS’s reported status of each recommendation and corrective action taken or planned as of April 2009, (2) compared IRS’s reported status to our fiscal year 2008 audit findings to identify any differences between IRS’s and our conclusions regarding the status of each recommendation, and (3) performed additional follow-up work regarding IRS’s actions taken to address the open recommendations. In order to determine how these recommendations fit within IRS’s management and internal control structure, we compared the open recommendations and the issues that gave rise to them, to the control activities listed in the internal control standards and to the list of major factors and examples outlined in our Internal Control Management and Evaluation Tool. We also considered how the recommendations and the underlying issues were categorized in our prior reports; whether IRS had addressed, in whole or in part, the underlying control issues that gave rise to the recommendations; and other legal requirements and implementing guidance, such as OMB Circular No. A-123; FMFIA; and the Federal Information System Controls Audit Manual (FISCAM). Our work was performed from December 2008 through May 2009 in accordance with generally accepted government auditing standards. Further details on our audit scope and methodology are included in our report on the results of our audits of IRS’s fiscal years 2008 and 2007 financial statements. We requested comments on a draft of this report from the Commissioner of Internal Revenue or his designee on May 26, 2009. We received comments from the Commissioner on June 11, 2009. We have reprinted IRS’s written comments in appendix III. 
IRS continues to make progress addressing its significant financial management challenges. Over the years since we first began auditing IRS’s financial statements in fiscal year 1992, IRS has taken actions that enabled us to close over 200 of our financial management-related recommendations. This includes 35 recommendations we are closing based on actions IRS took during the period covered by our fiscal year 2008 financial audit. At the same time, however, our audits continue to identify additional internal control issues, resulting in further recommendations for corrective action, including 16 new financial management-related recommendations resulting from our fiscal year 2008 financial audit. These internal control issues, and the resulting recommendations, can be directly traced to the control activities in the internal control standards. As such, it is essential that they be fully addressed and resolved to strengthen IRS’s overall financial management to efficiently and effectively achieve its goals and mission. In July 2008, we issued a report on the status of IRS’s efforts to implement corrective actions to address financial management recommendations stemming from our fiscal year 2007 and prior year financial audits and other financial management-related work. In that report, we identified 81 audit recommendations that remained open and thus required corrective action by IRS. A significant number of these recommendations had been open for several years, either because IRS had not taken corrective action or because the actions taken had not yet effectively resolved the issues that gave rise to the recommendations. IRS continued to work to address many of the internal control issues to which these open recommendations relate. In the course of performing our fiscal year 2008 financial audit, we identified numerous actions IRS took to address many of its internal control issues. On the basis of IRS’s actions, which we were able to substantiate through our audit, we are able to close 35 of these prior years’ recommendations. IRS considers another 18 of the prior years’ recommendations to be effectively addressed. However, we still consider them to be open either because we have not yet been able to verify the effectiveness of IRS’s actions or because, in our view, the actions taken did not fully address the issue that gave rise to the recommendation. Forty-six recommendations from prior years remain open, a significant number of which have been outstanding for several years. During our audit of IRS’s fiscal year 2008 financial statements, we identified additional issues that require corrective action. In a recent management report to IRS, we discussed these issues, and made 16 new recommendations to address them. Consequently, 62 financial management-related recommendations need to be addressed. While most of these can be addressed in the short term, a few, particularly those concerning IRS’s automated systems, are complex and will require several more years to fully and effectively address. We consider 52 recommendations to be short-term and 10 to be long-term. In addition to the 62 open recommendations from our financial audits and other financial management-related work, there are 74 open recommendations stemming from our assessment of IRS’s information security controls over key financial systems, information, and interconnected networks. 
Those 74 primarily relate to lack of an agencywide information security program, which was a key reason for the material weakness in IRS’s information systems security controls over its financial and tax processing systems. Unresolved, previously reported recommendations and newly identified recommendations related to information security increase the risk of unauthorized disclosure, modification, or destruction of financial and sensitive taxpayer data. Recommendations resulting from the information security issues identified in our annual audits of IRS’s financial statements are reported separately because of the sensitive nature of these issues. Appendix I presents a list of (1) the 81 recommendations based on our financial statement audits and other financial management-related work that we had not previously reported as closed, (2) IRS-reported corrective actions taken or planned as of April 2009, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively addressed based primarily on the work performed during our fiscal year 2008 financial statement audit. Appendix I includes recommendations based on our fiscal year 2008 financial statement audit. The appendix lists the recommendations by the date on which the recommendation was made and by report number. Appendix II presents the open recommendations arranged by related material weakness, significant deficiency, compliance issue, or other control issue as described in our opinion report on IRS’s financial statements. Linking the open recommendations from our financial audits and other financial management-related work, and the issues that gave rise to them, to internal control activities that are central to IRS’s tax administration responsibilities provides insight regarding their significance. The internal control standards define 11 control activities grouped into three broad categories as shown in table 1. The open recommendations from our financial audits and financial management-related work, and the underlying issues that gave rise to them, can be traced to one of the control activities. As table 1 indicates, 20 recommendations (32 percent) relate to issues associated with IRS’s lack of effective controls over safeguarding of assets and security activities. Another 24 recommendations (39 percent) relate to issues associated with IRS’s inability to properly record and document transactions. The remaining 18 open recommendations (29 percent) relate to issues associated with the lack of effective management review and oversight. On the following pages, we group the 62 open recommendations under the control activity to which the condition that gave rise to them most appropriately fits. We first define each control activity as presented in the internal control standards and briefly identify some of the key IRS operations that fall under that control activity. Although not comprehensive, the descriptions are intended to help explain why actions to strengthen these control activities are important for IRS to efficiently and effectively carry out its overall mission. For each recommendation, we also indicate whether it is a short-term or long-term recommendation. For those characterized as short-term, we believe that IRS has the capability to implement solutions within 2 years. Given IRS’s mission, the sensitivity of the data it maintains, and its processing of trillions of dollars of tax receipts each year, one of the most important control activities at IRS is the safeguarding of assets. 
Internal control in this important area should be designed to provide reasonable assurance regarding prevention or prompt detection of unauthorized acquisition, use, or disposition of an agency’s assets. We have grouped together the four control activities in the internal control standards that relate to safeguarding of assets (including tax receipts) and security activities (such as limiting access to only authorized personnel): (1) physical control over vulnerable assets, (2) segregation of duties, (3) controls over information processing, and (4) access restrictions to and accountability for resources and records. Internal control standard: An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records. IRS collects trillions of dollars in taxes each year, a significant amount of which is collected in the form of checks and cash accompanied by tax returns and related information. IRS collects taxes both at its own facilities as well as at lockbox banks that operate under contract with the Department of the Treasury’s (Treasury) Financial Management Service. IRS acts as custodian for (1) the tax payments it receives until they are deposited in the General Fund of the U.S. Treasury and (2) the tax returns and related information it receives until they are either sent to the Federal Records Center or destroyed. IRS is also charged with controlling many other assets, such as computers and other equipment, but IRS’s legal responsibility to safeguard tax returns and the confidential information taxpayers provide on tax returns makes the effectiveness of its internal controls with respect to physical security essential. While effective physical safeguards over receipts should exist throughout the year, such safeguards are especially important during the peak tax filing season. Each year during the weeks preceding and shortly after April 15, an IRS service center campus (SCC) or lockbox bank may receive and process daily over 100,000 pieces of mail containing returns, receipts, or both. The dollar value of receipts each SCC and lockbox bank processes increases to hundreds of millions of dollars a day during the April 15 time frame. The following 11 recommendations are designed to improve IRS’s physical controls over vulnerable assets. We consider all of them to be correctable on a short-term basis. (See table 2.) Internal control standard: Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. No one individual should control all key aspects of a transaction or event. IRS employees process trillions of dollars of tax receipts each year, of which hundreds of billions are received in the form of cash or checks, and for processing hundreds of billions of dollars in refunds to taxpayers. 
Consequently, it is critical that IRS maintain appropriate separation of duties to allow for adequate oversight of staff and protection of these vulnerable resources so that no single individual is in a position to cause an error or irregularity, potentially convert an asset to personal use, and then conceal it. For example, when an IRS field office or lockbox bank receives taxpayer receipts and returns, it is responsible for depositing the cash and checks in a depository institution and forwarding the related information received to an SCC for further processing. In order to adequately safeguard receipts from theft, the person responsible for recording the information from the taxpayer receipts on a voucher should be different from the individual who prepares those receipts for transmittal to the SCC for further processing. Also, for procurement of goods and services, the person who places an order for goods and services should be different from the person who receives the goods and services. Such separation of duties will help to prevent the occurrence of fraud, theft of IRS assets, or both. Implementing the following three recommendations would help IRS improve its separation of duties, which will in turn strengthen its controls over tax receipts and refunds and procurement activities. All are short-term in nature. (See table 3.) Internal control standard: A variety of control activities are used in information processing. Examples include edit checks of data entered, accounting for transactions in numerical sequences, and comparing file totals with control totals. There are two broad groupings of information systems control—general control (for hardware such as mainframe, network, end-user environments) and application control (processing of data within the application software). General controls include entitywide security program planning, management, and backup recovery procedures and contingency and disaster planning. Application controls are designed to help ensure completeness, accuracy, authorization, and validity of all transactions during application processing. IRS relies extensively on computerized systems to support its financial and mission-related operations. To efficiently fulfill its tax processing responsibilities, it depends on interconnected networks of computer systems to perform various functions, such as collecting and storing taxpayer data, processing tax returns, calculating interest and penalties, generating refunds, and providing customer service. As part of our annual audits of IRS’s financial statements, we assess the effectiveness of IRS’s information security controls over key financial systems, data, and interconnected networks at IRS’s critical data processing facilities that support the processing, storage, and transmission of sensitive financial and taxpayer data. From that effort over the years, we have identified information security control weaknesses that impair IRS’s ability to ensure the confidentiality, integrity, and availability of its sensitive financial and taxpayer data. As of January 2009, there were 74 open recommendations from our information security work designed to improve IRS’s information security controls. As discussed previously, recommendations resulting from our information security work are reported separately and are not included in this report primarily because of the sensitive nature of these issues.
However, the following short-term recommendation is related to systems limitations and IRS’s need to enhance its computer programs. (See table 4.) Internal control standard: Access to resources and records should be limited to authorized individuals, and accountability for their custody and use should be assigned and maintained. Periodic comparison of resources with the recorded accountability should be made to help reduce the risk of errors, fraud, misuse, or unauthorized alteration. Because IRS deals with a large volume of cash and checks, it is imperative that it maintain strong controls to appropriately restrict access to those assets, the records that track those assets, and sensitive taxpayer information. Although IRS has a number of both physical and information systems controls in place, some of the issues we have identified in our financial audits over the years pertain to ensuring that individuals with direct access to cash and checks are appropriately vetted before being granted access to taxpayer receipts and information, and to ensuring that IRS maintains effective security controls over such access. The following five short-term recommendations were intended to help IRS improve its access restrictions to assets and records. (See table 5.) IRS has a number of internal control issues that relate to recording transactions, documenting events, and tracking the processing of taxpayer receipts or information. We have grouped three control activities together that relate to proper recording and documenting of transactions: (1) appropriate documentation of transactions and internal controls, (2) accurate and timely recording of transactions and events, and (3) proper execution of transactions and events. Internal control standard: Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. IRS collects and processes trillions of dollars in taxpayer receipts annually both at its own facilities and at lockbox banks under contract to process taxpayer receipts for the federal government. Therefore, it is important that IRS maintain effective controls to ensure that all documents and records are properly and timely recorded, managed, and maintained both at its facilities and at the lockbox banks. IRS must adequately document and disseminate its procedures to ensure that they are available to IRS employees. IRS must also document its management reviews of controls, such as those regarding refunds and returned checks, credit card purchases, and reviews of taxpayer assistance centers (TAC). Finally, to ensure future availability of adequate documentation, IRS must ensure that its systems, particularly those now being developed and implemented, have appropriate capability to trace transactions. Resolving the following nine recommendations would assist IRS in improving its documentation of transactions and internal control procedures. Eight of these recommendations are short-term, and one is long-term. (See table 6.) Internal control standard: Transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions.
This applies to the entire process or life cycle of a transaction or event from the initiation and authorization through its final classification in summary records. In addition, control activities help to ensure that all transactions are completely and accurately recorded. IRS maintains taxpayer records for tens of millions of taxpayers in addition to maintaining its own financial records. To carry out this responsibility, IRS often has to rely on outdated computer systems or manual work-arounds. Unfortunately, some of IRS’s recordkeeping difficulties we have reported on over the years will not be addressed until it can replace its aging systems, an effort that is long-term and partly depends on future funding. Implementation of the following 12 recommendations would strengthen IRS’s recordkeeping abilities. (See table 7.) Seven of these recommendations are short-term, and five are long-term recommendations regarding requirements for new systems for maintaining taxpayer records. Several of the recommendations listed deal with financial reporting processes, such as maintaining subsidiary records, recording budgetary transactions, and tracking program costs. Some of the issues that gave rise to several of our recommendations directly affect taxpayers, such as those involving duplicate assessments, errors in calculating and reporting manual interest, errors in calculating penalties, and recovery of trust fund penalty assessments. Seven of these recommendations have remained open at least 5 years and one over 10 years, reflecting the complex nature of the underlying systems issues that must be resolved to fully address some of these issues. Internal control standard: Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of ensuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. Authorizations should be clearly communicated to managers and employees. Each year, IRS pays out hundreds of billions of dollars in tax refunds, some of which are distributed to taxpayers manually. IRS requires that all manual refunds be approved by designated officials. However, weaknesses in controls for authorizing such refunds expose the federal government to losses because of the issuance of improper refunds. Likewise, the failure to ensure that employees obtain appropriate authorizations to use purchase cards or initiate travel leaves the government open to fraud, waste, or abuse. Addressing the following three short-term recommendations would improve IRS’s controls over its manual refund, travel, and purchase card transactions. (See table 8.) All personnel within IRS have an important role in establishing and maintaining effective internal controls, but IRS’s managers have additional review and oversight responsibilities. Management must set the objectives, put control activities in place, and monitor and evaluate controls to ensure that they are followed. Without adequate monitoring by managers, there is a risk that internal control activities may not be carried out effectively and in a timely manner. We have grouped three control activities related to effective management review and oversight: (1) reviews by management at the functional or activity level, (2) establishment and review of performance measures and indicators, and (3) management of human capital.
Although we also include the control activity "top-level reviews of actual performance" in this grouping, we do not have any open recommendations to IRS related to this internal control activity. Internal control standard: Managers need to compare actual performance to planned or expected results throughout the organization and analyze significant differences. IRS employs over 100,000 full-time and seasonal employees. In addition, as discussed earlier, Treasury’s Financial Management Service contracts with banks to process tens of thousands of individual receipts, totaling hundreds of billions of dollars. Management oversight of operations is important at any organization, but is imperative at IRS given its mission. Implementing the following 11 short-term and 2 long-term recommendations would improve IRS’s management oversight of courier services, contractor facilities, penalty calculations, timely release of liens, issuance of manual refunds, and use of appropriated funds. (See table 9.) These recommendations were made because an internal control activity either did not exist or the existing control was not being adequately or consistently applied. Internal control standard: Activities need to be established to monitor performance measures and indicators. These controls could call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. Controls should also be aimed at validating the propriety and integrity of both organizational and individual performance measures and indicators. IRS’s operations include a vast array of activities encompassing educating taxpayers, processing taxpayer receipts and data, disbursing hundreds of billions of dollars in refunds to millions of taxpayers, maintaining extensive information on tens of millions of taxpayers, and seeking collection from individuals and businesses that fail to comply with the nation’s tax laws. Within its compliance function, IRS has numerous activities, including identifying businesses and individuals that underreport income, collecting from taxpayers who do not pay taxes, and collecting from those receiving refunds for which they are not eligible. Although IRS has over 100,000 employees at its peak, it still faces resource constraints in attempting to fulfill its duties. It is vitally important for IRS to have sound performance measures to assist it in assessing its performance and targeting its resources to maximize the government’s return on investment. However, in past audits we have reported that IRS did not capture costs at the program or activity level to assist in developing cost-based performance measures for its various programs and activities. As a result, IRS is unable to measure the costs and benefits of its various collection and enforcement efforts to best target its available resources. The following short-term recommendation and two long-term recommendations are designed to assist IRS in (1) evaluating its operations, (2) determining which activities are the most beneficial, and (3) establishing a good system for oversight. (See table 10.) These recommendations call for IRS to measure, track, and evaluate the costs, benefits, or outcomes of its operations—particularly with regard to identifying its most cost-effective tax collection activities. Internal control standard: Effective management of an organization’s workforce—its human capital—is essential to achieving results and an important part of internal control.
Management should view human capital as an asset rather than a cost. Only when the right personnel for the job are on board and are provided the right training, tools, structure, incentives, and responsibilities is operational success possible. Management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce that has the required skills that match those necessary to achieve organizational goals. Training should be aimed at developing and retaining employee skill levels to meet changing organizational needs. Qualified and continuous supervision should be provided to ensure that internal control objectives are achieved. Performance evaluation and feedback, supplemented by an effective reward system, should be designed to help employees understand the connection between their performance and the organization’s success. As a part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. IRS’s operations cover a wide range of technical competencies with specific expertise needed in tax-related matters; financial management; and systems design, development, and maintenance. Because IRS has tens of thousands of employees spread throughout the country, it is imperative that management keeps its guidance up-to-date and its staff properly trained. Putting the following two short-term recommendations into effect would assist IRS in its management of human capital. (See table 11.) For several years, we have reported material weaknesses, significant deficiencies, noncompliance with laws and regulations, and other control issues in our annual financial statement audits and related management reports. To assist IRS in addressing those control issues, appendix II provides summary information regarding the primary issue to which each open recommendation is related. To compile this summary, we analyzed the nature of the open recommendations to relate them to the material weaknesses, significant deficiency, compliance issue, and other control issues not associated with a material weakness or significant deficiency identified as part of our financial statement audit. Increased budgetary pressures and an increased public awareness of the importance of internal control require IRS to carry out its mission more efficiently and more effectively while protecting taxpayers’ information. Sound financial management and effective internal controls are essential if IRS is to efficiently and effectively achieve its goals. IRS has made substantial progress in improving its financial management since its first financial audit, as evidenced by unqualified audit opinions on its financial statements for the past 9 years, resolution of several material internal control weaknesses and significant deficiencies, and actions taken resulting in the closure of hundreds of financial management recommendations. This progress has been the result of hard work by many individuals throughout IRS and sustained commitment of IRS leadership. Nonetheless, more needs to be done to fully address the agency’s continuing financial management challenges. Further efforts are needed to address the internal control deficiencies that continue to exist. 
Effective implementation of the recommendations we have made and continue to make through our financial audits and related work could greatly assist IRS in improving its internal controls and achieving sound financial management. While we recognize that some actions—primarily those related to modernizing automated systems—will take a number of years to resolve, most of the open recommendations can be addressed in the short term. In commenting on a draft of this report, IRS expressed its appreciation for our acknowledgment of the agency’s progress in addressing its financial management challenges as evidenced by our closure of 35 open financial management recommendations from prior GAO reports. IRS also commented that it is committed to implementing appropriate improvements to ensure that it maintains sound financial management practices. We will review the effectiveness of further corrective actions IRS has taken or will take to address all open recommendations as part of our audit of IRS’s fiscal year 2009 financial statements. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Homeland Security and Governmental Affairs; and Subcommittee on Taxation, IRS Oversight and Long-Term Growth, Senate Committee on Finance. We are also sending copies to the Chairmen and Ranking Members of the House Committee on Appropriations; House Committee on Ways and Means; the Chairman and Vice Chairman of the Joint Committee on Taxation; the Secretary of the Treasury; the Director of OMB; the Chairman of the IRS Oversight Board; and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-3406 or sebastians@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix presents a list of (1) the 81 recommendations that we had not previously reported as closed, (2) Internal Revenue Service (IRS) reported corrective actions taken or planned as of April 2009, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively addressed. It also includes recommendations based on our fiscal year 2008 financial statement audit. The appendix lists the recommendations by the date on which the recommendation was made and by report number. Financial Management: Important IRS Revenue Information Is Unavailable or Unreliable (GAO/AIMD-94-22, Dec. 21, 1993) Open. The Deputy Commissioner, Services and Enforcement, issued a memorandum in July 2008 emphasizing the need to use training modules and on-site assistance from the Servicewide Interest Program to ensure accurate calculations. Interest-related training was provided to personnel by January 2009, and additional guidance will be issued to Collection field personnel. SB/SE updated Internal Revenue Manual provisions and made upgrades to the commercial software program utilized to compute manual interest. SB/SE is developing a random sampling process to be completed by October 2009 to measure the accuracy of interest computations. Open. During our fiscal year 2006 audit, we tested a statistical sample of manual interest transactions and estimated that 18 percent of IRS’s manual interest population contains errors.
We concluded that IRS controls over this area were still ineffective. The ineffectiveness of these controls contributes to errors in taxpayer records, which is a major component of the material weakness in IRS’s management of unpaid assessments. While IRS has undertaken several actions to strengthen controls over this area, such as updating guidance and providing training related to manual interest calculations, it has yet to develop a sampling methodology to monitor the accuracy of its manual interest computations and assess the effectiveness of its corrective actions. Consequently, we did not test IRS controls in this area as part of our fiscal year 2008 audit, as both we and IRS believed that the actions taken by IRS thus far would not improve the accuracy of the manual interest calculations. We will continue to monitor IRS’s actions to address this recommendation during future audits. Internal Revenue Service: Immediate and Long-Term Actions Needed to Improve Financial Management (GAO/AIMD-99-16, Oct. 30, 1998) Open. Small Business/Self-Employed (SB/SE) continues to request programming changes to increase Automated Trust Fund Recovery systemic processing to reduce the number of accounts requiring manual intervention. IRS reviews Trust Fund Recovery Penalty (TFRP) transactions to ensure accurate and timely recording, including Performance Assurance System reviews of a daily random selection of closed cases, management reviews of a random selection of both closed and open casework, and Headquarters Operational Reviews. In addition to the above reviews, Campus Compliance Services is exploring the development and implementation of a statistically valid sampling plan to monitor the accuracy and timeliness of the cross-referencing of payments and credits to TFRP accounts. The frequency and process for performing these internal reviews will be considered during development. Open. IRS has made significant progress in this area over the past several years. For example, IRS established procedures to more clearly link each penalty assessment against a responsible corporate officer to a specific tax period of the business account and began phasing in the use of the Automated Trust Fund Recovery system intended to properly cross-reference payments received. IRS also enhanced the Automated Trust Fund Recovery system in fiscal year 2008 to begin automatically reducing the amounts owed on all related accounts when a payment is received from one related party. However, the system is currently unable to process all payments related to such cases. Consequently, IRS must continue to manually reduce the account balance on related accounts for some payments. Thus, the opportunity for errors and omissions continues to exist. Our most recent test indicates that IRS’s controls in this area are still not effective in ensuring that all TFRP payments are correctly credited to all related parties in a timely manner. We will continue to monitor IRS’s actions to address this recommendation during future audits. Internal Revenue Service: Immediate and Long-Term Actions Needed to Improve Financial Management (GAO/AIMD-99-16, Oct. 30, 1998) Open. IRS is developing the Custodial Detailed Data Base (CDDB), which it believes will ultimately address many of the outstanding financial management recommendations. IRS implemented the first phase of the CDDB during fiscal year 2006.
In fiscal year 2008, IRS enhanced CDDB to record unpaid assessments, including accrued penalties and interest, in the general ledger by the various financial reporting categories. The Chief Financial Officer’s (CFO) office continues to ensure the accuracy of the TFRP cross-referencing using weekly CDDB reports. The CFO provides SB/SE with identified errors so SB/SE can correct the taxpayer’s account and CDDB can correctly classify the transactions. CDDB is now classifying approximately 80 percent of the TFRP inventory, in which TFRP assessments are appropriately tracked for all liable taxpayers but counted only once for reporting purposes. Open. During fiscal year 2008, IRS enhanced CDDB to begin regularly recording unpaid assessments, including accrued penalties and interest, from its master files to its general ledger by the various financial reporting categories (taxes receivable, compliance assessments, and write-offs). These enhancements established CDDB’s capability to function as a subsidiary ledger for unpaid tax debt. However, due to inherent limitations in CDDB programs for classifying unpaid assessments into the correct financial reporting categories and inaccuracies in taxpayer records, IRS is still unable to use CDDB as its subsidiary ledger for external reporting of its unpaid assessments, and must continue to use a labor-intensive, manual compensating process to estimate the year-end balances of the various categories of unpaid tax assessments to avoid material misstatements to its financial statements. Specifically, IRS had to make over $28 billion in adjustments to the fiscal year-end 2008 gross taxes receivable balance produced by CDDB as part of its manual estimation process for financial reporting. Full operational capability of CDDB depends on the successful implementation of future system releases planned through 2009 and the ability of these releases to address current limitations in accurately classifying all of IRS’s unpaid assessments. The lack of a fully functioning subsidiary ledger capable of producing accurate, useful, and timely information with which to manage and report externally is a major component of the material weakness in IRS’s management of unpaid assessments. We will continue to monitor IRS’s development of CDDB during our fiscal year 2009 and future audits. Internal Revenue Service: Custodial Financial Management Weaknesses (GAO/AIMD-99-193, Aug. 4, 1999) Open. SB/SE completed the Control Point Monitor (CPM) pilot in May 2008 and prepared a CPM manual. The CPM serves as a conduit from the Area Office to the Campus for assessment. The CPM manual establishes specific timeframes in which the CPM must process/complete required TFRP actions. Implementation of the manual is currently being negotiated with the National Treasury Employees Union to address impact and implementation issues resulting from the changes to the CPM process. SB/SE has created a suite of managerial reports to provide oversight of the TFRP process. SB/SE continues to submit Work Requests and Information Technology Assets Management System tickets to enhance the assessment process to provide greater efficiencies in the processing and posting of TFRP assessments. Open. During our fiscal year 2008 audit, we continued to identify long delays in processing and posting TFRP assessments.
Although IRS has developed a draft of the CPM manual to provide better guidance for the timely processing of TFRP assessments, the manual is currently undergoing internal reviews and awaiting final approval for official use. We will continue to monitor IRS’s actions to address this recommendation during our fiscal year 2009 audit. Internal Revenue Service: Custodial Financial Management Weaknesses (GAO/AIMD-99-193, Aug. 4, 1999) Closed. All IRS field offices continue to provide training and to perform reviews to strengthen controls over remittances. SB/SE conducts reviews with each territory manager. Headquarters staff ensures Territory managers are enforcing the requirement for group managers to randomly sample remittance packages for review. Each area director receives a report with any findings and recommendations for implementation. All Tax Exempt and Government Entities (TE/GE) Division Directors continue to perform operational reviews to ensure their subordinate groups are properly processing all checks. TE/GE provides training and notices on these procedures. During fiscal year 2008, all managers certified in their 2008 Annual Assurance Review that vulnerable assets, such as cash, securities, and equipment, are physically secured and access to them is controlled. TE/GE will also implement by September 2009 requirements to verify that control procedures are in place during operational reviews, and include information on proper check handling procedures during training for new hires and Revenue Agents. Large and Mid-sized Business (LMSB) has incorporated instructions on the use of the U.S. Treasury Stamp in training given to new hires as part of their on the job training and periodically in group meetings. The use of the U.S. Treasury Stamp has also been incorporated into the Internal Revenue Manual (IRM) and is part of IRS’s standard operating procedure used for processing payments. Open. The objective of this recommendation was to create a mechanism for IRS to monitor the status of pervasive weaknesses in controls over taxpayer receipts and information that we have found at IRS’s field offices over the years. The purpose of this monitoring is to facilitate the timely detection and effective resolution of issues and to verify the effectiveness of new and existing policies and procedures on an ongoing basis. During our fiscal year 2008 audit, we identified instances at (1) four SB/SE units where there was no segregation of duties between preparation of the payment posting vouchers and subsequent preparation of the related document transmittals and transmittal package; (2) four SB/SE units where a document transmittal form was not prepared when transmitting multiple Daily Report of Collection Activity forms to the Submission Processing (SP) Center; (3) three SB/SE units where there was no system in place to monitor acknowledged/ unacknowledged transmittals to the submission processing center; (4) five SB/SE units where there was no evidence of managerial review of document transmittals; and (5) all 10 field offices where there were no procedures in place to verify that names on the duress alarm contact list were current and that appropriate first responders were contacted in the event of an emergency. Had IRS periodically reviewed the effectiveness of these controls in field offices as we recommended, these issues might have been detected and corrected. We will continue to assess IRS’s actions during our fiscal year 2009 audit. 
Internal Revenue Service: Custodial Financial Management Weaknesses (GAO/AIMD-99-193, Aug. 4, 1999) Closed. IRS augmented its Modernization & Information Technology Services staff, and cross-trained employees to increase the appropriate depth of experience to perform the master file extractions and other ad hoc procedures for financial reporting purposes. Modernization & Information Technology Services reduced the Assembler Language Code programmer shortages and increased contractor support by 17 percent. IRS also continues to expand the use of CDDB during the annual audit, and the addition of trained Modernization & Information Technology Services and contractor staff ensures development of reliable balances for financial reporting purposes on a continuing basis. Closed. IRS hired additional staff in the Custodial Accounting Branch, which has responsibility for the custodial financial statements. Also, employees were cross-trained and current systems expanded to better support the financial reporting of revenue, refunds, and unpaid assessments. In addition, IRS reduced its shortage of assembly language programmers by holding training classes for employees. Internal Revenue Service: Serious Weaknesses Impact Ability to Report on and Manage Operations (GAO/AIMD-99-196, Aug. 9, 1999) Closed. IRS developed a cost accounting policy that provides guidance on managerial cost concepts for the agency, established an Office of Cost Accounting within the CFO, and completed several cost pilot projects to demonstrate the viability of its full cost methodology at the program level. Performance measures were enhanced, and the return on investment for the Earned Income Tax Credit program was completed with full cost information. As demonstrated by the cost pilots, IRS has the capability to use the cost data within the Integrated Financial System (IFS) and the associated workload and production data from IFS and its business unit systems to calculate the full costs of its products, services, and programs. The IFS contains 4 years of fully allocated cost data. Closed. IRS has taken several actions to address this recommendation and improve its cost accounting capability. For example, in fiscal year 2007, IRS developed and issued its first cost accounting policy to provide guidance on the concepts and requirements for managerial cost accounting within IRS. In addition, in fiscal year 2008, IRS (1) established an Office of Cost Accounting within its CFO, (2) completed several cost pilots to demonstrate its capability to use the cost data within IFS and the associated workload and production data from its business unit systems to calculate the full costs of its products, services, and programs, and (3) completed development of the return on investment for the Earned Income Tax Credit program that includes full cost information. However, IRS has not extended the cost pilot methodology to develop full cost information on the full range of IRS’s programs. Nevertheless, in order to provide recommendations more closely aligned with the current status, we have agreed with IRS to close this recommendation based on IRS’s progress to date and have reported the remaining issues, along with related recommendations for corrective action, in our June 2009 management report. See GAO-09-513R and recommendations 09-14 and 09-15 in this report. Internal Revenue Service: Serious Weaknesses Impact Ability to Report on and Manage Operations (GAO/AIMD-99-196, Aug. 9, 1999) Open. 
IRS has established strong internal controls and procedures to enhance its ability to account for property and equipment in IFS. IRS is looking at enhancing its asset- tracking system to more closely reconcile physical asset records to the financial records. This would enable targeted reconciliations to occur. Open. Our fiscal year 2008 property and equipment valuation testing revealed problems with the linking of the purchase of assets recorded in the general ledger system to the P&E inventory system, which indicates that IRS’s detailed P&E records do not fully reconcile to the financial records. We will continue to monitor IRS’s strategy in addressing these financial management systems issues. Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Closed. IRS is using a workload delivery model in the development and monitoring of an Enterprise Collection Plan that aligns performance measures across all collection organizations to match results against the corporate measures. Results of the model are used to project inventory receipt patterns by function and category of work, allowing for improved management of corporate collection inventory and resource allocation. New models were implemented in the Inventory Delivery System on January 12, 2009. The use of a rules engine has also been incorporated in the Inventory Delivery System to systemically make changes to case routing based on modeling predictions and rules. Collection Case Selection continues to provide ad hoc case assignments for testing case routing. Cases are selected based on a set of criteria and routed to different treatments to determine where like cases should be routed in the future. The CFO also included return on investment calculations for its collection initiatives in the 2007, 2008, and 2009 Budget Submissions. Closed. IRS has taken significant steps to address this recommendation. IRS built sophisticated computer modeling and risk assessment techniques with increased predictive power to improve IRS’s ability to route unpaid tax cases to the appropriate enforcement resource. IRS estimated that those changes have resulted in several billion dollars in additional tax collections. IRS has also established governance councils for IRS’s examination and collection activities. Finally, IRS has completed several actions to improve its ability to develop full cost information for its enforcement programs. Although IRS’s actions taken to date are important, they have not fully addressed the objectives of our recommendation, such as completing the development of full cost methodologies for IRS’s programs and activities. In order to provide recommendations more closely aligned with the current status, we have agreed with IRS to close this recommendation based on IRS’s progress to date and have reported the remaining issues, along with related recommendations for corrective action, in our June 2009 management report. See GAO-09-513R and recommendations 09-14, 09-15, and 09-16 in this report. Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Open. IRS continues to address issues that cause late lien releases through an internal Lien Release Action Plan and by conducting reviews as a part of its A-123 controls assessment process. 
Based on the annual sample of lien releases, seven errors (liens released in an untimely manner) in 59 observations yield a net most likely error rate of 12 percent and, at a greater than 95 percent confidence level, an upper error limit that could be as high as 21 percent. IRS added corrective actions to address issues found during the review. SB/SE is re-evaluating the fiscal years 2009 and 2010 overall lien release error rate goals and will submit changes to the Lien Release Action Plan. Open. IRS has taken a number of actions over the past several years to address this issue. However, during our fiscal year 2008 audit, we continued to find that IRS did not always release liens in a timely manner. In IRS’s own Office of Management and Budget (OMB) A-123 testing of lien releases, it identified 7 instances out of 59 cases tested in which it did not release the applicable federal tax lien within the statutory 30-day period. The time between the satisfaction of the liability and release of the lien ranged from 33 days to more than 494 days. Based on these results, IRS estimated that for about 12 percent of the unpaid tax assessment cases resolved in fiscal year 2008 in which it had filed a tax lien, it did not release the lien within 30 days of the resolution of the case. IRS is 95 percent confident that the percentage of cases in which the lien was not released within 30 days does not exceed 21 percent. IRS’s ineffective controls over this area result in noncompliance with Internal Revenue Code Section 6325, which requires IRS to release its tax liens within 30 days of the date the related tax liability is fully satisfied. We will continue to monitor IRS’s actions to address this recommendation in future audits.
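The sampling projection reported in the preceding entry can be approximated with a standard attribute sampling calculation. The brief Python sketch below is illustrative only; it assumes a simple random sample and uses the exact (Clopper-Pearson) method from the SciPy library to compute a one-sided upper confidence limit, which may differ from the estimation approach actually used in IRS’s A-123 review:

    # Illustrative attribute-sampling projection: 7 untimely lien releases
    # observed in a sample of 59 cases (assumes a simple random sample).
    from scipy.stats import beta

    errors, sample_size = 7, 59
    point_estimate = errors / sample_size  # about 0.12 (12 percent)
    # Exact (Clopper-Pearson) one-sided 95 percent upper confidence limit.
    upper_limit = beta.ppf(0.95, errors + 1, sample_size - errors)  # about 0.21 (21 percent)
    print(f"Most likely error rate: {point_estimate:.0%}; upper limit: {upper_limit:.0%}")

Under these assumptions, the point estimate rounds to 12 percent and the upper limit to about 21 percent, consistent with the figures reported above.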
Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Closed. IRS has taken steps to examine Earned Income Tax Credit claims, and to address the collection of Automated Underreporter and Combined Annual Wage Reporting as part of the workload delivery model. IRS updated the Earned Income Tax Credit error estimates and identified root causes of noncompliance. Additionally, in fiscal year 2008, IRS calculated a full-cost return on investment for Earned Income Tax Credit and completed an Automated Underreporter cost accounting pilot using IFS cost data. This pilot calculated the return on investment of Automated Underreporter case closures, which represented those cases that were closed after a notice was sent to the taxpayer. IRS established Exam and Collection governance bodies to improve collection efforts and implemented a modeling tool to better target collection efforts. Closed. IRS has taken significant steps to address this recommendation, including those listed in the "status per IRS" column. IRS’s cost pilot projects completed in fiscal year 2008 demonstrated IRS’s ability to determine the full cost of its programs. Although IRS’s actions taken to date are important, they have not fully addressed the objectives of our recommendation. For example, IRS’s cost pilot project methodology is time-consuming and requires intensive manual intervention, and IRS has not completed the task of developing methodologies for its programs and activities. In order to provide recommendations more closely aligned with the current status, we have agreed with IRS to close this recommendation based on IRS’s progress to date and have reported the remaining issues, along with related recommendations for corrective action, in our June 2009 management report. See GAO-09-513R and recommendations 09-14, 09-15, and 09-16 in this report. Internal Revenue Service: Recommendations to Improve Financial and Operational Management (GAO-01-42, Nov. 17, 2000) Open. IRS will continue to pursue alternative approaches to enhance its ability to account for leasehold improvements. Open. We will continue to monitor IRS’s development of alternative approaches to enhance its ability to account for P&E assets. Management Letter: Improvements Needed in IRS’s Accounting Procedures and Internal Controls (GAO-01-880R, July 30, 2001) Closed. The IRS is tracking and reporting the actual costs associated with reimbursable agreements through various business unit workload management tracking systems and IFS. The IRS Reimbursable Operating Guidelines established the procedures and processes for capturing direct and indirect costs associated with reimbursable agreements. Open. IRS has improved its methodology for allocating its costs of operations at the business unit level. However, further actions are needed for it to accumulate and report actual costs associated with specific reimbursable projects. We confirmed that IRS’s workload management tracking systems now capture details of time worked; however, these systems do not capture the full costs associated with specific reimbursable projects and do not interface with the general ledger (IFS) to capture all costs. We also noted that the fiscal year 2008 Reimbursable Operating Guidelines provide detail on determining the costs that should be included in the cost projection for a reimbursable agreement. However, the guidelines do not describe a process for determining the total actual costs incurred at the end of the agreement term, determining the difference between actuals and the original cost estimate, and refunding or billing for the difference. We will continue to monitor IRS’s efforts to fully implement its cost accounting system and, once it has been fully implemented, evaluate the effectiveness of IRS’s procedures for developing cost information for its reimbursable agreements. Internal Revenue Service: Progress Made, but Further Actions Needed to Improve Financial Management (GAO-02-35, Oct. 19, 2001) Closed. Employees itemize how their time is spent on specific projects/tasks in various workload management systems, and this information is utilized in the development of cost information, which is used in resource allocation decisions. Closed. IRS has taken action to address our recommendation. We confirmed that IRS currently uses 24 separate functional tracking (workload management) systems for various categories of employees to itemize and track their time charges. Collectively, these systems now capture details of time worked by project for all employees. Internal Revenue Service: Progress Made, but Further Actions Needed to Improve Financial Management (GAO-02-35, Oct. 19, 2001) Closed. IFS allocates nonpersonnel costs to programs monthly and makes available cost data to managers, including the full cost of operating business units, and details on the allocated costs (i.e., building rent, depreciation, support costs, etc.). All business units can run cost reports as needed. Closed.
IRS has taken actions to address this recommendation. We confirmed that IRS has improved its cost accounting capabilities by developing and implementing a methodology for allocating its costs of operations to its business units and to the cost categories on the Statement of Net Cost on a monthly basis. However, the cost categories on the Statement of Net Cost are at a higher level than specific programs and activities. Although IRS has developed full cost information on several of its programs, it has not developed such information on the full range of its programs. However, in order to provide recommendations more closely aligned with the current status, we have agreed with IRS to close this recommendation based on IRS’s progress to date and have reported the remaining issues, along with related recommendations for corrective action, in our June 2009 management report. See GAO-09-513R and recommendations 09-14 and 09-15 in this report. Management Report: Improvements Needed in IRS’s Accounting Procedures and Internal Controls (GAO-02-746R, July 18, 2002) Closed. Wage and Investment (W&I) has taken a number of actions to address this recommendation. Field Assistance emphasizes the requirement for including a document transmittal form listing the Daily Report of Collection Activity forms in transmittal packages, and ensuring that they are reconciled and reviewed. Territory managers review and discuss monthly reports with the group manager. Results of the reviews are forwarded to the area director. Operational reviews at all levels are conducted annually to ensure that field offices comply with the requirement to prepare Form 3210, which lists all Forms 795 being shipped to the SP Center. W&I completed its annual Filing Season Readiness Workshop for all taxpayer assistance center (TAC) managers, which addressed remittance and data security. New managers will attend the "Managing a TAC" course during fiscal year 2009, which provides ongoing training on payment processing and managerial reviews. Operational reviews completed for fiscal year 2008 revealed that the TAC managers are validating employee profiles to ensure restricted command codes were used according to guidelines. Open. While IRS has cited a number of actions it is taking to ensure that existing receipt control policy requirements for segregation of duties are followed, one of the main mechanisms it uses to enforce this policy is training. IRS conducts an annual Filing Season Readiness Workshop for TAC managers and provides training for new TAC managers on collecting taxpayer receipts and conducting managerial reviews. During our review of the handouts provided for the annual readiness workshop, we noted several sections that discussed IRS’s policies related to segregation of duties. In contrast, we found that the "Managing a TAC" course for new TAC managers did not specifically address those policies. According to IRS officials, the Filing Season Readiness Workshop is conducted annually during the first quarter of the fiscal year. Consequently, new TAC managers assigned after the first quarter of the fiscal year will not receive the same level of training regarding segregation of duties. In addition, during our recent visits to selected TACs in March 2009, we found instances where segregation of duties related to accepting and recording walk-in payments was not implemented. Management Report: Improvements Needed in IRS’s Accounting Procedures and Internal Controls (GAO-02-746R, July 18, 2002) Open.
Agency-Wide Shared Services (AWSS) Personnel Security has taken several short- and long-term measures to reduce the incidence of SETS errors. The short-term measures include (1) publishing instructions on the Personnel Security intranet site for SETS users to follow while reviewing bi-weekly SETS reports, (2) issuing bi-weekly emails to all SETS users with the most current reports to be used in identifying and reporting errors to NFC, and (3) compiling weekly extracts of all enter-on-duty dates where there were no fingerprint results or where the results were after the enter-on-duty date and sending those to each employment office for updates and feedback. The long-term measures included requesting revisions to SETS. Open. During our fiscal year 2008 audit, we continued to identify technical limitations and weaknesses with the SETS database. In addition, we found 248 instances where SETS was not updated correctly or in a timely manner for new-hire employees, resulting in errors in the database. We will continue to assess IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls and Accounting Procedures (GAO-04-553R, April 26, 2004) Closed. IRS performs monthly unannounced testing of guard response to alarms, and test results are reviewed by the Security Programs Office to enforce and ensure compliance. Test results on guard response to alarms are consistently 98 percent or higher, indicating substantial compliance with IRS guidelines. Test procedures were formalized in IRM 10.2.14, Methods of Providing Protection, issued on October 1, 2008. In addition, the Guard Program Specialists from the Security Programs Office conduct unannounced alarm tests whenever they visit a site to do a Quality Assurance check of security posture and programs. Physical Security and Emergency Preparedness (PSEP) continues to utilize the Audit Management Checklist as a repeatable process where service center campuses (SCCs) quarterly validate the performance and documentation of monthly unannounced alarm testing. Open. During our fiscal year 2008 audit, we identified instances at two of the three SCCs we visited in which security guards did not respond to alarms within the time limit outlined in the IRM. In addition, at another SCC we visited, we identified an instance in which security guards did not fully investigate the source of an alarm. We will continue to evaluate IRS’s enforcement of these policies and procedures during our fiscal year 2009 audit. Management Report: Review of Controls over Safeguarding Taxpayer Receipts and Information at the Brookhaven Service Center Campus (GAO-05-319R, Mar 10, 2005) Closed. W&I Accounts Management continues to enforce restricted area access through periodic training. Candling procedures are reinforced through monthly internal control reviews of the process. In January 2008, Accounts Management increased management oversight of internal controls by implementing formal monthly internal control reviews at the former Submission Processing rampdown sites. A revised review template was developed to evaluate the quality of IRS’s internal control performance, identify potential deficiencies, and allow corrective actions to be taken immediately. The monthly results from each field director are forwarded to the Director, Accounts Management, and GAO. AWSS provides training when notified by W&I that a new monitor has been selected or when an existing monitor requires refresher training.
Each campus badge office provides training to the restricted area door monitors as it pertains to the control, issuance, and inventory of the non-photo badges that are assigned at each site. Closed. Accounts Management implemented a monthly review to monitor internal controls over taxpayer receipts and information at campuses selected for reductions in their submission processing functions. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Closed. The Program, Planning, and Policy Office finalized and issued IRM 10.2.5 Identification Card on September 30, 2008. Section 10.2.5.6.2(2)a specifies that red photo ID cards may be issued to IRS contract employees who have a daily need on a continuing basis to be on site at a facility over a period of time, and who have been granted interim or final staff-like access to a facility/work area with sensitive systems or information. Before a red photo ID card may be issued, the contracting officer’s technical representative must provide the Physical Security Office with a copy of the Personnel Security & Investigation background investigation letter approving interim or final staff-like access. PSEP continues to utilize the Audit Management Checklist as a repeatable process where SCCs quarterly validate the filing of contractor background investigation documentation. Closed. We verified that IRS finalized and issued IRM 10.2.5 and continues to utilize the Audit Management Checklist to ensure that proper documentation is received and on file for contractors before they are granted staff-like access to service centers. During our fiscal year 2008 audit, we found no exceptions relating to SCCs granting contractors staff-like access before appropriate background investigations were completed. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Closed. The Program, Planning, and Policy Office finalized and issued IRM 10.2.5 Identification Card on September 30, 2008. IRM 10.2.5.6.2(2)a specifies that the Form 5519, 13716-A or similar identification request Form 13760, and the interim or final background investigation letter must be retained and filed in the identification media file for each contractor for the life of the identification card. PSEP continues to utilize the Audit Management Checklist as a repeatable process where SCCs quarterly validate the filing of contractor background investigation documentation. Closed. We verified that IRS finalized and issued IRM 10.2.5 and continues to utilize the Audit Management Checklist to ensure that proper documentation is received and on file for contractors before they are granted staff-like access to service centers. During our fiscal year 2008 audit, we found no exceptions. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Open. IRS revised IRM 5.1.2.4, Daily Report of Collection Activity- Form 795/795A, to establish segregation of duties procedures with respect to the preparation of Payment Posting Vouchers, Document Transmittal forms, and transmittal packages in the Collection Field function. Open. During our fiscal year 2008 audit, we identified instances at four SB/SE units we visited where duties involving the preparation of payment posting vouchers, document transmittal forms, and transmittal packages were not segregated. Employees informed us that they were unaware of a related requirement in the IRM. 
We will continue to assess IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Closed. W&I Field Assistance continues to take actions to emphasize the requirement for including a document transmittal form listing the Daily Report of Collection Activity forms in transmittal packages. Operational reviews were conducted at all levels during fiscal years 2007 and 2008 to ensure that field offices comply with the requirement to prepare Form 3210, which lists all Forms 795 shipped to the SP Center. Further, IRM 1.4.11-11 was revised on October 7, 2008, to include the purpose, frequency, and documentation required for managerial reviews, which includes a review of Form 3210s, and trends and error reports. The outcome of the operational reviews revealed that managers are complying with the IRM procedures outlined for document transmittal. Open. During our fiscal year 2008 audit, we identified instances at four SB/SE units where a document transmittal form was not prepared when transmitting multiple Daily Report of Collection Activity forms to the SP Center. We will continue to evaluate this issue during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Closed. The IRS enforces documentation requirements relating to authorizing officials charged with approving manual refunds. IRS created a standard authorization memorandum in September 2008 for all offices to use. This will negate the disparity among the campuses in creating local authorization forms. IRS issued its annual solicitation memorandum for authorizing officials charged with approving manual refunds in August 2008 and received the annual list of authorized signatures by October 31, 2008, per IRM 3.17.79.3.5(4) (d). SP completed a sample review as part of the Monthly Security Review Checklist per IRM 3.17.79.3.5(3), and completed a 100 percent review of the new annual list by December 31, 2008. Open. During our fiscal year 2008 audit, we continued to find that the documentation requirements on memorandums, which are submitted to the manual refund units listing officials authorized to approve manual refunds, were not always complete. For example, some of the memorandums did not contain the signatures of the Heads of Office that delegated officials the authority to approve manual refunds while others did not contain the authorizing official’s campus or field office organization information as required by the IRM. We verified that IRS created a standard authorization memorandum in September 2008. However, IRS implemented this corrective action and completed its review of the new annual list subsequent to our fiscal year 2008 field work. We will evaluate IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Open. IRS continued to enforce the requirements for monitoring accounts and reviewing monitoring of accounts for manual refunds in fiscal year 2008. SB/SE Campus Compliance Services covered this topic in both Filing & Payment Compliance and Campus Reporting Compliance Operations during fiscal year 2008 reviews to ensure compliance with all IRM provisions for manual refunds. 
Submission Processing conducted refresher training at all sites by September 30, 2008, in team meetings and annual continuing professional education classroom training using IRM 21.4.4 and 3.17.79 as reference materials to reinforce the monitoring requirements. As a result of recent findings and quarterly review of the manual refund process in Accounts Management, both the monitoring and supervisory review process are being examined to identify means for improvement. Once the review is complete, consideration will be given to implementing any recommendations. Accounts Management continues its quarterly reviews of the manual refund process. Open. During our fiscal year 2008 audit, we found instances where the manual refund initiators did not monitor accounts to prevent duplicate refunds and supervisors did not review the monitoring of accounts. IRS’s review of the monitoring and supervisory review process for manual refunds has not been completed. We will continue to evaluate IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-05-247R, Apr 27, 2005) Open. IRS continued to enforce the requirements for documenting monitoring actions and supervisory review for manual refunds in fiscal year 2008. SB/SE Campus Compliance Services covered this topic in both Filing & Payment Compliance and Campus Reporting Compliance Operations during their fiscal year 2008 campus reviews to ensure all campuses continue to comply with all IRM provisions for manual refunds. Submission Processing conducted refresher training at all sites by September 30, 2008, in team meetings and annual continuing professional education classroom training using IRM 21.4.4 and 3.17.79 as reference materials to reinforce the monitoring requirements. As a result of recent findings and quarterly review of the manual refund process in Accounts Management, both the monitoring and supervisory review process are being examined to identify means for improvement. Once the review is complete, consideration will be given to implementing any recommendations. Accounts Management continues its quarterly reviews of the manual refund process. Open. During our fiscal year 2008 audit, we continued to find instances where the manual refund initiators did not document their monitoring of accounts to prevent duplicate refunds. IRS’s review of the monitoring and supervisory review process for manual refunds has not been completed. We will continue to evaluate IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. Accounts Management has procedures in place for the periodic supervisory review and documentation of the Form 3210 reconciliation process, which is designed to follow up on unacknowledged forms. This process is designed to provide a timely account of any discrepancy between the documents listed on the Form 3210 and those received. For the last 3 years, conference calls have been conducted with each directorate to reinforce the correct processing of Form 3210s. Recent actions to address the recommendation include having “Form 3210 Processing” as an agenda item on the Refund Inquiry Units’ conference call. In addition, the quarterly Accounts Management internal control Form 3210 review now requires that the Refund Inquiry Unit be included in the review. Open. 
During our fiscal year 2008 audit, we identified an instance at one SCC where the Refund Inquiry Unit manager did not perform or document periodic reviews of forms used to transmit returned refund checks. We will continue to evaluate IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. IRS has procedures in place to ensure compliance with tracking acknowledgement copies of document transmittals. W&I Account Management continues to analyze the results of its quarterly reviews. Field Assistance revised the IRM provisions during 2007 to provide procedures for requiring TACs to follow up with SP Centers when acknowledgments are not received within 10 days. Field Assistance revised other IRM provisions to include more detail for processing Form 3210. The IRM provides guidance to maintain centralized files for acknowledged Form 3210 for 3 years, and provides guidance for handling unacknowledged Form 3210. Offices transmitting receipts have a system to track acknowledged copies of document transmittals. All TE/GE Division Directors continue to use the Quick Reference Guide for Processing Checks, including a check sheet and flowchart developed for the TE/GE Exam Managers to use when performing operational reviews to ensure their subordinate groups are properly processing all checks. TE/GE will also implement by September 2009 requirements for each Examination Area Manager to verify tracking measures are in place in all their groups. LMSB has completed all its planned actions with regard to this recommendation and will continue to issue an annual executive memorandum on Form 3210 procedures around July 2009. Open. During our fiscal year 2008 audit, we identified instances at three SB/SE units and two TACs where there was no system in place to monitor acknowledged/unacknowledged transmittals to the SP Center. We will continue to assess IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. IRS revised the IRM on October 1, 2008, to include more detail for processing Form 3210. IRM 1.4.11.19.1 provides guidance to maintain centralized files for acknowledged Form 3210 for 3 years. Operational Reviews revealed that managers are in compliance with conducting and documenting the document transmittal review that includes the reconciliation process of Forms 3210 and 795. All managers were reminded to conduct these reviews at the Filing Season Readiness Workshop completed by December 15, 2008. The Refund Inquiry Unit continues to be included in the Accounts Management quarterly internal control review of document transmittal procedures. The review checklist includes the timely follow-up and documentation of Form 3210 acknowledgements as well as the required periodic managerial review. For TE/GE, each front line Examination group manager will ensure they complete reviews of document transmittals, and TE/GE is adding an additional question to TE/GE’s 2009 Annual Assurance Review to certify all managers addressed this issue by June 2009. Open. During our fiscal year 2008 audit, we identified instances at five SB/SE units and eight TACs where there was no evidence of managerial review of document transmittals and one instance at an SCC where the Refund Inquiry Unit manager did not perform or document periodic reviews of forms used to transmit returned refund checks. 
Moreover, the corrective actions cited by IRS were implemented after our fiscal year 2008 fieldwork. We will continue to evaluate IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. IRS continues to work to improve security and control access issues in the TACs. Of the 401 TAC locations, 183 have been built to design standard, with another 14 scheduled for completion by the end of January 2009. Forty-five projects have been approved to implement the TAC model in 2009, with another 30 projects pending final approval and funding. Forty-four projects are in development for implementation from 2010 through 2014. IRS will work to address any concerns with the space design/layout of TAC space and continue to roll out the TAC Design Model in the remaining locations. While implementation of the TAC Model Design is the ideal solution, compensating controls such as theater ropes or other barriers, signage, and minor alterations/reconfigurations have been incorporated in many TAC locations as an interim measure. Using a variety of criteria including security, safety and health concerns, IRS has identified priority locations for the implementation of the TAC Design Model. Open. We will continue to evaluate IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. Field Assistance uses the TAC Security Remittance Review Database, which requires managers to conduct and document their reviews to ensure the protection of data and compliance with remittance and security procedures. Field Assistance implemented the TAC Security Remittance Review Database during the first quarter of fiscal year 2007. Since implementation, IRS has had numerous problems with the system due to technological limitations. Some of the problems IRS encountered include erroneously deleted information and an inability to save and transmit reports. IRS has attempted to secure funding and assistance to convert the database to a user-friendly Web version. The system was converted to a Web-modified application effective the second quarter of fiscal year 2009. This is only a temporary resolution until funding is secured. While the database was being revised, the area offices were still responsible for completing the reviews using Data Collection Instruments for the first quarter. In addition, IRS tested the Web design prior to its implementation and has initiated a review process to engage headquarters, areas and territory management staff to identify and correct the database entries. The process will include sampling and conducting operational reviews as assurance of the database integrity. To enhance everyone’s understanding of the process, talking points will be developed for discussions between the territory and group managers. Open. During our fiscal year 2008 audit, IRS was continuing to implement its new process for providing oversight of TACs that do not have a manager permanently on-site. Because the process was not fully functional, we were unable to test its implementation during our audit fieldwork. We will continue to assess IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Open. 
IRM 10.2.14 Methods of Providing Protection will be revised by September 30, 2009, to state: “A record of all instances involving the activation of any alarm regardless of the circumstances that may have caused the activation, must be documented in a Daily Activity Report/Event Log, or other log book and maintained for 2 years.” Open. During our review and evaluation, we found that IRS’s corrective actions relating to the recordation of all instances involving alarm activations in the Daily Activity Report/Event Log, or other log book, were not included in the final version of the IRM. We will continue to assess IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. IRS performs monthly unannounced testing of guard response to alarms, and test results are reviewed by the Security Programs Office to enforce and ensure compliance. According to IRS, test results on guard response to alarms are consistently 98 percent or higher, indicating substantial compliance with IRS guidelines. Test procedures were formalized in IRM 10.2.14 Methods of Providing Protection issued on October 1, 2008. PSEP continues to utilize the Audit Management Checklist as a repeatable process, and SCCs validate quarterly the performance and documentation of monthly unannounced alarm testing. Closed. IRS revised IRM 10.2.14 to include requirements to perform and document monthly tests of intrusion detection alarms, including guard responses to alarms. Also, IRS’s Audit Management Checklist contains review steps for physical security analysts to determine whether SCCs and respective annex facilities that process taxpayer receipts and/or information perform and document monthly tests of intrusion alarms. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-06-543R, May 12, 2006) Closed. This item remains closed since fiscal year 2006, with AWSS continuing to regularly follow up on disposal actions. During fiscal year 2008, IRS implemented a new wizard tool that caused a system glitch which prevented IRS from updating all disposals within 10 work days. Several IRS staff were aware of the glitch and were working on the issue. As a result, the disposal action that should have been updated in 10 days was actually updated in 15 work days. Open. In fiscal year 2006, IRS re-engineered the P&E asset retirement and disposal process to generate exception reports that enable management to regularly monitor the aging of transactions during the disposal process. However, our testing in fiscal years 2007 and 2008 noted that disposals shown on the exception report were not always being recorded in a timely manner. During our fiscal year 2009 audit, we will verify that the new software enhancement is operating as intended. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. IRS revised the language in Lockbox Security Guidelines (LSG) 2.17.8 (9) to mitigate the risk as outlined in the Lockbox Electronic Bulletin issued on July 17, 2008. As of September 1, 2008, all lockbox sites use file encryption, and are in compliance with the requirements as outlined in the Lockbox Electronic Bulletin. Closed. IRS revised its LSG to require lockbox banks to encrypt backup media containing taxpayer information. IRS has included this issue as one of the areas tested during its annual reviews of information technology security at its lockbox banks. 
During our fiscal year 2008 internal control testing, we did not identify any instances where lockbox banks were not encrypting backup media containing federal taxpayer information. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. IRS revised the language in LSG 2.17.8 (9) to mitigate the risk as outlined in the Lockbox Electronic Bulletin issued on July 17, 2008. As of September 1, 2008, all lockbox sites store backup media containing federal taxpayer information at an off-site location and are in compliance with the requirements as outlined in the Lockbox Electronic Bulletin. Closed. IRS revised its LSG to require lockbox banks to store backup media containing taxpayer information at an off-site location. IRS has included this issue as one of the areas tested during its annual information technology security reviews at lockbox banks. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. IRS revised the Information Technology Data Collection Instruments, which are used during the annual reviews of lockbox banks, and the related instructions (1) to ensure that the data/image transmissions sent through the Lockbox Electronic Network are encrypted prior to transmission and (2) to validate that all backup media containing personally identifiable information is stored and protected as required in the Lockbox Electronic Bulletin. Closed. IRS revised its Information Technology Data Collection Instrument to test whether lockbox banks are (1) encrypting personally identifiable information prior to transmission and (2) storing backup media containing personally identifiable information at an appropriate off-site location. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. PSEP developed and implemented an action plan requiring all SCCs to (1) perform and validate completion of an assessment of their CCTV to ascertain if it provided an unobstructed view of the exterior of the campus perimeter, and (2) identify problems and planned corrective actions to mitigate the identified problems. All SCCs validated completion of the CCTV assessment and a total of 16 problems were identified. Progress on corrective actions was monitored and reported to PSEP management on a monthly basis. All corrective actions were addressed: 14 were resolved by the installation of CCTV cameras and/or removal of obstructions, and 2 were determined by management to meet an acceptable level of risk. PSEP continues to utilize the Audit Management Checklist as a repeatable process where SCCs quarterly validate CCTV coverage of the campus fence line and perimeter. The reported corrective actions were completed January 10, 2008. PSEP will continue to place emphasis on CCTV camera coverage, as well as perform regularly scheduled risk assessments of IRS facilities. Open. On January 10, 2008, IRS completed an assessment of its CCTVs in all SCCs to ascertain whether they provided an unobstructed view of its campuses’ exterior perimeter. However, IRS’s assessment did not account for the CCTV weaknesses that were reported in the Fresno SCC’s January 2007 risk assessment and that continued to exist during our April 2009 visit. During our visit, we found that the CCTVs did not provide an unobstructed view of the building exterior or fence line and that many of the CCTVs were not wired properly and could not be used to their full potential. 
While these weaknesses were reported in the January 2007 risk assessment, Fresno was one of the four SCCs that did not report any specific weaknesses to the PSEP management that requested the assessment of the CCTVs. In view of the weaknesses we observed, it is unclear how the Fresno campus reached its conclusion that no CCTV problems were reportable to the PSEP requestors performing the assessment. We will continue to assess IRS’s actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Open. All W&I functions, except Accounts Management, conducted training during 2007 and 2008 for manual refund initiators to ensure they fulfill their responsibilities to monitor manual refunds and document their monitoring actions to prevent the issuance of duplicate refunds. W&I Compliance completed its training for manual refund initiators in the W&I campuses in April 2008. SP conducted refresher training during fiscal years 2007 and 2008 (continuing professional education) and will include it again in the fiscal year 2009 continuing professional education. SP management reviews history sheets annotated with taxpayer identification numbers, tax period, transaction code, date, and initials of initiator. Accounts Management manual refund training has been delayed due to the Economic Stimulus Package workload. Accounts Management is re-examining manual refund monitoring procedures and will reschedule the training in fiscal year 2009 once the review is complete and any changes implemented. Open. During our fiscal year 2008 audit, we found instances where the manual refund initiators did not receive training on the most current requirements to help ensure that they fulfill their responsibilities to monitor manual refunds. We will continue to evaluate IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. On January 20, 2008, SB/SE implemented the programming to check for outstanding liabilities associated with both the primary and secondary Social Security numbers on a joint tax return for offsetting to any outstanding TFRP liability before issuance of a refund. Closed. We verified that IRS implemented the programming change to check for outstanding liabilities associated with both the primary and secondary Social Security numbers on a joint tax return for offsetting to any outstanding TFRP liability before issuance of a refund. We reviewed the accounts of a number of taxpayers who (1) were assessed a TFRP, (2) filed a joint personal income tax return with a spouse, (3) listed her or his Social Security number as the second one on the tax return, and (4) had credits on the personal income tax account. In each of these cases, we verified that IRS’s computer program identified the outstanding TFRP and applied the credits to the TFRP balance before sending any refund to the taxpayer. Additionally, according to IRS, its analysis identified over $10 million of refund offsets that have occurred from January 2008 to March 2009 as a result of this corrective action. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. SB/SE implemented a system change in January 2007 to correct the failure-to-pay (FTP) penalty calculation program. In June 2008, SB/SE conducted a review of the programming change and determined the program is correctly charging the reduced rate on subsequent assessments. 
There was a small subpopulation of accounts that the system change did not correct. IRS worked on an additional system change to correct penalty calculation programming affecting the remainder of the cases and completed its corrective action in August 2008. Closed. We verified that IRS’s system corrected the FTP penalty calculation program. We reviewed the accounts of a number of taxpayers for whom: (1) IRS increased the FTP penalty rate assessed against the taxpayer for failing to pay taxes owed from 0.5 percent to 1 percent when the taxpayer failed to pay following repeated notification of the taxes due, (2) the taxpayer subsequently paid off the balance for the specific tax period, and (3) following its system change, IRS assessed the taxpayer additional taxes owed for the same tax period and a related FTP penalty. In each of these cases, we verified that the FTP penalties were calculated in accordance with the applicable IRM guidance. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. IRS implemented in January 2007 and August 2008 the change to the FTP penalty calculation program and also recalculated the FTP amount using the correct rate on all open taxpayer accounts with this penalty. Closed. We verified that IRS’s system change resulted in FTP penalties being calculated in accordance with the applicable IRM guidance on open taxpayer accounts. We reviewed the accounts of a number of taxpayers from IRS’s unpaid assessment inventory for whom: (1) IRS had increased the FTP penalty rate assessed against the taxpayer for failing to pay taxes owed from 0.5 percent to 1 percent when the taxpayer failed to pay following repeated notification of the taxes due, (2) the taxpayer subsequently paid off the balance for the specific tax period, and (3) IRS assessed the taxpayer additional taxes owed for the same tax period, with related FTP penalties. In each of these cases, we verified that the total recorded FTP penalty assessments on the account were in accordance with the applicable IRM guidance. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. SB/SE published IRM 5.1.2.5.3 in September 2008 with revisions to 5.1.2.5.3.1(1) through (7) directing employees to make the specific determinations and to take the specific actions contained in this recommendation. Closed. IRS revised its IRM in September 2008 to include instructions specifically addressing this recommendation. The IRM now instructs IRS employees to (1) determine if the payment is sufficient to cover the tax liability of the tax period specified on the payment, (2) perform additional research and resolve any outstanding issues on the account, including determining if there are any freeze codes that will delay credit posting, (3) determine whether the taxpayer has outstanding balances in other tax periods, and (4) apply available credits to satisfy the outstanding balances in other tax periods. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. SB/SE published IRM 5.1.2.5.3 in September 2008 with revisions to 5.1.2.5.3.1(1) through (7) directing employees to make the specific determinations and to take the specific actions contained in this recommendation. Closed. IRS revised its IRM in September 2008 to include instructions specifically addressing this recommendation. 
The IRM now instructs IRS employees to (1) determine if the payment is sufficient to cover the tax liability of the tax period specified on the payment, (2) perform additional research and resolve any outstanding issues on the account, including determining if there are any freeze codes that will delay credit posting, (3) determine whether the taxpayer has outstanding balances in other tax periods, and (4) apply available credits to satisfy the outstanding balances in other tax periods. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Open. SB/SE has requested Counsel guidance related to lien releases after discharge to determine if a memorandum is needed. SB/SE will issue a memorandum to employees by May 2009, if necessary. Open. As part of its own fiscal year 2008 OMB A-123 testing of lien releases, IRS tested a statistical sample of taxpayer accounts requiring a lien release during 2008. In its testing, IRS again identified a case in which it did not release the applicable federal tax lien within the statutory 30-day period because it did not update the taxpayer’s account in a timely manner to reflect that the taxpayer had been discharged of the taxes in a bankruptcy court. The untimely recording of bankruptcy discharges results in the untimely release of tax liens and is directly related to IRS’s noncompliance with Internal Revenue Code Section 6325, which requires IRS to release its tax liens within 30 days of the date the related tax liability is fully satisfied. We will continue to review IRS’s corrective actions to address this recommendation during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. W&I Compliance continues to use the Installment Agreement Account Listings (IAAL) report to monitor user fee activity. In January 2008, IRS implemented enhancements to the report and increased the frequency of the sweep process from quarterly to weekly. Closed. IRS runs edit checks to test the validity of recorded installment agreements, including the user fees, which results in the identification of potential errors that are then listed on the IAAL. We verified that IRS improved its IAAL report process by grouping items that appear on the IAAL into tiers based on priority and establishing time frames by tier for investigating and resolving these potential errors. In addition, we confirmed that IRS now performs managerial reviews on IAAL cases processed by its collection operations. IRS also increased the frequency of its computer sweep recovery process, which is intended to identify unrecorded user fees, from a few times a year to once a week, thus increasing the timeliness and accuracy of recorded individual taxpayer user fees. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. W&I Compliance uses a weekly sweep process to reconcile installment agreement payments and adjusts those with discrepancies or errors to ensure that fees are accurately posted to the user fee account. Closed. W&I Compliance’s weekly sweep process is designed to identify and correct for unrecorded user fees collected with the initial installment agreement payment. We verified that IRS’s improvements to its installment agreement user fees monitoring process will help ensure that errors in recorded installment agreement user fees are identified and corrected in a more timely manner. 
Additionally, we did not identify any instances of errors in recorded installment agreement user fees during our fiscal year 2008 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. W&I Compliance uses the Installment Agreement Account Listings report to identify accounts with user fee errors, underpayments, and overpayments that require adjustments. W&I consolidated the report listing at one location to provide improved oversight of the process. Both W&I and SB/SE program analysts, managers, operations management, and headquarters staff conduct reviews of the report listing. In January 2008, IRS implemented enhancements to the report and increased the frequency of the sweep process used to correct accounts from quarterly to weekly. IRS also updated IRM 5.19.1 in January 2008 to include requirements for case analysis and documentation. Closed. We verified that IRS conducts managerial and operational reviews on its W&I Compliance Service Collection Operations, the division responsible for making the appropriate adjustments for errors in recorded installment agreement user fees. Additionally, we did not identify any errors in recorded installment agreement user fees tested during our fiscal year 2008 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Open. The IRS plans to implement the following procedures to ensure that sufficient secured space is maintained for Automated Data Processing (ADP) and Non-ADP assets: Requesters needing space are to initiate an Employee Resource Center ticket requesting “Property Consultation” services, which initiates Real Estate and Facilities Management (REFM) activity to work with the requester on obtaining the needed secured storage space. When Modernization & Information Technology Services property managers need secure storage, narrative associated with the Employee Resource Center work ticket must state: “Need to consult with local REFM staff on providing a secure storage alternative for ADP equipment.” This procedure is to be used for asset distribution staging or when assets are to be excessed. This policy is effective March 30, 2009. Open. IRS completed its corrective action plan after the end of our fiscal year 2008 audit. We will review IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-07-689R, May 11, 2007) Closed. AWSS Procurement issued policy Change Notice 07-08, which contains a revision to Policy and Procedure Memorandum 46.5, Receipt, Quality Assurance and Acceptance. The revision limits situations in which contracting officers may perform receipt and acceptance. In addition to the Policy and Procedure Memorandum 46.5, the IRS Acquisition Procedure Subpart 1003.90—Separation of Duties and Management Controls—requires separation of duties for requisition approval, certification of funds, contract award, and receipt and acceptance. Procurement runs Web Request Tracking System reports to review the instances where contracting officers performed receipt and acceptance to ensure that the receipt and acceptance falls within exceptions/procedures outlined in the Policy and Procedure Memorandum 46.5. Open. During our fiscal year 2008 audit, we noted that IRS revised its policy to reflect the situations under which contracting officers may perform receipt and acceptance functions. 
In addition, the IRS Acquisition procedures require that no employee shall perform more than one of the following four functions: (1) requisition approval for supplies and/or services, (2) certify the availability of funds, (3) conduct the procurement and execute the contractual document, and (4) receive the supplies or services. However, during our fiscal year 2008 audit testing, we continued to find instances where individuals were performing incompatible functions. We will continue to review actions taken by IRS during our fiscal year 2009 audit. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Closed. IRS revised its A-123 guidance to include templates and procedures for compiling, referencing, and reviewing audit working papers to ensure that the results of internal control tests are clear and complete to explain how control procedures were tested, what results were achieved, and how conclusions were derived from those results. During the fiscal year 2008 cycle, the Office of Corporate Planning and Internal Control assigned test team leaders and independent Office of Corporate Planning and Internal Control reviewers to examine workpapers to ensure the test team sufficiently documented their work to support their conclusions. The A-123 guidance requires that each set of work papers include a summary of findings statement setting out the conclusion reached after performing the transaction testing. Closed. During our fiscal year 2008 IRS financial audit, we verified that IRS revised its A-123 guidance to include templates that clearly outline how to document and explain what control tests were performed, the scope of control tests, and the results of internal control tests performed. IRS’s A-123 guidance also requires that each set of workpapers include a summary of findings statement that clearly concludes on results of test procedures performed by staff. We verified that IRS’s workpapers documenting A-123 testing substantially conformed to the A-123 guidance. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Closed. During the development of fiscal year 2008 A-123 internal control test plans, IRS analyzed and documented open recommendations related to the internal control process/transaction being tested. IRS considered the open recommendation findings while developing the process/transaction test plan. IRS will continue to incorporate the open recommendation findings while planning A-123 testing. Closed. During fiscal year 2008, we verified that IRS included a requirement in its A-123 guidance to determine the adequacy and value of management actions taken in response to audits performed by GAO and the Treasury Inspector General for Tax Administration relating to financial reporting. We also verified that IRS review staff followed the A-123 guidance in performing internal control reviews. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Open. IRS will continue to work with Treasury and Modernization & Information Technology Services to fully implement A-123 requirements for evaluating controls over information technology relating to financial statement reporting. 
IRS will identify areas where the work conducted under FISMA does not meet A-123 requirements and consider information security findings and recommendations to ensure testing procedures meet A-123 requirements. Open. We will follow up during future audits to assess IRS’s progress in implementing this recommendation. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Open. IRS revised a limited set of fiscal year 2008 test plans to pilot the requirement to include an analysis of the design for each transaction control set tested. This project is planned for completion during the fiscal year 2009 A-123 cycle. Open. We verified that IRS revised a limited number of A-123 test plans to include an analysis of the design of internal controls tested. During our fiscal year 2009 audit, we will continue to review the remaining test plans as IRS revises them. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Closed. In fiscal year 2007, IRS established an internal crosswalk between A-123 tests and laws and regulations significant to financial reporting. In fiscal year 2008, IRS updated the crosswalk to a listing of laws and regulations that was expanded to include all specific public laws and took the additional step of incorporating GAO audit methodology into the linkage. Closed. We obtained and reviewed IRS’s laws and regulations crosswalk and verified that IRS had identified and planned appropriate procedures to test controls over laws and regulations considered significant to financial reporting. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Open. IRS is considering alternative procedures for testing transactions to provide assurance for the last 3 months of the fiscal year. Although implementation of such procedures is not necessary until elimination of the outstanding material weaknesses, IRS intends to propose follow-up procedures before the end of the fiscal year. Open. We will follow up during future audits to assess IRS’s progress in implementing this recommendation. Management Report: IRS’s First Year Implementation of the Requirements of the Office of Management and Budget’s (OMB) Revised Circular No. A-123 (GAO-07-692R, May 18, 2007) Closed. During the fall of 2007, members of the IRS A-123 workgroup completed the United States Department of Agriculture Graduate School course, Audit Evidence and Working Papers, which covers methods for collecting and documenting types of evidence needed to support audit reports and to meet professional standards. IRS used concepts from this course and best practices from previous cycles to improve the curriculum for the annual IRS A-123 Training Workshop over previous years and to improve proficiency in documentation and analysis in the transactional testing. The training also covers the process to be followed when reviewing or performing tests of internal controls, developing a determination as to whether or not the controls are functioning properly, and evaluating the materiality of errors. The Office of Corporate Planning and Internal Control is currently developing an IRM provision for reference to reinforce the A-123 guidance provided during the training. Closed. 
We verified that IRS developed an appropriate annual training workshop designed to ensure that its A-123 review staff enhance their skills in workpaper documentation, identification and testing of internal controls, and evaluation and documentation of test results. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. IRS instituted the use of trace identification numbers for revenue and refund transactions in fiscal year 2008 to provide traceability from the general ledger for tax transactions back to source documentation and throughout IRS financial management systems. IRS is currently developing additional internal controls for tax revenue transactions processed outside of the Electronic Federal Tax Payment System, and for transactions recorded into IRACS requiring manual transcription. IRS is working to revise each appropriate IRM provision and requested programming to implement system controls in payment systems to prevent, detect, and correct such transcription and input errors by fiscal year 2010. IRS is also developing the Redesign Revenue Accounting Control System, an enhancement of IRACS that will incorporate the United States Standard General Ledger. IRS plans to implement the Redesign Revenue Accounting Control System in January 2010. Open. During our future audits, we will continue to evaluate IRS’s progress in achieving transaction traceability for tax revenues processed outside of the Electronic Federal Tax Payment System and taxes receivable transactions. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. By June 2008, Revenue Financial Management documented the procedures the statistician performs in each step of the unpaid assessments estimation process. Revenue Financial Management is enhancing each of these procedures to include additional steps based on the fiscal year 2008 audit. Revenue Financial Management will provide the new procedures by May 2009. Open. During our fiscal year 2008 audit, we continued to find errors in IRS’s unpaid assessment estimates that were not detected by IRS’s internal reviews. IRS corrected these errors after we brought them to its attention. However, until IRS fully documents the specific procedures performed by its statistician in each step of the unpaid assessment estimation process and the specific procedures for reviewers to follow in their reviews, IRS faces increased risk that errors in this process will not be prevented or detected and corrected. We will continue to review IRS’s corrective actions to address this recommendation during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. In June 2008, Revenue Financial Management documented the procedures reviewers should follow during their review of the statistical estimates. Revenue Financial Management is adding additional levels of review and oversight for fiscal year 2009 and is finalizing a Memorandum of Understanding with the Office of Program Evaluation and Risk Analysis to perform an independent review. Open. During our fiscal year 2008 audit, we continued to find errors in IRS’s unpaid assessment estimates that were not detected by IRS’s internal reviews. IRS corrected these errors after we brought them to its attention. 
However, until IRS fully documents the specific procedures performed by its statistician in each step of the unpaid assessment estimation process and the specific procedures for reviewers to follow in their reviews, IRS faces increased risk that errors in this process will not be prevented or detected and corrected. We will continue to review IRS’s corrective actions to address this recommendation during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. In January 2009, IRS implemented programming changes to the Business Master File computer program so that accuracy-related penalties assessed subsequent to the programming change will carry the same date as the related deficiency assessment. Open. IRS completed its corrective action after the end of our fiscal year 2008 audit. We will review IRS’s corrective action to address this recommendation during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. IRS assembled a team of interest and penalty subject matter experts to perform a review of master file programming of penalty and interest computations. The review included a general random sample of open modules as well as a sample of modules impacted by recent implementation of programming changes. SB/SE performed the review the week of May 19, 2008. SB/SE will continue to perform these reviews periodically and implement any necessary changes to programming as a result. Closed. We confirmed that IRS completed its review of existing master file computer programs that affect penalty calculations and documented a listing of instances in which programs are not functioning in accordance with the intent of the IRM. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. IRS formed a cross-functional working group to address penalty and interest programming issues in August 2007. This group meets biweekly and continues to identify and assess penalty and interest issues. When issues that need correction are identified, programming changes are requested and IRS performs subsequent testing to ensure that the programming change resolved the issue. Resolutions of these identified issues are in various stages. Other issues are being discussed with Modernization & Information Technology Services to determine the most effective way to implement programming changes, and in certain cases an impact analysis determined that correction is not cost-effective at this time. Systemic differences between IRS systems that cannot be fixed under the current processing system are being addressed by modernization efforts. Open. Although IRS completed its review of master file computer programs that affect penalty calculations and has planned a series of corrective actions, it has not yet completed all of the required programming corrections. We will continue to review IRS’s corrective actions to address this recommendation during our fiscal year 2009 and future audits. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. Managers follow IRM 1.4.11 as comprehensive guidance for conducting reviews at all TACs. 
TAC managers use the one-day receipt per TAC per quarter process to ensure that, at least once per quarter, the manager performs a one-day review of all payment receipts, as well as the documents associated with the receipts, for all employees with payment receipts on the date chosen for review. Area directors are responsible for the oversight of all TAC activities including outlying posts of duty. IRM 1.4.11.6.2 outlines the scheduled routine visit requirement for each TAC and Exhibit 1.4.11-11 gives a description of all required reviews for each TAC, including the frequency. Validation of completion is documented through operational reviews. The results of the operational reviews are documented in a summary of findings, which includes a corrective action report and is completed annually. Open. IRM 1.4.11 provides guidance for managerial reviews and frequency of these reviews at outlying TACs. Also, the IRM outlines the TAC Security Remittance Review Database process and requires managers to input the results of their reviews into the database. However, the database was not fully implemented in fiscal year 2008. As a result, we were unable to fully test its implementation during our audit fieldwork. We will review IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. On September 30, 2008, the Director of Field Assistance issued a quarterly reminder to managers to conduct required reviews. Field Assistance continues to review the monthly reports received from field offices, including the status of corrective actions noted during operational reviews, to ensure completion of needed improvements. Open. We will review IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. IRM 1.4.11.19.4.1.1 was revised in April 2008 to mandate the use of the “restrict” command code in all cases. Group managers will continue to be reminded of the existing requirements to restrict command codes as part of the Form 809 Annual Reconciliation Review. During this review, group managers use a check sheet as shown in IRM 3.8.45.29.15, which includes this validity check. The result of the review is sent to territory managers and Submission Processing. Furthermore, restricted IDRS command codes are addressed in ongoing operational reviews. IRM 1.4.11.19.4 provides guidance on restricting the 809 book holder’s profile when ordering the initial 809 receipt book. IRM 1.4.11.19.4.1.1 establishes the requirement for group managers to restrict command codes from an 809 book holder’s profile. The IRM 1.4.11-15 TAC Payment Processing Checklist, which includes a question addressing restricted command codes, is completed as part of the quarterly payment processing review. Finally, IRM 1.4.11.19.4.1.1.1 covers the annual reconciliation of official receipts, which managers can use as an annual monitoring process in addition to operational reviews. Closed. IRS mandated the use of the restrict command code for TAC employees accepting cash payments to limit their IDRS access rights and ability to adjust taxpayer accounts. These procedures are monitored during operational reviews conducted by area and territory managers, at which time group managers are reminded of the existing requirements to restrict command codes. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. 
Guidance concerning armed first responders to TAC duress alarms was reissued via email to area directors for distribution on August 19, 2008, and subsequently finalized in IRM 10.2.14, Methods of Providing Protection, issued October 1, 2008. The IRM specifies, “An armed ‘First Responder’ (guard police) must be listed as the first responder, as the shortest possible response time is critical with priority notification. The alarm notification priority protocols are: (1) First Priority: on-site guards are notified; (2) Second Priority, Federal Protective Service is notified, and (3) Third Priority, local police who will be notified last.” The TAC Scheduled Duress Alarm Test Report was revised to include a section to indicate the date the notification list for first responders was last updated. The reports are rolled up from the Areas/Territories to the Security Programs office quarterly. The revised report was instituted via e-mail on July 24, 2008. PSEP continues to utilize the Audit Management Checklist as a repeatable process where Territory offices validate that proper first responders are listed for notification. Closed. IRS established procedures in the IRM requiring quarterly verification that individuals designated as first responders to TAC duress alarms are appropriately qualified and geographically located to respond to the potentially dangerous situations in an effective and timely manner. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. IRS finalized and issued IRM 10.2.14, Methods of Providing Protection on October 1, 2008. IRM 10.2.14.9.2(7)a specifies: “An armed ‘First Responder’ (guard police) must be listed as the first responder, as the shortest possible response time is critical with priority notification. The alarm notification priority protocols are: (1) First Priority: on-site guards are notified; (2) Second Priority, Federal Protective Service is notified, and (3) Third Priority, local police who will be notified last.” The TAC Scheduled Duress Alarm Test Report was revised to include a section to indicate the date the notification list for first responders was last updated. The reports are rolled up from the Areas/Territories to the Security Programs office quarterly. The revised report form was instituted via e-mail on July 24, 2008. PSEP continues to utilize the Audit Management Checklist as a repeatable process where Territory offices validate that proper first responders are listed for notification. Closed. IRS revised the IRM to specify the qualifications and geographical proximity requirements for individuals designated as first responders and included a provision for PSEP to conduct quarterly reviews of this issue. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. AWSS has been working with the General Services Administration (GSA) since March 2008 to implement a process for procuring services from GSA to perform contractor background investigations. AWSS prepared and submitted a draft interagency agreement to GSA for consideration in June 2008. IRS received and reviewed the GSA comments, and is finalizing the interagency agreement for pricing and services. GSA has submitted a draft three-phase schedule for completion of the background investigations that would complete enter-on-duty determinations for all facilities by November 2009. Implementation is contingent upon GSA successfully completing its actions. Open. 
During our fiscal year 2008 audit, we identified instances at three TACs where IRS did not have documentary evidence demonstrating the completion of favorable background investigations for contractors performing janitorial services during non-operating hours. We will review IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. IRS developed a Performance Work Statement for a National Document Destruction Contract. IRS expects full contract implementation by October 1, 2009. Implementing a national contract will standardize these requirements and ensure consistency. In the interim, the current contracts require a review of contractor performance through site visits to ensure that contractors comply with all security requirements for employee clearance prior to performing the work. AWSS distributed a message to the Real Estate and Facilities Management Territory Managers and Logistics Chiefs on January 23, 2009, reinforcing the requirement to review their existing shredding contracts to ensure they comply with the security requirements stated in their respective contracts. Open. As stated in IRS’s response, the Performance Work Statement for a National Document Destruction Contract will not be fully implemented until the first quarter of fiscal year 2010. We will review IRS’s corrective actions during future audits. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. IRS developed a Performance Work Statement for a National Document Destruction Contract. IRS expects full contract implementation by October 1, 2009. Implementing a national contract will standardize these requirements and ensure consistency. In the interim, the current contracts require a review of contractor performance through site visits to ensure that contractors comply with all security requirements for employee clearance prior to performing the work. IRS distributed a message on January 23, 2009, reinforcing the requirement to review their existing shredding contracts to ensure they comply with the security requirements stated in their respective contracts. Open. As stated in IRS’s response, the Performance Work Statement for a National Document Destruction Contract will not be fully implemented until the first quarter of fiscal year 2010. We will review IRS’s corrective actions during future audits. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Open. IRS developed a Performance Work Statement for a National Document Destruction Contract. IRS expects full contract implementation by October 1, 2009. Implementing a national contract will standardize these requirements and ensure consistency. In the interim, the current contracts require a review of contractor performance through site visits, in order to ensure that contractors comply with all security requirements for employee clearance prior to performing the work. IRS distributed a message on January 23, 2009, reinforcing the requirement to review their existing shredding contracts to ensure they comply with the security requirements stated in their respective contracts. Open. As stated in IRS’s response, the Performance Work Statement for a National Document Destruction Contract will not be fully implemented until the first quarter of fiscal year 2010. 
In addition, during our fiscal year 2008 audit, we identified an instance at one of three SCCs we visited where shredding service contractor employees did not go through background investigations before they were granted access to taxpayer or other sensitive information. We will review IRS’s corrective actions during future audits. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. The Human Capital Office issued a notice in September 2007 to each Employment Branch Chief to reinforce this policy, and the office also sends periodic reminders to the Employment Offices during monthly calls with the employment staffs. The Human Capital Office also issued Alert 731-2 on September 29, 2008, to all Employment Offices clarifying the guidance provided in Policy No. 15. In October 2008, Policy and Programs received written confirmation from every Employment Office that Policy No. 15 was being followed and that the correct Form 13094 was being used. Open. During our fiscal year 2008 audit, we identified four juveniles hired in fiscal year 2008 who were not provided a revised Form 13094. We will review IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. The Human Capital Office revised Form 13094 in December 2007 and provided the form and accompanying instructions to the employment staff in January 2008. The Human Capital Office also issued Alert 731-2 on September 29, 2008, to all Employment Offices clarifying the guidance provided in Policy No. 15. In October 2008, Policy and Programs received written confirmation from every Employment Office that Policy No. 15 was being followed and that the correct Form 13094 was being used. Open. During our fiscal year 2008 audit, we identified five instances where the IRS employment office staff did not verify the information on Form 13094 by contacting the reference directly and documenting the details of that contact. We will review IRS’s corrective actions during our fiscal year 2009 audit. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. W&I Submission Processing issued a memorandum in April 2008 to the operations manager of Receipt and Control, reiterating the requirement to follow procedures in IRM 3.45.1 to conduct supervisory reviews of the deposit encoding tapes, the recapitulation of remittances, and deposit tickets, and to sign or initial the documents as evidence that the reviews were completed. Receipt and Control is also following IRM 3.45.1 to conduct and document supervisory reviews of the TE/GE deposits. Closed. We verified that IRS issued a memorandum to its operations manager of Receipt and Control to reinforce procedures in its IRM requiring signed supervisory review of TE/GE user fee deposits. Additionally, during our fiscal year 2008 audit, we did not identify any instances where IRS did not document supervisory review of the TE/GE user fee deposits tested. Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008) Closed. The electronic Purchase Card Module eliminated the paper statement of accounts being mailed to purchase cardholders using the Purchase Card Module. The purchase cardholder and approving official electronically reconcile and approve the transactions, which is evidence of their signature approving the transactions. The system maintains history on the user login name and date of the action. 
Closed. We confirmed that IRS modified its existing guidelines and fully implemented the Purchase Card Module. During our fiscal year 2008 audit, we noted that the purchase card approving official’s signature attesting to the review and reconciliation of the monthly statement is now captured electronically by the Purchase Card Module. However, we also noted that the purchase card approving officials were not always electronically reconciling and approving transactions within the required timeframes documented in IRS’s existing guidelines. Timely reconciliation and approval of transactions is necessary to help ensure that purchase card transactions are valid and appropriate. Thus, we are closing this recommendation and opening a new recommendation to address this additional issue in our June 2009 management report. See GAO-09-513R and recommendation 09-10 in this report.

Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008)

Closed. IRS provides purchase cardholders with funding approval requirements during initial and refresher training. The guidelines outlining funding requirements are also available online in the Purchase Card Guide and on the program-specific Web site. As IRS converted purchase cardholders to the Purchase Card Module, it highlighted this requirement in the transition guidelines.

Closed. We confirmed that IRS modified its existing guidelines and fully implemented the Purchase Card Module. During our fiscal year 2008 audit, we noted that purchase cardholders obtained funding approval electronically through the Purchase Card Module prior to making a purchase. The Purchase Card Module directly interfaces with the funding requisition function of IRS’s Web-based Requisition Tracking System to verify funds availability.

Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008)

Closed. Citibank reports previously received by purchase card approving officials were eliminated with implementation of the Purchase Card Module. All documentation for purchase card activity is maintained electronically in the Purchase Card Module with the exception of packing slips/receipts, which are maintained by the cardholder. The documentation is available for review by the approving official, but approving officials are not required to maintain copies of documentation already maintained by the cardholder.

Closed. Even though IRS did not modify its existing guidelines to require the purchase card approving official to maintain copies of the purchase cardholder’s supporting documentation, we confirmed that IRS now has compensating internal control procedures in place to close this recommendation. IRS’s existing guidelines require the purchase cardholder to maintain the supporting documentation and approving officials to ensure that the cardholders have all required documentation. During our fiscal year 2008 audit, we noted that the purchase cardholders maintained appropriate supporting documentation.

Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008)

Closed. The requirement to maintain supporting documentation for all purchase card activity for 3 years is outlined in current guidance and training material provided to cardholders. The documentation is available for review by the approving official, but is maintained by the cardholder.
Closed. Even though IRS did not modify its existing guidelines, we confirmed that the current guidelines require cardholders to maintain supporting documentation for 3 years. IRS’s existing guidelines require the purchase cardholder to maintain the supporting documentation and approving officials to ensure that the cardholders have all required documentation. During our fiscal year 2008 audit, we noted that the purchase cardholders maintained appropriate supporting documentation.

Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008)

Closed. Modernization & Information Technology Services issued a memorandum dated September 5, 2008, and a directive (Asset Management Policy Directive AM 034) dated August 18, 2008, to all organizations reiterating the IRS policy that new assets must be inputted into the inventory system within 10 days of receipt.

Closed. During our fiscal year 2008 audit, IRS’s Associate Chief Information Officer for End User Equipment Services, in response to our recommendations, issued a memorandum to all personnel responsible for updating inventory. The memorandum reiterated IRS’s existing policy requiring that new assets be inputted into the inventory system within 10 days of receipt.

Management Report: Improvements Needed in IRS’s Internal Controls (GAO-08-368R, June 2008)

Closed. AWSS issued communications to all employees, through periodic notices on the IRS intranet with links to Travel Times, reiterating the policy requiring all employees to obtain approval of travel authorizations before the initiation of travel. In Travel Times, IRS has issued Travel Authorization Reminders (October 2007 and February 2008) and Travel Authorization Reminder News from the business units (December 2007, February 2008, and May 2008). Furthermore, IRS is continuing to implement GovTrip and, as of January 1, 2009, has 25,775 GovTrip users. All users must file a travel authorization before travel begins, and GovTrip will not allow a voucher to be created without a signed/approved authorization.

Open. We confirmed that IRS issued communications to staff reiterating the policy that all employees receive travel authorization before commencing travel, and that IRS continues to implement its GovTrip system with full implementation expected by approximately July 2009. However, during our fiscal year 2008 audit, we continued to identify instances where IRS staff did not obtain approval of travel authorizations in advance of travel. We will continue to review actions being taken by IRS to address this recommendation during our fiscal year 2009 audit.
Management Report: Improvements Are Needed to Enhance IRS’s Internal Controls and Operating Effectiveness (GAO-09-513R, June 2009)

Because these are recent recommendations, GAO did not obtain information on IRS’s status in addressing them.

Open: These are recent recommendations. We will verify IRS’s corrective actions during future audits. The same status applies to each of the new recommendations from this June 2009 management report.
The Internal Revenue Service (IRS) does not have financial management systems adequate to enable it to accurately generate and report, in a timely manner, the information needed to both prepare financial statements and manage operations on an ongoing basis. To overcome these systemic deficiencies with respect to preparation of its annual financial statements, IRS was compelled to employ compensating procedures. Specifically, IRS (1) did not have an adequate general ledger system for tax-related transactions, and (2) was unable to readily determine the costs of its activities and programs and did not have cost-based performance information to assist in making or justifying resource allocation decisions. As a result, IRS does not have data to assist in managing operations on a day-to-day basis and to provide an informed basis for making or justifying resource allocation decisions.

IRS has serious internal control issues that affect its management of unpaid tax assessments. Specifically, IRS (1) lacked a subsidiary ledger for unpaid tax assessments that would allow it to produce accurate, useful, and timely information with which to manage and report externally, and (2) experienced errors and delays in recording taxpayer information, payments, and other activities.

Significant information security weaknesses continue to jeopardize the confidentiality, availability, and integrity of information processed by IRS’s key systems, increasing the risk of material misstatement for financial reporting. For example, sensitive information, such as user identification and passwords for mission-critical applications, continued to be readily available to any user on IRS’s internal network. These IDs and passwords could be used by a malicious user to compromise data flowing to and from IFS. Other continuing weaknesses included the existence of passwords that were not complex enough to avoid being guessed or cracked. In addition, although IRS had improved its application of vendor-supplied system patches that protect against known vulnerabilities, it still had not patched systems in a timely manner. The agency’s procurement system, which processed approximately $1.8 billion of obligations in fiscal year 2008, also remained at risk because previously reported weaknesses had not been corrected. These weaknesses included (1) not restricting users’ ability to bypass application controls, (2) continuing to use unencrypted protocols, and (3) not removing separated employees’ access in a timely manner.
These outstanding weaknesses increase the risk that data processed by the agency’s financial management systems are not reliable.

Material Weakness: Controls over Information Systems Security

Although IRS has made some progress in addressing previous weaknesses we identified in its information systems security controls and physical security controls, these and new weaknesses in information systems security continue to impair IRS’s ability to ensure the confidentiality, integrity, and availability of financial and tax-processing systems. As of January 2009, there were 74 open recommendations from our information systems security work designed to help IRS improve its information systems security controls. Those recommendations are reported separately and are not included in this report primarily because of the sensitive nature of some of the issues.

Weaknesses in control over tax revenue and refunds continue to hamper IRS’s ability to optimize the use of its limited resources to collect unpaid taxes and minimize payment of improper refunds. Specifically, IRS has not (1) developed performance metrics and goals on the cost of, and the revenue collected from, IRS’s various enforcement programs and activities, with the exception of the Earned Income Tax Credit program; or (2) fully established and implemented the financial management structure and processes to provide IRS key financial management data on costs and enforcement tax revenue. These deficiencies inhibit IRS’s ability to appropriately assess and routinely monitor the relative merits of its various enforcement initiatives and adjust its strategies as needed. This, in turn, can significantly affect both the level of enforcement tax revenue collected and improper refunds disbursed.

IRS did not always release the applicable federal tax lien within 30 days of the tax liability being either paid off or abated, as required by the Internal Revenue Code (section 6325). The Internal Revenue Code grants IRS the power to file a lien against the property of any taxpayer who neglects or refuses to pay all assessed federal taxes. The lien serves to protect the interest of the federal government and as a public notice to current and potential creditors of the government’s interest in the taxpayer’s property.

The recommendations listed below pertain to issues that do not rise individually or in the aggregate to the level of a significant deficiency or a material weakness. However, these issues do represent weaknesses in various aspects of IRS’s control environment that should be addressed.

In addition to the contact named above, the following individuals made major contributions to this report: William J. Cordrey, Assistant Director; Ray Bush; Stephanie Chen; Nina Crocker; Oliver Culley; Charles Ego; Doreen Eng; Charles Fox; Valerie Freeman; Ted Hu; Richard Larsen; Delores Lee; Gail Luna; Julie Phillips; John Sawyer; Christopher Spain; Cynthia Teddleton; Lien To; LaDonna Towler; and Gary Wiggins.
In its role as the nation's tax collector, the Internal Revenue Service (IRS) has a demanding responsibility to annually collect trillions of dollars in taxes, process hundreds of millions of tax and information returns, and enforce the nation's tax laws. Since its first audit of IRS's financial statements in fiscal year 1992, GAO has identified a number of weaknesses in IRS's financial management operations. In related reports, GAO has recommended corrective actions to address those weaknesses. Each year, as part of the annual audit of IRS's financial statements, GAO makes recommendations to address any new weaknesses identified and follows up on the status of IRS's efforts to address the weaknesses GAO identified in previous years' audits. The purpose of this report is to (1) provide the status of audit recommendations and actions needed to fully address them and (2) demonstrate how the recommendations relate to control activities central to IRS's mission and goals. IRS has made significant progress in improving its internal controls and financial management since its first financial statement audit in 1992, as evidenced by 9 consecutive years of clean audit opinions on its financial statements, the resolution of several material internal control weaknesses, and actions resulting in the closure of over 200 financial management recommendations. This progress has been the result of hard work throughout IRS and sustained commitment at the top levels of the agency. However, IRS still faces financial management challenges. At the beginning of GAO's audit of IRS's fiscal year 2008 financial statements, 81 financial management-related recommendations from prior audits remained open because IRS had not fully addressed the issues that gave rise to them. During the fiscal year 2008 financial audit, IRS took actions that GAO considered sufficient to close 35. At the same time, GAO identified additional internal control issues resulting in 16 new recommendations. In total, 62 recommendations remain open. To assist IRS in evaluating and improving internal controls, GAO categorized the 62 open recommendations by various internal control activities, which, in turn, were grouped into three broad control categories. The continued existence of internal control weaknesses that gave rise to these recommendations represents a serious obstacle that IRS needs to overcome. Effective implementation of GAO's recommendations can greatly assist IRS in improving its internal controls and achieving sound financial management and can help enable it to more effectively carry out its tax administration responsibilities. Most can be addressed in the short term (the next 2 years). However, a few recommendations, particularly those concerning IRS's automated systems, are complex and will require several more years to effectively address.
Multiple DHS offices, components, and agencies have roles and responsibilities in DHS’s development of CBRN risk assessments, response plans, and capabilities. Specifically:

S&T is responsible for the development of DHS’s CBRN risk assessments—the TRAs and MTAs.

CFO’s PA&E unit is responsible for developing resource allocation decisions for capability investments through DHS’s Planning, Programming, Budgeting and Execution system.

OHA is responsible for leading DHS’s biological and chemical defense activities and provides medical and public health expertise to support the department’s efforts.

OPS is responsible for coordinating DHS’s operational activities for incident response, including for CBRN incidents.

POLICY is responsible for advising the Secretary of Homeland Security in the development of DHS’s policies for CBRN plans and capabilities.

NPPD’s RMA is responsible for leading DHS’s approach to risk management and the application of risk information to departmental activities.

DNDO is responsible for domestic radiological and nuclear detection efforts and integration of federal nuclear forensics programs.

FEMA is responsible for leading the nation’s effort to prepare to respond to emergencies and disasters.

DHS engages in risk management activities to help ensure the nation’s ability to protect against and respond to incidents using CBRN agents. DHS’s Risk Lexicon provides the following definitions for risk-related terms:

Risk—potential for an adverse outcome assessed as a function of threats, vulnerabilities, and consequences associated with an incident, event, or occurrence.

Risk Assessment—product or process which collects information and assigns values to risks for the purpose of informing priorities, developing or comparing courses of action, and informing decision making.

Risk Management—process of identifying, analyzing, assessing, and communicating risk and accepting, avoiding, transferring, or controlling it to an acceptable level considering associated costs and benefits of any actions taken.

The department’s 2010 Quadrennial Homeland Security Review notes the importance of incorporating information from risk assessments into departmental decision making, one aspect of the department’s homeland security risk management process. According to DHS doctrine, risk management applications include the use of risk information to inform, among others, strategic and operational planning and resource decisions. This report focuses on DHS’s use of the third step of its risk management process—risk assessment—and the application of risk assessment results to inform CBRN response plans and capabilities. DHS notes that risk information is usually one of many factors—not necessarily the sole factor—that decision makers consider when deciding which strategies to pursue for managing risk. These additional factors may include strategic and political considerations, among others. See figure 1 for a graphic depiction of DHS’s risk management process.

DHS is responsible for assessing the risks posed by various CBRN agents, as directed by the Project BioShield Act of 2004 and Homeland Security Presidential Directives 10 – Biodefense for the 21st Century, 18 – Medical Countermeasures against Weapons of Mass Destruction, and 22 – National Domestic Chemical Defense. To this end, S&T develops CBRN TRAs and MTAs. Each TRA assesses the relative risks posed by multiple CBRN agents based on variable threats, vulnerabilities, and consequences.
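To make the Risk Lexicon’s definition of risk as a function of threats, vulnerabilities, and consequences more concrete, the short sketch below illustrates, in generic terms, how a relative risk ranking could be computed once each agent is assigned threat, vulnerability, and consequence scores. This is only an illustrative sketch under stated assumptions: the agent names, scores, and the simple multiplicative model are hypothetical and do not represent DHS’s actual TRA or MTA methodology.

```python
# Illustrative sketch only: a generic relative risk ranking built from
# hypothetical threat (T), vulnerability (V), and consequence (C) scores.
# The agent names, scores, and the multiplicative model are assumptions;
# this does not represent DHS's actual TRA or MTA methodology.

agents = {
    "Agent A": {"threat": 0.7, "vulnerability": 0.5, "consequence": 0.9},
    "Agent B": {"threat": 0.4, "vulnerability": 0.8, "consequence": 0.6},
    "Agent C": {"threat": 0.9, "vulnerability": 0.3, "consequence": 0.4},
}

def risk_score(threat, vulnerability, consequence):
    # One common simplification treats risk as the product of T, V, and C.
    return threat * vulnerability * consequence

# Rank agents from highest to lowest relative risk score.
ranked = sorted(
    ((name, risk_score(**scores)) for name, scores in agents.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: relative risk score {score:.2f}")
```

In practice, a ranking like this is only one input among many; as the report notes, DHS treats risk information as one of several factors that decision makers weigh.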
Since 2006, DHS has developed eight TRA reports: Biological Terrorism Risk Assessments (BTRA) in 2006, 2008, and 2010; Chemical Terrorism Risk Assessments (CTRA) in 2008 and 2010; a Radiological/Nuclear Terrorism Risk Assessment (R/NTRA) in 2011; and Integrated CBRN Terrorism Risk Assessments (ITRA) in 2008 and 2011. Each MTA assesses the threat posed by a given, individual CBRN agent or class of agents and the potential number of human exposures in plausible, high-consequence scenarios. Since 2004, DHS has developed 17 MTA reports. DHS uses the MTAs to determine which CBRN agents pose a material threat sufficient to affect national security. The Project BioShield Act describes specific ways in which the MTAs may be used in efforts to procure certain medical countermeasures. However, the various presidential directives note that while the TRAs may be used to inform decision making, they are not specific as to when or how these risk assessments should be used by DHS officials to inform CBRN response planning or capability investments.

We identified CBRN-specific interagency response plans among three families of interagency plans developed by DHS that are designed for responding to CBRN incidents. These families of plans include plans developed under (1) Executive Order 13527 – Establishing Federal Capability for the Timely Provision of Medical Countermeasures Following a Biological Attack (Executive Order 13527) and (2) Homeland Security Presidential Directive 8 Annex I – National Planning (HSPD 8 Annex I), and as a part of (3) the National Response Framework’s (NRF) CBRN-specific incident annexes. Together, these families encompass the 12 CBRN-specific response plans we reviewed.

Since the first DHS CBRN risk assessments were developed in 2004, DHS used the risk assessments to varying degrees to directly or indirectly inform development of 9 of the 12 CBRN-specific response plans we identified. Our analysis showed that 2 of the 12 plans were directly informed by the risk assessments, while DHS officials told us that 7 of the 12 plans were indirectly informed by the risk assessments. However, we could not independently verify this for these 7 plans because DHS officials could not document how the risk assessments influenced information contained in the plans. Three of the response plans were not informed by the risk assessments, according to DHS officials. Our analysis of a limited set of planning assumptions in the plans compared to information contained in the risk assessments showed general consistency between the plans and the risk assessments.

DHS’s guidelines state that response plans should be informed by risk assessment information to supplement risk-related information contained in the National Planning Scenarios (NPS) used for developing emergency response plans. DHS’s 2009 Integrated Planning System (IPS) also identifies risk assessments as one source of information that should be used to inform response plan development. This guidance, however, does not define what it means for response plans to be informed by risk assessments or how planners should use specific types of risk assessments, such as DHS’s TRAs and MTAs, when developing or revising related plans. Of the 12 CBRN response plans developed by DHS that we reviewed, none of the plans were developed solely in response to a given CBRN threat agent being identified as high risk in DHS’s CBRN risk assessments.
Rather, these plans were developed in response to requirements in an executive order and as part of families of plans developed in response to provisions in presidential directives. See table 3 for a list of the CBRN response plans and whether each plan was directly, indirectly, or not informed by DHS’s CBRN risk assessments during its development.

Since 2004, DHS’s use of its CBRN risk assessments to inform its CBRN-specific capability investments has varied, from directly impacting its capabilities to not being used at all. Six of the seven CBRN capabilities we examined were informed by DHS’s CBRN risk assessments to some extent, according to program officials, DHS documents, and our analysis (see table 4). Our analysis showed that one of the seven capabilities was directly informed by the risk assessments, while our analysis showed or DHS officials told us that five of the seven capabilities were partially informed by the risk assessments. However, we could not independently verify this for three of these five capabilities because DHS officials could not document how the risk assessments influenced the capabilities.

DHS has developed policies and guidance on the use of risk information for the department’s activities, but DHS has not issued guidance to program managers that specifies when or how they should use the CBRN risk assessments to inform CBRN capabilities (this is discussed in the third objective of this report). The DHS Strategic Plan for 2008-2013 states that resource decisions should be informed by relevant risk assessments, but does not provide specific guidance on when or how such decisions should be informed by the department’s CBRN risk assessments. Additionally, the Secretary of Homeland Security’s March 2011 Management Directive states that DHS policy is to use risk information and analysis to inform decision making and instructs DHS components to establish mission-appropriate risk management capabilities. See table 4 for a summary of the CBRN capabilities and whether each capability was directly, partially, or not informed by DHS’s CBRN risk assessments.

Our analysis showed that the extent to which the CBRN capabilities we examined were informed by DHS’s CBRN risk assessments varied, but DHS officials described reasons for this variance, as discussed below. In addition, DHS officials noted that basic scientific differences between chemical, biological, and radiological/nuclear threat agents and materials also provide explanations about the differences in how the CBRN risk assessments are used to inform capabilities. For example, DHS officials told us that the relative risk rankings amongst biological agents may be more meaningful than the ranking amongst radiological materials because there are greater differences associated with detecting biological agents, as well as their consequences.

DHS program managers used the risk assessments to partially inform the program management of the RDCDS and the CSAC, as described below. The Director of the CSAC told us that because there are over 13 million possible chemicals that could be considered threat agents, it is impossible to come up with a relative risk ranking of all the chemicals. Therefore, the results of the CTRA are designed to be representative of the highest risk chemical agents and used as a guide—not a definitive resource—for informing capability and planning decisions related to such agents.
Additionally, certain chemical compounds have similar enough compositions to be considered together when developing capabilities and response plans. Rapidly Deployable Chemical Detection System (RDCDS). The OHA Chemical Defense Program used the CTRA to partially inform its RDCDS, according to our analysis as well as DHS officials and documentation. We compared the lists of threat agents that have been programmed to be detected by RDCDS detectors against chemical agents of significant concern in the 2008 and 2010 CTRAs and found that they were generally consistent. The RDCDS program manager told us that the list of threat agents monitored by RDCDS has not changed since 2005 as DHS developed the first and second iterations of the CTRA. However, the program manager told us that when the first CTRA was issued in 2008, program officials reviewed its content to determine whether chemical agents of significant concern in the CTRA were aligned with the chemicals detected by RDCDS. The official said that based on this initial assessment, the RDCDS was generally aligned with the chemical agents of greatest concern. Chemical Security Analysis Center (CSAC). The Director of the CSAC told us that because a key CSAC mission is to develop the CTRA and the chemical MTAs, these risk assessments are used—to varying extents—to inform the other capabilities that the CSAC maintains. Other capabilities include providing 24/7 technical assistance to other DHS components that encounter possible chemical attack situations, such as the National Operations Center. According to the CSAC Director, information from the CTRA is included in the knowledge management system that is used in maintaining this technical assistance capability. However, the CSAC could not provide us with documentation of how it had used the CBRN risk assessments to inform this capability. Further, the CSAC Director told us that the CTRA also informs the CSAC’s work in developing models for other DHS components on the effects of certain chemical incidents. We reviewed a CSAC study for the DHS Transportation Security Administration about the release of chemical gases into the atmosphere and found that in the study the CSAC had modeled releases of two different chemical agents, both of which are among the chemical agents of significant concern in the CTRA. We analyzed the extent to which DHS officials used the BTRA and the biological MTAs to inform the program management of three capabilities—BioWatch, the NBFAC, and the NBIC. Our analysis showed that DHS program managers used the risk assessments to either directly inform (BioWatch) or partially inform (NBFAC and NBIC) their decisions, as described below. The program manager of BioWatch told us that it makes sense for the program to use the most reliable tool available to them—in this case, the CBRN risk assessments—to determine what agents to program into their detection system. The director of the NBFAC told us that the BTRA was used on one occasion to directly inform program management and prioritization. The NBIC branch chief told us that the BTRA is used at a strategic level and that the center’s staff is very familiar with the contents of the BTRA and the biological MTAs. However, he stated that the NBIC’s mission is to monitor detection efforts for all biological agents, particularly emerging infectious diseases, and to provide alerts about potentially dangerous biological incidents to state and local homeland security professionals. 
Therefore, the NBIC branch chief said that the relative risk ranking of a given biological agent would not be an appropriate basis for the prioritization of resources at the operational level. BioWatch. We found that the BioWatch program was generally consistent with the biological agents of significant concern identified in the BTRA. DHS documents state that the BTRA is to be used to update the list of threat agents monitored by BioWatch. DHS deployed BioWatch in 2003, before the release of the first BTRA in 2006. Since then, DHS has reprogrammed BioWatch detection efforts once, in response to the 2006 BTRA. The BioWatch program manager told us that they review each iteration of the BTRA to ensure that the BioWatch program is aligned with the biological agents of significant concern. We compared the lists of threat agents that have been programmed to be detected by the BioWatch program since 2006 against the biological agents of significant concern in the 2006, 2008, and 2010 BTRAs and found them generally consistent. The BioWatch program manager also told us that future generations of BioWatch are being developed to detect a larger number of biological threat agents. According to BioWatch documents, these agents are to be determined by the BTRA’s risk rankings. OHA officials told us they use the BTRA to inform BioWatch because it is the most relevant CBRN risk assessment available to them and because it allows OHA to focus BioWatch detection efforts on the biological agents of significant concern. National Bioforensic Analysis Center (NBFAC). We found that the NBFAC used the CBRN risk assessments to partially inform its capabilities. Officials from NBFAC told us that the center used information from the BTRA to inform its priorities for developing tools needed to support their work in biological forensic attribution. Our analysis showed that the NBFAC’s forensic attribution capabilities were generally consistent with the biological agents of significant concern in the BTRA. However, NBFAC officials stated that because the NBFAC is mandated to maintain capabilities for other biological materials, including biological agents that are not considered high risk, future BTRA results would not necessarily lead to reprioritization of NBFAC’s attribution capability development efforts. National Biosurveillance Integration Center (NBIC). We found that the NBIC used the CBRN risk assessments to partially inform its activities. According to the OHA branch chief responsible for the NBIC, NBIC personnel are aware of the information in DHS’s CBRN risk assessments and consider this information at a strategic level. However, the NBIC could not provide us with documentation of how it had used the CBRN risk assessments to inform its capabilities at the strategic level. The NBIC branch chief also stated that NBIC does not use information from the BTRA or biological MTAs at an operational level to inform the management of their capability. The official provided documentation showing that the NBIC’s mission is to collect and integrate information about biological agent detection from a variety of federal government detection systems. 
The OHA branch chief stated that because the NBIC’s mission is to integrate and provide alerts on all biological agents, including emerging and infectious diseases that are not included in the CBRN risk assessments, it is not relevant whether the biological agents the NBIC is monitoring are considered to be high risk according to the BTRA or the MTAs, although these agents are also monitored. Our analysis showed that DHS program managers’ use of the risk assessments to inform radiological and nuclear capabilities varied from partially informing their decisions to not informing their decision at all. Officials from the NTNFC told us that because of the relatively small universe of radiological and nuclear materials, the risk rankings among these materials did not matter as much as the relative threat and consequence information for biological and chemical agents. Additionally, DHS officials told us the challenges that first responders would face in responding to a nuclear explosion in a city may be a more important concern than the type of threat material used in such an attack. Nuclear Incident Response Teams (NIRTs). FEMA officials told us that information from the R/NTRA appendix to the 2008 ITRA partially informed the program management of the NIRTs and FEMA’s other nuclear response capabilities. Specifically, FEMA officials said that information in the R/NTRA appendix, among other sources, was used to inform FEMA’s IND Response and Recovery Program. Starting in 2010, FEMA officials said that NIRT-related activities were aligned with the IND Response and Recovery Program. FEMA officials told us that because of this, their management of the NIRTs is partially informed, by extension, by the R/NTRA appendix. However, FEMA could not provide us with documentation of how it specifically had used the R/NTRA appendix to inform the NIRTs. National Technical Nuclear Forensics Center (NTNFC). An NTNFC official told us that the NTNFC did not use information contained in the R/NTRA appendix to the 2008 ITRA, and the NTNFC does not intend to use the 2011 R/NTRA (once published) to inform its activities. The same official told us that these CBRN risk assessments do not provide useful information to inform NTNFC activities because nuclear forensic capabilities are developed for all radiological and nuclear materials, regardless of their relative risk. Further, he stated that NTNFC is already aware of the universe of possible radiological and nuclear materials that could be used to attack the nation. DHS S&T’s Chief Medical and Science Advisor, the official who oversees the development of DHS’s CBRN risk assessments, agreed that NTNFC’s capabilities need to be able to identify all radiological and nuclear materials, and that therefore the CBRN risk assessments were not relevant for NTNFC’s efforts. DHS policy states that DHS components should use risk assessment information to inform planning and capability investment decisions, but DHS has not established specific guidance, such as written procedures, that details when and how DHS components should consider using the department’s CBRN-specific risk assessments to inform such activities. According to the National Strategy for Homeland Security of 2007, the assessment and management of risk underlies all homeland security activities, including decisions about when, where, and how to invest in resources—including planning and capabilities—that eliminate, control, or mitigate risks. 
The TRAs and MTAs—the department’s most CBRN-specific risk assessments—were used, to varying extents, to inform 9 of the 12 response plans and 6 of the 7 capabilities we analyzed, and how the risk assessments were used to inform these plans and capabilities varied. DHS officials told us that while DHS policy calls for the use of risk information to inform the department’s activities, no DHS guidance specifically requires DHS officials to use the TRAs and MTAs for CBRN planning and capability investments or explains how officials should use the risk assessments to inform their decision making. As a result, the CBRN risk assessments were used to varying extents and in varying ways by DHS components for the plans and capabilities we analyzed. DHS officials said that they considered the risk assessments but chose not to use them to inform one of the plans and one of the capabilities we reviewed because they were not useful for the plan or the capability. In addition, the risk assessments were not considered at all for two of the plans we reviewed.

Since at least 2007, DHS has emphasized the need to incorporate risk information derived from risk assessments into departmental activities, and since 2009 DHS has issued a range of guidance—including an interim framework, a policy memo, a management directive, and a doctrine—on the use of such risk information. Specifically, DHS’s Interim Integrated Risk Management Framework of January 2009 identified risk assessments as a fundamental information source for risk-informed decision making and noted that the BTRA and CTRA are examples of risk assessments produced by the department that can be used to inform risk management efforts. In May 2010, the Secretary of Homeland Security issued a policy memo that requires, among other things, the use of risk assessments to inform decision making and the establishment of mechanisms for sharing risk assessments with relevant stakeholders. In March 2011, as called for in the Secretary’s memo, DHS issued a management directive on integrated risk management at the department. This management directive, among other things, tasks the Director of the Office of Risk Management and Analysis (RMA) within DHS’s NPPD with establishing a system to facilitate the sharing of risk analysis and data across the department. Further, in April 2011, DHS issued its doctrine for risk management—titled Risk Management Fundamentals—the first in a series of publications that RMA plans to issue to provide a structured approach for the distribution and employment of risk information and analysis efforts across the department. DHS’s existing guidance on risk management generally identifies the importance of using risk assessments to inform departmental decision making, but it does not specifically address when and how particular risk assessments—including the TRAs and MTAs—should be considered for use by departmental entities for planning and capability investment purposes. DHS officials stated that more specific guidance has not been developed by the department or its components and agencies because they were not required to do so.
However, Standards for Internal Control in the Federal Government state that officials should take actions, such as establishing written procedures, to help ensure that management’s directives are carried out. In addition, DHS’s Interim Integrated Risk Management Framework of January 2009 stated that DHS must establish processes that make risk information available among the department and its components and agencies when and where it is needed, noting that the ability to receive and provide meaningful and usable risk information in a timely manner requires well-coordinated and established processes.

While DHS has issued guidance that generally states that risk assessments should be used to inform departmental activities, DHS could better help to ensure that its relevant CBRN-specific risk assessments—the TRAs and MTAs—are considered for use in informing CBRN-specific planning and capability investments if more specific guidance requiring such consideration is established. DHS officials also stated that establishing written procedures for such consideration could better help to ensure that officials responsible for CBRN response planning and capability investment decision making consider the CBRN risk assessments as a means to obtain current risk information for specific CBRN threat agents. This information could be used to inform the planning assumptions that CBRN response plans are designed to address, as well as the requirements development process for CBRN capabilities. In addition, DHS officials noted that the lack of written procedures requiring DHS officials to consider using the TRAs and MTAs to inform DHS’s CBRN plans and capabilities could negatively affect the likelihood that future DHS officials consider using the risk assessments when planning and making investment decisions. By establishing more specific guidance that details when and how DHS components should consider using the TRAs and MTAs to inform CBRN plans and capabilities, DHS would be better positioned to ensure that officials consider and, as appropriate, incorporate the department’s most detailed CBRN-specific risk information. As a result, DHS would be better positioned to ensure that its CBRN response plans and capabilities align with the assumptions and results contained within the TRAs and MTAs.

The anthrax attacks of 2001 raised concerns that the United States is vulnerable to terrorist attacks using CBRN agents. Since 2001, DHS has developed a range of CBRN risk assessments, response plans, and related capabilities to prepare for such attacks. DHS has spent at least $70 million developing these risk assessments. Using its CBRN risk assessments to help inform CBRN response planning and capability investments is consistent with DHS policy and could help to better ensure that relevant information contained in the risk assessments is used to inform such plans and capabilities. Further, given that there are thousands of CBRN agents that could potentially pose a risk to the nation, in an era of declining federal budgets and constrained resources the federal government must ensure that it is focusing its limited resources on preparing to respond to the highest risk agents. Without procedures for using the risk assessments to inform capability investment decision making, use of the assessments for such decisions may continue to vary or not occur at all.
More specific guidance on when and how DHS officials should consider using the department’s CBRN risk assessments to inform planning and investments could better help to ensure their consistent use and that this use is sustained beyond the tenure of any given agency official.

To better ensure the consistent use of DHS’s CBRN risk assessments at the department’s components and agencies, we recommend that the Secretary of Homeland Security establish more specific guidance, including written procedures, that details when and how DHS components should consider using the department’s CBRN risk assessments to inform related response plan and capability investment decision making.

We received written comments on the draft report, which are reproduced in full in appendix I. DHS also provided technical comments, which were incorporated as appropriate. DHS concurred with the basis for the recommendation and discussed an action that S&T—which is responsible for developing the department’s CBRN risk assessments—plans to take to address the recommendation. Specifically, DHS noted that it is currently developing user guidelines for its CBRN risk assessments. In addition, DHS stated that S&T is committed to continuing to work with relevant stakeholders to ensure that its risk assessments are useful for informing response planning and capability investment decision making.

We are sending copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-8777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

In addition to the contact named above, Edward George (Assistant Director), David Lysy (Analyst-in-Charge), David Schneider, Bonnie Doty, David Alexander, Tracey King, and Katherine Davis made key contributions to this report.

National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011.

Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011.

Measuring Disaster Preparedness: FEMA Has Made Limited Progress in Assessing National Capabilities. GAO-11-260T. Washington, D.C.: March 17, 2011.

Biosurveillance: Efforts to Develop a National BioSurveillance Capability Need a National Strategy and a Designated Leader. GAO-10-645. Washington, D.C.: June 30, 2010.

Homeland Defense: DOD Can Enhance Efforts to Identify Capabilities to Support Civil Authorities during Disasters. GAO-10-386. Washington, D.C.: March 30, 2010.

Combating Nuclear Terrorism: Actions Needed to Better Prepare to Recover from Possible Attacks Using Radiological or Nuclear Materials. GAO-10-204. Washington, D.C.: January 29, 2010.

Biosurveillance: Developing a Collaboration Strategy Is Essential to Fostering Interagency Data and Resource Sharing. GAO-10-171. Washington, D.C.: December 18, 2009.

Homeland Defense: Planning, Resourcing, and Training Issues Challenge DOD’s Response to Domestic Chemical, Biological, Radiological, Nuclear, and High-Yield Explosive Incidents. GAO-10-123. Washington, D.C.: October 7, 2009.

Homeland Defense: Preliminary Observations on Defense Chemical, Biological, Radiological, Nuclear, and High-Yield Explosives Consequence Management Plans and Preparedness. GAO-09-927T. Washington, D.C.: July 28, 2009.

Project BioShield Act: HHS Has Supported Development, Procurement, and Emergency Use of Medical Countermeasures to Address Health Threats. GAO-09-878R. Washington, D.C.: July 24, 2009.

Project BioShield: HHS Can Improve Agency Internal Controls for Its New Contracting Authorities. GAO-09-820. Washington, D.C.: July 21, 2009.

National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009.

Homeland Security: First Responders’ Ability to Detect and Model Hazardous Releases in Urban Areas is Significantly Limited. GAO-08-180. Washington, D.C.: June 27, 2008.

Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008.

Emergency Management: Observations on DHS’s Preparedness for Catastrophic Disasters. GAO-08-868T. Washington, D.C.: June 11, 2008.

Highlights of a Forum: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-627SP. Washington, D.C.: April 15, 2008.

Project BioShield: Actions Needed to Avoid Repeating Past Problems with Procuring New Anthrax Vaccine and Managing the Stockpile of Licensed Vaccine. GAO-08-88. Washington, D.C.: October 23, 2007.

Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007.

Chemical and Biological Defense: Management Actions Are Needed to Close the Gap Between Army Chemical Unit Preparedness and State National Priorities. GAO-07-143. Washington, D.C.: January 19, 2007.

Risk Management: Further Refinements Needed to Assess Risk and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.

Internal Control: Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1, 1999.
The 2001 anthrax attacks in the United States highlighted the need to develop response plans and capabilities to protect U.S. citizens from chemical, biological, radiological, and nuclear (CBRN) agents. Since 2004, the Department of Homeland Security (DHS) has spent at least $70 million developing more than 20 CBRN risk assessments. GAO was requested to assess, from fiscal year 2004 to the present, the extent to which DHS has used its CBRN risk assessments to inform CBRN response plans and CBRN capabilities, and has institutionalized their use. GAO examined relevant laws, Homeland Security Presidential Directives, an Executive Order, DHS guidance, and all 12 relevant interagency CBRN response plans developed by DHS. Based on a review of a United States governmentwide CBRN database and DHS interviews, among other things, GAO selected a nongeneralizable set of seven DHS capabilities used specifically for detecting or responding to CBRN incidents to identify examples of DHS’s use of its CBRN risk assessments. GAO also interviewed relevant DHS officials. This is a public version of a classified report that GAO issued in October 2011. Information DHS deemed sensitive or classified has been redacted. Since 2004, DHS’s use of its CBRN risk assessments to inform its CBRN response plans has varied, from directly influencing information in the plans to not being used at all. DHS guidance states that response planning and resource decisions should be informed by risk information. GAO’s analysis showed that DHS used its CBRN risk assessments to directly inform 2 of 12 CBRN response plans GAO identified because planners considered the risk assessments to be more accurate than earlier DHS planning assumptions. For another 7 of the 12 plans, DHS officials said that the assessments indirectly informed the plans by providing background information prior to plan development. However, GAO could not independently verify this because DHS officials could not document how the risk assessments influenced the information contained in the plans. GAO’s analysis found general consistency between the risk assessments and the plans. For the remaining 3 plans, DHS officials did not use the risk assessments to inform the plans; for 2 of the 3 plans DHS officials told GAO they were not aware of the assessments. DHS officials also noted that there was no departmental guidance on when or how the CBRN risk assessments should be used when developing such plans. Since 2004, DHS’s use of its CBRN risk assessments to inform its CBRN-specific capabilities has varied, from directly impacting its capabilities to not being used at all. Of the 7 capabilities GAO reviewed, one was directly informed by the risk assessments; DHS used its biological agent risk assessments to confirm that its BioWatch program could generally detect the biological agents posing the greatest risk. For 5 of the 7 capabilities, DHS officials said they used the risk assessments along with other information sources to partially inform these capabilities. For example, DHS used its chemical agent risk assessments to determine whether its chemical detectors and the risk assessments were generally aligned for the highest risk agents. For 3 of these 5 capabilities, GAO could not independently verify that they were partially informed by the risk assessments because DHS officials could not document how the risk assessments influenced the capabilities. 
DHS did not use its CBRN risk assessments to inform the remaining CBRN capability because the assessments were not needed to meet the capability’s mission. DHS and its components do not have written procedures to institutionalize their use of DHS’s CBRN risk assessments for CBRN response planning and capability investment decisions. Standards for internal control in the federal government call for written procedures to better ensure management’s directives are enforced. DHS does not have procedures that stipulate when and how DHS officials should use the department’s CBRN risk assessments to inform CBRN response planning and capability investment decisions, and GAO found variation in the extent to which they were used. DHS officials agree with GAO that without written procedures, the consistent use of the department’s CBRN risk assessments by DHS officials may not be ensured beyond the tenure of any given agency official. DHS could better help to ensure that its CBRN response plans and capabilities are consistently informed by the department’s CBRN risk assessments by establishing written procedures detailing when and how DHS officials should consider using the risk assessments to inform their activities. GAO recommends that DHS develop more specific guidance, including written procedures, that details when and how DHS components should consider using the department’s CBRN risk assessments to inform related response planning efforts and capability investment decision making. DHS agreed with the recommendation.
Strategic human capital management is a pervasive challenge facing the federal government. In January 2001, and again in January 2003, we identified strategic human capital management as a governmentwide high-risk area after finding that the lack of attention to strategic human capital planning had created a risk to the federal government's ability to serve the American people effectively. As our previous reports have made clear, the widespread lack of attention to strategic human capital management in the past has created a fundamental weakness in the federal government's ability to perform its missions economically, efficiently, and effectively. In the wake of extensive downsizing during the early 1990s, done largely without sufficient consideration of the strategic consequences, agencies are experiencing significant challenges in deploying the right skills, in the right places, at the right time. Agencies are also facing a growing number of employees who are eligible for retirement and are finding it difficult to fill certain mission-critical jobs, a situation that could significantly drain agencies' institutional knowledge. Other factors such as emerging security threats, rapidly evolving technology, and dramatic shifts in the age and composition of the overall population exacerbate the problem. Such factors increase the need for agencies to engage in strategic workforce planning to transform their workforces so that they will be effective in the 21st century.

There are a variety of models of how federal agencies can conduct workforce planning. For example, in 1999 OPM published a five-step model that suggests agencies define their strategic direction, assess their current and future workforces, and develop and implement action plans for closing identified gaps in future workforce needs. Since then, NAPA and the International Personnel Management Association (IPMA) have reported on workforce models used by federal, state, and local governments and industry, and developed their own generic models. Comparing these models, NAPA and IPMA found that the following four steps are generally common to strategic workforce planning efforts: examining future organizational, environmental, and other issues that may affect the agency's ability to attain its strategic goals; determining skills and competencies needed in the future workforce to meet the organization's goals and identifying gaps in skills and competencies that an organization needs to address; selecting and implementing human capital strategies that are targeted toward addressing these gaps and issues; and evaluating the success of the human capital strategies. However, they also reported that federal agencies often implement these steps differently and focus on a variety of issues based on their particular circumstances when preparing their strategic workforce plans. For example, faced with a long lead time to train employees hired to replace those retiring and an increasing workload, SSA focuses a large part of its workforce planning effort on estimating and managing retirements. Unlike SSA, PBGC officials faced a future workload that could rise or fall sharply. Consequently, PBGC focused its November 2002 workforce plan on identifying skills to manage the combined efforts of federal staff and contractors to address a volatile workload.
Planning, developing, and implementing workforce planning strategies, such as those that involve reshaping the current workforce through early separations, managed attrition, or increased hiring, can cause significant changes in how an agency implements its policies and programs. Our work on the human capital experiences of leading organizations as well as organizations that are undergoing major mergers and transformations has identified numerous lessons that can help federal agencies successfully implement strategic workforce planning strategies. These lessons include the following:

Ensure that top management sets the overall direction and goals of workforce planning. Top leadership that is clearly and personally involved in strategic workforce planning provides the organizational vision that is important in times of change; can help provide stability as the workforce plan is being developed and implemented; and provides a cadre of champions within the agency, including both political and career executives, to ensure that planning strategies are thoroughly implemented and sustained over time. It can also help integrate workforce planning efforts with other key management planning efforts, such as succession planning and information technology or financial management reforms, to ensure that such initiatives work together to achieve the agency's goals. For example, we have reported that to be effective, succession planning needs the support and commitment of an organization's top leadership. In other countries, government agencies' top leadership (1) actively participates in the succession planning and management programs; (2) regularly uses these programs to develop, place, and promote individuals; and (3) ensures that these programs receive sufficient financial and staff resources and are maintained over time.

Involve employees and other stakeholders in developing and implementing future workforce strategies. Agency managers, supervisors, employees, and employee unions need to work together to ensure that the entire agency understands the need for and benefits of changes described in the strategic workforce plan so that the agency can develop clear and transparent policies and procedures to implement the plan's human capital strategies. Involving employees and other stakeholders on strategic workforce planning teams can develop new synergies that identify ways to streamline processes and improve human capital strategies and help the agency recognize and deal with the potential impact that the organization's culture—the underlying assumptions, beliefs, values, attitudes, and expectations generally shared by an organization's members—can have on the implementation of such improvements. Strategies that recognize how changes may challenge the existing culture, and that include appropriate steps to deal with potential problems, are more likely to succeed than those that do not.

Establish a communication strategy to create shared expectations, promote transparency, and report progress. A communication strategy is especially crucial in the public sector, where a full range of stakeholders and interested parties are concerned not only with what human capital and programmatic results will be achieved by a plan, but also with the processes that are to be used to achieve those results.
For example, if a workforce plan calls for employing strategies that have not been extensively used before, such as recruitment bonuses, employees may be concerned about whether the processes will be followed consistently and fairly. In general, communication about the goals, approach, and results of strategic workforce planning is most effective when it is done early, clearly, and often and when it flows downward, upward, and laterally. Figure 2 describes how PBGC adopted several of these lessons during its recent workforce planning efforts.

It is essential that agencies determine the skills and competencies that are critical to successfully achieving their missions and goals. This is especially important as changes in national security, technology, budget constraints, and other factors change the environment within which federal agencies operate. For example, as discussed in our July 2003 report on the Department of Homeland Security's (DHS) international cargo container programs, DHS Customs officials have developed two new programs for increasing the security of such cargo that require recruiting and training about 270 staff to work with their foreign counterparts at more than 40 international ports and international shipping companies. To fully implement the new security programs, DHS expects to recruit and train candidates with diplomatic, language, and risk assessment (targeting) skills for 2- to 3-year permanent assignments at foreign ports. We reported that because some of these ports are in countries that our government considers hardship assignments, DHS faces a daunting challenge in attracting U.S. personnel with the necessary skills for these assignments. We recommended, among other improvements, that DHS develop human capital plans that clearly describe how the cargo security programs will meet the programs' long-term demands for skilled staff. DHS officials agreed to develop human capital plans to better ensure the programs' long-term success. We have reported on similar human capital challenges at other agencies. For example, in June 2003, we testified that the Securities and Exchange Commission (SEC) had failed to fill most of the new staff positions it needed to examine recent high-profile corporate failures and accounting scandals. In our June 2002 report on the Federal Energy Regulatory Commission (FERC), we stated that the increasingly competitive nature of the natural gas and electricity markets made it critical that FERC have more staff members knowledgeable about how the energy markets work and how to regulate these markets effectively. However, FERC did not have a strategic human capital management plan to guide its efforts to transform its workforce and had not taken full advantage of the personnel flexibilities and tools available to federal agencies in addressing its human capital challenges. In April 2002, we found that the individual federal trade agencies responsible for negotiating, monitoring, and enforcing U.S. trade agreements lacked sufficient staff members with the expertise to perform the necessary economic, technical, and legal analyses for the new agreements. The agencies collectively did not have sufficient expertise to adequately complete these analyses and faced problems with recruitment and high turnover rates. The scope of agencies' efforts to identify the skills and competencies needed for their future workforces varies considerably, depending on the needs and interests of a particular agency.
Whereas some agencies may decide to define all the skills and competencies needed to achieve their strategic goals, others may elect to focus their analysis on only those most critical to achieving their goals. The most important consideration is that the skills and competencies identified are clearly linked to the agency’s mission and long-term goals developed jointly with key congressional and other stakeholders during the strategic planning process. If an agency identifies staff needs without linking the needs to strategic goals, or if the agency has not obtained agreement from key stakeholders on the goals, the needs assessment may be incomplete and premature. Agencies can use various approaches for making a fact-based determination of the critical human capital skills and competencies needed for the future. For example, PBGC collected qualitative information from interviews with agency executives and managers on the factors influencing the agency’s capability to acquire, develop, and retain critical skills and competencies. Another approach, used by the Department of the Army, is to collect extensive information from employee surveys on education, training, and other factors that may influence employees’ skills. Information on attrition rates and projected retirement rates, fluctuations in workload, and geographic and demographic trends can also be useful. When estimating the number of employees needed with specific skills and competencies, it is also important to consider opportunities for reshaping the workforce by reengineering current work processes, sharing work among offices within the agency and with other agencies that have similar missions, and competitive sourcing. (See fig. 3 for information on NHGRI’s approach for determining critical skills and competencies needed to achieve its strategic goals.) Scenario planning is an approach that agencies have used to manage risks of planning for future human capital needs in a changing environment. As discussed in our April 2003 report on agencies’ efforts to integrate human capital strategies with their mission-oriented efforts, scenarios can describe different future environments that agencies may face. For example, after the terrorist attacks of September 11, 2001, and during the creation and implementation of DHS, senior U.S. Coast Guard officials reexamined five long-term scenarios developed in 1999 to describe different environments that could exist in the year 2020. In 1999, these scenarios had been the basis for agency leaders and planners to create operational and human capital strategies that they thought would work well for the U.S. Coast Guard in each independent scenario. After September 11, 2001, agency officials reviewed the scenarios to determine whether additional scenarios were needed in light of the attacks and decided to (1) create new long-term scenarios to guide planning beyond 2005 and (2) generate two scenarios with an 18-month horizon to guide short-term operational and human capital planning. Similarly, to prepare its 2002 strategic workforce plan, PBGC used scenario analysis to determine how the scope and volume of its activities might change in the next 5 years. The strategic workforce plans these organizations developed identify gaps in workforce skills or competencies that they need to fill to meet the likely scenarios rather than planning to meet the needs of a single view of the future. U.S. 
Coast Guard and PBGC managers believe that by using multiple scenarios they gain flexibility in determining future workforce requirements.

Our March 2002 strategic human capital model stressed the importance of agencies developing human capital strategies—the programs, policies, and processes that agencies use to build and manage their workforces—that are tailored to their unique needs. Applying this to strategic workforce planning means that agencies (1) develop hiring, training, staff development, succession planning, performance management, use of flexibilities, and other human capital strategies and tools that can be implemented with the resources that can be reasonably expected to be available and (2) consider how these strategies can be aligned to eliminate the gaps they have identified between the current and future skills and competencies needed for mission success and to improve the contribution of those critical skills and competencies. For example, we reported that to manage the succession of their executives and other key employees, agencies in Australia, Canada, New Zealand, and the United Kingdom are implementing succession planning and management practices that protect and enhance organizational capacity. Specifically, their initiatives identify high-potential employees from multiple organizational levels early in their careers as well as identify and develop successors for employees with critical knowledge and skills. In addition, because they are facing challenges in the demographic makeup and diversity of their senior executives, agencies in other countries use succession planning and management to achieve a more diverse workforce, maintain their leadership capacity as their senior executives retire, and increase the retention of high-potential staff. Also, in June 2003, we testified that although the Federal Bureau of Investigation (FBI) has taken some steps to address short-term human capital needs related to implementing its changed priorities, as well as steps toward completing a framework for a revised strategic plan, it has not completed a strategic human capital plan. We observed that the FBI should build a more long-term approach to human capital by completing a strategic human capital plan that outlines, among other things, the results of a data-driven assessment of its needs for critical skills and competencies. Such an analysis could become the basis for FBI officials deciding how to maximize the use of available human capital flexibilities as a strategy for recruiting and retaining agents with critical skills, intelligence analysts, and other critically needed staff. Our 2002 strategic human capital model identifies aspects of human capital management that enable agencies to maximize their employees' contributions, such as (1) the continuing attention of senior leaders and managers to valuing and investing in their employees; (2) an investment in human capital approaches that acquires, develops, and retains the best employees; and (3) the use of performance management systems that elicit the best results-oriented performance from the staff, and indicators to measure the effectiveness of human capital approaches. Before beginning to develop specific workforce strategies, an agency can assess these aspects of its human capital approach, using OPM's Human Capital Assessment and Accountability Framework, which OPM developed in conjunction with the Office of Management and Budget (OMB) and GAO; our model; and other tools.
The results will help agencies develop a sense of the obstacles and opportunities that may occur in meeting their critical workforce needs. For example, an agency that attempts to develop creative and innovative strategies will have a difficult time implementing the strategies if its assessment concludes that its overall human capital approach (1) does not effectively value people as assets whose value can be enhanced and (2) is not results oriented. Much of the authority that agencies' leaders need to tailor human capital strategies to their unique needs is already available under current laws and regulations. Therefore, in setting goals for their human capital programs and developing the tailored workforce planning strategies to achieve these goals, it is important for agencies to identify and make use of all the appropriate administrative authorities to build and maintain the workforce needed for the future. As our December 2002 report states, this will involve agencies reexamining the flexibilities provided to them under current authorities, and identifying existing flexibilities that they could use more extensively, to develop workforce planning strategies. These flexibilities may include providing early separation and early retirement incentives authorized by the Homeland Security Act of 2002, recruitment and retention bonuses and allowances, alternative work schedules, and special hiring authorities to recruit employees with critical skills. (See fig. 4 for information on DOL's use of flexibilities to recruit individuals with business skills.) In a December 2002 report, we identified key practices that agencies need to employ to effectively take advantage of existing and new human capital authorities. Two of these practices—ensuring that the use of flexibilities is part of an overall human capital strategy and ensuring stakeholder input in developing flexibilities-related policies and procedures—are intrinsic to effective workforce planning and have already been discussed. However, as agencies plan how to implement specific workforce strategies that include flexibilities, it is important that they also consider other practices that are important to the effective use of flexibilities. These include the following:

Educate managers and employees on the availability and use of flexibilities. Managers and supervisors can be more effective in using human capital strategies that involve new flexibilities, such as recruitment bonuses, if they are properly trained to identify when they can be used and how to use the agency's processes for ensuring consistency, equity, and transparency. To avoid confusion and misunderstandings, it is also important to educate employees about how the agency uses human capital flexibilities and about employee rights under policies and procedures related to human capital.

Streamline and improve administrative processes. It is important that agencies streamline administrative processes for using flexibilities and review self-imposed constraints that may be excessively process oriented. Although sufficient controls are important to ensure consistency and fairness, agency officials developing a workforce strategy that uses flexibilities should look for instances in which processes can be reengineered.

Build transparency and accountability into the system. Establishing clear and transparent guidelines for using specific flexibilities, and holding managers and supervisors accountable for their fair and effective use, are essential to successfully implementing workforce strategies.
Guidelines can be used to (1) provide well-defined and documented decision-making criteria for using flexibilities and help ensure that they are consistently applied and (2) minimize managers’ and supervisors’ potential reluctance to use flexibilities by addressing their concerns that without guidelines, employees may see them as unfairly applying the flexibilities. An agency can also use a results-oriented performance management system to reinforce managers’ accountability for implementing human capital strategies. In October 2000, OPM amended regulations to require agencies to, among other things, appraise executive performance by balancing organizational results with areas such as employee perspective. We reported on selected agencies’ implementation of a set of balanced performance expectations for senior executives and identified examples of executives’ expectations. Examples of these performance expectations were to “help attract and retain well-qualified employees” and “ensure workforce has skills aligned with the agency’s objectives.” (See fig. 5 for information on GSA Region 3’s efforts to build the capacity to support its workforce strategies.) High-performing organizations recognize the fundamental importance of measuring both the outcomes of human capital strategies and how these outcomes have helped the organizations accomplish their missions and programmatic goals. Performance measures, appropriately designed, can be used to gauge two types of success: (1) progress toward reaching human capital goals and (2) the contribution of human capital activities toward achieving programmatic goals. Identifying both types of measures, and discussing how the agency will use these measures to evaluate the strategies before it starts to implement the strategies, helps agency officials think through the scope, timing, and possible barriers to evaluating the workforce plan. Periodic measurement of an agency’s progress toward human capital goals and the extent that human capital activities contributed to achieving programmatic goals provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. For example, a workforce plan can include measures that indicate whether the agency executed its hiring, training, or retention strategies as intended and achieved the goals for these strategies, and how these initiatives changed the workforce’s skills and competencies. It can also include additional measures that address whether the agency achieved its program goals and the link between human capital and program results. An agency’s evaluation of its progress implementing human capital strategies would use the first set of measures to determine if the agency met its human capital goals and identify the reasons for any shortfalls, such as whether the agency’s implementation plan adequately considered possible barriers to achieving the goals, established effective checkpoints to allow necessary adjustments to the strategy, and assigned people with sufficient authority and resources. Further evaluation may determine that although the agency achieved its workforce goals, its human capital efforts neither significantly helped nor hindered the agency from reaching its programmatic goals. 
This could occur if an agency misjudged the relationship between human capital and programmatic goals when developing workforce plans and consequently has mistakenly estimated the magnitude of changes in human capital strategies that were needed to achieve program goals. These results could lead to the agency revising its human capital goals to better reflect their relationship to programmatic goals, redesigning programmatic strategies, and possibly shifting resources among human capital initiatives during the next planning cycle. Developing meaningful outcome-oriented performance goals and collecting performance data to measure achievement of these goals is a major challenge for many federal agencies. Performance measurement tends to focus on regularly collected data available on direct products and services provided by a program, such as the number of staff trained to carry out an activity. In cases where outcomes are not quickly achieved or readily observed, such as assessing the impact a training program has on achieving an agency’s goals, performance measurement is more complex. Federal agencies in general have experienced difficulties in defining practical and meaningful measures that assess the impact human capital strategies have on programmatic results. For example, in its fiscal year 2003 performance plan, the Federal Emergency Management Agency identified goals of streamlining its organization and developing its workforce, but listed no measures to gauge progress for either goal. In contrast, the Environmental Protection Agency’s (EPA) fiscal year 2003 performance plan includes measures of the agency’s efforts to achieve activity-oriented human capital goals, such as implementing a workforce planning model at five offices by the end of the year and completing a comprehensive pay review. These performance measures provide a base upon which EPA can seek to gauge how well its human capital efforts help the agency to achieve its programmatic goals. The challenge faced by EPA and other agencies in using such measures is that there is not always a clear link between specific human capital strategies and strategic programmatic outcomes. This is partly because there may be multiple causes of a specific outcome, only one of which is related to a targeted human capital strategy, and unforeseen circumstances that affect implementation of a strategy. Our recent testimony on human capital challenges at the SEC and key trade agencies illustrates the practical difficulties that agencies may encounter. We testified that during 2001, U.S. trade agencies increased staff levels to address insufficient monitoring and enforcement of trade agreements. However, we noted that measuring the effectiveness of this strategy might be difficult because the agencies’ workloads in other areas continue to grow, which could cause them to shift resources intended for trade compliance to other program areas. If shifts in resources occur, the agencies may not be able to improve the effectiveness of trade compliance efforts, even though the human capital strategy initially succeeded in acquiring additional resources. OPM’s Human Capital Assessment and Accountability Framework, developed in conjunction with OMB and GAO, presents consolidated guidance on standards for success and performance indicators that agencies can refer to as they transform their strategic human capital management programs. 
For example, it includes such strategic workforce planning indicators as whether agencies use best practices to determine workloads and resource needs and have documented strategies for workforce planning that define roles, responsibilities, and other requirements of the strategies. As we stated in January 2003, the framework represents a promising step that can improve agencies’ human capital systems. Agencies can use its indicators as a basis for developing, implementing, and evaluating their workforce planning processes. Generally, agencies will need more specific indicators to measure the success of their workforce plans. (See fig. 6 for information on SSA’s evaluation of retirement-related workforce strategies.) There is an increasing awareness that federal agencies need to transform themselves into more efficient, results-oriented organizations if they are to meet the many fiscal, management, and policy challenges of the 21st century. To meet these challenges, federal managers will need to direct considerable time, energy, and targeted investments toward efforts that make the best use of the government’s most important resource—the people that agencies employ now and in the future. They will also need effective strategic workforce planning to identify and focus these investments on the long-term human capital issues that most affect their ability to attain mission results. The principles presented here can enhance the effectiveness of an agency’s strategic workforce planning by helping the agency focus on the issues it needs to address, the information it needs to consider, and the lessons that it can learn from other organizations’ experiences. By doing so, agencies can better ensure that their strategic workforce planning processes appropriately address the human capital challenges of the future and better contribute to the agencies’ major efforts to meet their missions and goals. We provided a draft of this report to the Secretary of Labor, the Executive Director of PBGC, the Director of NIH, the Administrator of GSA, and the Commissioner of SSA. Each of these organizations provided comments on the draft report and agreed with the information presented. DOL’s Director of Workforce Planning and Diversity; PBGC’s Chief Human Capital Officer; and GSA’s Program Management Officer, Office of the Chief People Officer, also provided written technical comments to clarify specific points regarding the information presented. Where appropriate, we have made changes to reflect those technical comments. NIH and SSA officials did not provide technical comments. In addition to technical comments, GSA noted that while the report presents a case study on workforce planning efforts of one region, GSA has used and continues to use a robust agencywide workforce planning process. We have clarified the report to recognize that our example is limited to the activities of one GSA region and does not address the agency’s overall workforce planning efforts. We are sending copies of this report to other interested congressional parties, the Director of OPM, the Secretary of Labor, the Secretary of Health and Human Services, the Commissioner of the Social Security Administration, the Director of the National Institutes of Health, and the Executive Director of the Pension Benefit Guaranty Corporation. In addition, we will make copies available to others upon request. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me or William Doherty at (202) 512-6806. Others who contributed to this report were Bob Lilly, Adam Hoffman, Andrew Edelson, and Candyce Mitchell.

To identify strategic workforce planning principles and illustrative agency examples, we gathered and analyzed information from a variety of sources. We reviewed our own guidance, reports, and testimonies on federal agencies' workforce planning and human capital management efforts, and guidance available through the Internet and leading human capital periodicals, such as the Workforce Planning Resource Guide for Public Sector Human Resource Professionals issued by the International Personnel Management Association. We also met with officials from organizations with governmentwide responsibilities for or expertise in workforce planning, such as the Office of Personnel Management and the National Academy of Public Administration, to identify additional guidance available and to obtain their recommendations of federal agencies engaged in effective workforce planning. We synthesized information from these meetings, reports, and guidance documents and our own experiences in human capital management to (1) derive five principles that appeared most important to effective strategic workforce planning and (2) identify agencies we would contact for examples of workforce planning that illustrated these principles. We then selected five examples of agencies' workforce planning activities (one example corresponding to each of the five workforce planning principles) to present in the report. We met with human capital and program officials and analyzed documents related to these examples to more fully understand the specific workforce planning issues associated with the examples and how the agencies addressed these issues. We selected the examples that in our judgment collectively illustrated these principles across a diverse set of federal programs. Because our review objectives did not include evaluating the effectiveness of agencies' workforce planning processes, we did not evaluate these processes, nor did we require the presence of evaluations or other evidence demonstrating planning effectiveness as a criterion for selecting examples. We did exclude from consideration, however, processes that agencies were just beginning or that were not complete enough for agencies to be willing to present them as successful planning efforts. The fact that an agency is profiled to illustrate the principles of a particular planning step is not meant to imply complete success for addressing the matter or lack of success for addressing other aspects of workforce planning. Furthermore, the efforts in the examples do not represent all the potential ways that an agency can implement workforce planning or address the specific human capital issue being discussed. We conducted our work in Washington, D.C., from March 2002 through October 2003, in accordance with generally accepted government auditing standards.

Foreign Assistance: USAID Needs to Improve Its Workforce Planning and Operating Expense Accounting. GAO-03-1171T. Washington, D.C.: September 23, 2003.
Human Capital: Insights for U.S. Agencies from Other Countries' Succession Planning and Management Initiatives. GAO-03-914. Washington, D.C.: September 15, 2003.
DOD Personnel: Documentation of the Army's Civilian Workforce-Planning Model Needed to Enhance Credibility. GAO-03-1046. Washington, D.C.: August 22, 2003.
Foreign Assistance: Strategic Workforce Planning Can Help USAID Address Current and Future Challenges. GAO-03-946. Washington, D.C.: August 22, 2003.
Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.
Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-03-893G. Washington, D.C.: July 1, 2003.
Tax Administration: Workforce Planning Needs Further Development for IRS's Taxpayer Education and Communication Unit. GAO-03-711. Washington, D.C.: May 30, 2003.
Federal Procurement: Spending and Workforce Trends. GAO-03-443. Washington, D.C.: April 30, 2003.
Veterans Benefits Administration: Better Collection and Analysis of Attrition Data Needed to Enhance Workforce Planning. GAO-03-491. Washington, D.C.: April 28, 2003.
Human Capital: Selected Agency Actions to Integrate Human Capital Approaches to Attain Mission Results. GAO-03-446. Washington, D.C.: April 11, 2003.
Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003.
Human Capital Management: FAA's Reform Effort Requires a More Strategic Approach. GAO-03-156. Washington, D.C.: February 3, 2003.
High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003.
Major Management Challenges and Program Risks: Office of Personnel Management. GAO-03-115. Washington, D.C.: January 2003.
Acquisition Workforce: Status of Agency Efforts to Address Future Needs. GAO-03-55. Washington, D.C.: December 18, 2002.
Human Capital: Effective Use of Flexibilities Can Assist Agencies in Managing Their Workforces. GAO-03-2. Washington, D.C.: December 6, 2002.
Military Personnel: Oversight Process Needed to Help Maintain Momentum of DOD's Strategic Human Capital Planning. GAO-03-237. Washington, D.C.: December 5, 2002.
Human Capital Legislative Proposals to NASA's Fiscal Year 2003 Authorization Bill. GAO-03-264R. Washington, D.C.: November 15, 2002.
Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002.
Highlights of a GAO Roundtable: The Chief Operating Officer Concept: A Potential Strategy to Address Federal Governance Challenges. GAO-03-192SP. Washington, D.C.: October 4, 2002.
Results-Oriented Cultures: Using Balanced Expectations to Manage Senior Executive Performance. GAO-02-966. Washington, D.C.: September 27, 2002.
Human Capital Flexibilities. GAO-02-1050R. Washington, D.C.: August 9, 2002.
Results-Oriented Cultures: Insights for U.S. Agencies from Other Countries' Performance Management Initiatives. GAO-02-862. Washington, D.C.: August 2, 2002.
HUD Human Capital Management: Comprehensive Strategic Workforce Planning Needed. GAO-02-839. Washington, D.C.: July 24, 2002.
NASA Management Challenges: Human Capital and Other Critical Areas Need to Be Addressed. GAO-02-945T. Washington, D.C.: July 18, 2002.
Managing for Results: Using Strategic Human Capital Management to Drive Transformational Change. GAO-02-940T. Washington, D.C.: July 15, 2002.
Post-Hearing Questions Related to Federal Human Capital Issues. GAO-02-719R. Washington, D.C.: May 10, 2002.
Human Capital: Major Human Capital Challenges at SEC and Key Trade Agencies. GAO-02-662T. Washington, D.C.: April 23, 2002.
Managing for Results: Building on the Momentum for Strategic Human Capital Reform. GAO-02-528T. Washington, D.C.: March 18, 2002.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002.
Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce. GAO-02-113T. Washington, D.C.: October 4, 2001.
Securities and Exchange Commission: Human Capital Challenges Require Management Attention. GAO-01-947. Washington, D.C.: September 17, 2001.
Human Capital: Practices That Empowered and Involved Employees. GAO-01-1070. Washington, D.C.: September 14, 2001.
Human Capital: Building the Information Technology Workforce to Achieve Results. GAO-01-1007T. Washington, D.C.: July 31, 2001.
Human Capital: Implementing an Effective Workforce Strategy Would Help EPA to Achieve Its Strategic Goals. GAO-01-812. Washington, D.C.: July 31, 2001.
Single-Family Housing: Better Strategic Human Capital Management Needed at HUD's Homeownership Centers. GAO-01-590. Washington, D.C.: July 26, 2001.
Human Capital: Taking Steps to Meet Current and Emerging Human Capital Challenges. GAO-01-965T. Washington, D.C.: July 17, 2001.
Office of Personnel Management: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-884. Washington, D.C.: July 9, 2001.
Managing for Results: Human Capital Management Discussions in Fiscal Year 2001 Performance Plans. GAO-01-236. Washington, D.C.: April 24, 2001.
Human Capital: Major Human Capital Challenges at the Departments of Defense and State. GAO-01-565T. Washington, D.C.: March 29, 2001.
Human Capital: Meeting the Governmentwide High-Risk Challenge. GAO-01-357T. Washington, D.C.: February 1, 2001.
The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. Effective results-oriented management of the government's most valued resource--its people--is at the heart of this transition. This report is part of a large body of GAO work examining issues in strategic human capital management. Based on GAO's reports and testimonies, review of studies by leading workforce planning organizations, and interviews with officials from the Office of Personnel Management and other federal agencies, this report describes the key principles of strategic workforce planning and provides illustrative examples of these principles drawn from selected agencies' strategic workforce planning experiences. Strategic workforce planning addresses two critical needs: (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. While agencies' approaches to workforce planning will vary, GAO identified five key principles that strategic workforce planning should address irrespective of the context in which the planning is done: (1) involve top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan, (2) determine the critical skills and competencies that will be needed to achieve current and future programmatic results, (3) develop strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies, (4) build the capability needed to address administrative, educational, and other requirements important to support workforce planning strategies, and (5) monitor and evaluate the agency's progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results.
OPS is responsible for the safety oversight of NG and HL pipelines and LNG storage facilities. OPS operations are primarily funded from user fees assessed to approximately 750 pipeline and storage facility operators, with additional funding provided by the Oil Spill Liability Trust Fund (OSLTF). In addition, Congress has partially funded OPS operations by making a permanent reduction in the accumulated PSF balance carried over from prior years. User fees were first assessed to operators for fiscal years 1986 and 1987. User fees collected during those years were accumulated in the PSF to establish a beginning balance in the fund. For fiscal years 1986 and 1987, pipeline safety operations continued to be funded by general revenue appropriations. Beginning in fiscal year 1988, OPS operations were no longer funded by general revenues but instead were funded primarily by user fee assessments, which are billed after the fiscal year starts. For each fiscal year from 1988 forward, the accumulated balance in the PSF has been used to temporarily fund operations until the user fees are collected. As indicated below, the annual appropriation prescribes funding levels and sources of funds for OPS operations. Therefore, the amount of the total fiscal year user fee assessment can only be determined after the appropriation is enacted due to the uncertainty of the components that constitute the appropriation and the total appropriation amount. For fiscal year 2000, OPS' appropriation was funded from the sources listed in table 1. As discussed later, the actual amount of user fees charged is adjusted for a number of reasons, such as to provide a RSPA administrative support charge and to compensate for the over- or undercollection of prior-year fees. Once the total fiscal year user fee assessment is determined, it is divided into three pools representing the three types of operators. Individual operator assessments are then calculated based either on pipeline mileage or storage capacity data maintained by OPS. After individual operator assessments are determined, OPS can begin billing operators.

Our objectives were to determine (1) how RSPA's analysis determined the required minimum reserve fund balance, (2) if the analysis was accurately prepared based on RSPA/OPS financial records and Treasury reports, (3) how OPS' billing and collection cycles function, and (4) if changes in the way OPS assesses user fees and collects cash would result in a more efficient use of user fees. To determine how RSPA calculated the required reserve fund balance, we conducted interviews with OPS officials who prepared and reviewed the analysis. We also obtained an understanding of how the analysis conclusion is linked to the analysis detail and identified assumptions made in the analysis. To determine whether the analysis was accurate, we conducted interviews with OPS and RSPA officials who prepared and reviewed the analysis, identified and assessed the reasonableness of assumptions made, compared data presented in the analysis to data in RSPA's financial systems and Treasury reports, and performed some recalculations of data. We did not perform any audit or review procedures that would allow us to attest to the accuracy of the historical data presented in the analysis. To determine how OPS' billing and collection cycles function, we obtained an understanding of those cycles as they pertain to the PSF through interviews with OPS officials and the review of OPS documentation.
Finally, to determine whether improvements could be made to OPS' billing and collection cycles to support a more efficient use of user fees, we identified and discussed alternatives with OPS officials. We received written comments on a draft of this report from the Department of Transportation. We also received several technical comments, which we incorporated as appropriate. A copy of DOT's response is reprinted in appendix I. We conducted our review from November 2000 through March 2001 in accordance with U.S. generally accepted government auditing standards.

Significant flaws in RSPA's financial analysis used to determine the estimated minimum balance for the PSF make the estimate unreasonable. Under current practices, the year-end balance in the PSF is used to fund certain operational expenses pending the receipt of user fee assessments from pipeline and storage facility operators for the following year. In its analysis report, RSPA concluded that at least 36 percent of the enacted appropriation in a given fiscal year should be maintained as a minimum balance in the PSF to cover obligations for the first two quarters of the fiscal year and avoid violation of the Antideficiency Act. However, our review indicated that the analysis was unreasonable due to (1) the use of an inappropriate key assumption, (2) the inappropriate use of a fixed percentage to estimate the minimum balance in the PSF, and (3) RSPA's use of incorrect or unreliable financial data in performing its calculations. RSPA's methodology was based on the assumption that the minimum PSF balance at the end of the fiscal year must be sufficient to cover estimated obligations for the first two quarters (October through March) of the following fiscal year. Based on fiscal year 2000 historical data, the analysis projected the estimated future minimum PSF balance as a fixed percentage of the user fee assessment base, calculated by dividing estimated obligations for the first two quarters of fiscal year 2000 by that year's assessment base. In designing the formula, RSPA staff advised us that they did not consider cash receipts for the first two quarters because they believed that the process of obtaining Treasury warrants, necessary to enter into obligations, would result in the majority of the funds being unavailable for obligation until halfway through the fiscal year. However, through interviews and reviewing warrant documentation, we noted that warrants authorizing the obligation of available balances could be obtained from Treasury in several days. For fiscal year 2000, OPS data showed that $3.6 million of its user fees were received by the end of December 1999, and an additional $23.9 million of fees were received by the end of January 2000. In the RSPA analysis, none of these collections, totaling $27.5 million, were considered available for obligation in the first or second quarter. Per the analysis, obligations incurred by OPS from October 1999 through January 2000 totaled only $5.2 million, while the beginning balance of the fund at October 1, 1999, was $15.9 million. OPS staff's misunderstanding of the warrant procedures, and hence the failure to consider available user fee collections in the analysis, led RSPA to significantly overstate the estimated minimum balance required in the PSF. RSPA's analysis also incorrectly presumes that a fixed percentage of the user fee assessment base, as calculated using the fiscal year 2000 data, will result in a factor that can be used to calculate the minimum balance for the coming year.
However, this assumes a direct and constant relationship between obligations and the user fee assessment base, which, based on RSPA's own analysis for fiscal years 1998, 1999, and 2000, does not exist. Table 3 below shows that obligations in the first 6 months were a growing percentage of the user fee assessment base during the 3 years analyzed. Absent any such constant relationship, obligations as a percentage of the user fee assessment base cannot be used as a reliable predictor of the minimum balance needed in the PSF. Instead of a fixed percentage, the amount needed in the PSF depends on the timing and amounts of expected obligations and cash collections during the early part of the new fiscal year. The amount of obligations is affected by the level and types of program activities planned. From one year to the next, obligation patterns may change significantly, particularly if significant changes are made in the level and nature of OPS activities. For this reason, there is no assurance that a fixed percentage calculation of the assessment base, enacted appropriations, or any other base would generate an appropriate carryover balance.

Using hypothetical data, figure 1 below demonstrates that a comparison between expected cumulative PSF obligations and expected cumulative cash collections will identify the maximum expected shortfall in the early part of the fiscal year. In this figure, obligations are assumed to start at the beginning of the year (time A) and cash collections some time later (time B). The shaded area shows the time during which cumulative year-to-date obligations exceed cumulative year-to-date cash collections. The widest point (time D) identifies the minimum beginning fund balance necessary in the PSF. In general, the later that fees are collected, the larger the needed balance. At time E, cumulative cash collections equal cumulative obligations and the current year's shortage is eliminated. In order to ensure that the estimated minimum balance as calculated in this manner is adequate to cover the shortfall, this type of analysis would need to be completed each year. This annual reestimate, which could be adjusted to cover possible contingencies, would be particularly important given the fluctuations in levels of obligations that have occurred early in the year over the past several fiscal years.

Notwithstanding the previously noted flaws in its approach, certain data included in RSPA's analysis were incorrect and/or unreliable. For example, as permitted by law, OPS assessed additional fees of approximately $0.9 million to pipeline operators, but these fees were omitted from the analysis. Using RSPA's data, we estimated that the omission of these additional fees from the analysis further overstated the minimum PSF balance. RSPA also included in its analysis historical data, such as user fee cash receipts and obligations, that did not agree with data in RSPA's accounting system or with other documentation, such as reports prepared for Treasury. For example, the cash receipts data for the first two quarters of fiscal year 2000 that were included in the analysis were taken from a database that RSPA accounting provides OPS to account for assessments receivable. This amount was approximately $363,000 less than the cash receipts recorded in OPS' accounting system. Since this and other differences were not reconciled by OPS, we were unable to determine the effect they may have on the estimated minimum PSF balance.
Further, the beginning PSF balance used by RSPA in its analysis was understated when compared to balances per Treasury, because certain transactions, such as cancellations of previously recorded obligations, were not recorded by RSPA accounting. These Treasury-initiated transactions were not considered in the analysis because OPS did not perform monthly reconciliations of the PSF book balance to the balance with Treasury. The cancellation of obligations increases the available PSF balance. For example, the beginning PSF balance in the analysis for fiscal year 2000 of $15.9 million was $1.1 million less than the Treasury balance of $17 million. This unreconciled difference could have a material impact on the recorded PSF balance or decisions regarding such balance. Finally, we noted that the month-by-month data included in RSPA's analysis contained obligation amounts that could be misleading. We found that the monthly amounts of obligations for the first 5 months of fiscal year 2000 included approximately $1 million of OSLTF-related obligations. During March 2000, however, these obligations were reimbursed by the OSLTF and were reversed in the analysis. Therefore, RSPA's overall calculation was not affected.

OPS' lengthy data collection and verification process, used to determine and bill user fees for 750 pipeline operators, contributed to a delay in billing and the subsequent collection of cash. If user fee assessments were mailed out sooner, collection of cash receipts would likely be accelerated and the minimum required PSF balance would be lower. RSPA has efforts underway to improve this process, including planned implementation of an Internet-based data collection system and a new accounting system. The collection and verification of data used for OPS' fiscal year 2000 assessment extended over 11 months. For example, the December 31, 1998, data used for the fiscal year 2000 billing were not finalized until late November 1999. The majority of that time was used to update information for NG pipeline operators, one of three types of operators. OPS maintains a database for assessing pipeline and storage facility operators as well as supporting its regulatory activities. Data are updated each year, and that process begins with asking NG pipeline operators to complete annual reports, which contain, among other things, details on pipeline mileage that are needed to calculate assessments. After NG pipeline operators submit their annual reports, information is updated in the OPS database. Subsequently, NG pipeline operators, as well as HL pipeline operators and LNG storage facilities (neither of which has to prepare annual reports), are sent annual notices to verify information in the database, which is used for fee assessment purposes. For the fiscal year 2000 assessment, annual report forms were sent to NG pipeline operators in mid-December 1998, and the completed annual reports were due to OPS by March 15, 1999. Later, notices to verify data in the database were sent to all operators in August 1999, with corrections due to OPS within 45 days. After the verification notice was sent, OPS employees responded to operator inquiries and corrections and further updated the database. This process was completed in late November. The extended data collection and verification process contributed to a delay in the mailing of user fee bills, which did not occur until mid-December 1999. The timing of activities is summarized in table 4.
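The assessment mechanics described in the background section can be illustrated with a short sketch. The Python example below is a hypothetical reconstruction, not OPS's actual billing system: the operator names, mileage figures, pool shares, and dollar amounts are all assumed for illustration. It shows how a total fiscal year user fee assessment, adjusted for items such as the RSPA administrative support charge and prior-year over- or undercollections, might be split into the three operator pools and then allocated to individual operators in proportion to pipeline mileage (or storage capacity for LNG facilities).

```python
# Hypothetical sketch of an OPS-style user fee allocation. Operator names,
# mileage figures, pool shares, and dollar amounts are illustrative only.

def total_user_fee_assessment(appropriation, osltf_funding, psf_reduction,
                              admin_support_charge=0.0,
                              prior_year_adjustment=0.0):
    """Portion of the enacted appropriation to be recovered through user
    fees, adjusted for the RSPA administrative support charge and for
    over- or undercollection of prior-year fees (a positive adjustment
    means prior-year fees were undercollected)."""
    return (appropriation - osltf_funding - psf_reduction
            + admin_support_charge + prior_year_adjustment)

def allocate_pool(pool_total, operators):
    """Allocate one pool's share of the assessment to individual operators
    in proportion to pipeline mileage (or storage capacity for LNG
    facilities). `operators` maps operator name -> mileage/capacity units."""
    total_units = sum(operators.values())
    return {name: pool_total * units / total_units
            for name, units in operators.items()}

if __name__ == "__main__":
    # Assumed figures, in millions of dollars.
    fees = total_user_fee_assessment(appropriation=36.0, osltf_funding=4.0,
                                     psf_reduction=3.0,
                                     admin_support_charge=0.4,
                                     prior_year_adjustment=-0.2)
    # Assumed split of the assessment across the three operator pools.
    pool_shares = {"NG": 0.70, "HL": 0.25, "LNG": 0.05}
    ng_pool_total = fees * pool_shares["NG"]
    # Assumed mileage for three hypothetical NG pipeline operators.
    ng_operators = {"Operator A": 12000, "Operator B": 8000, "Operator C": 5000}
    bills = allocate_pool(ng_pool_total, ng_operators)
    print({name: round(amount, 3) for name, amount in bills.items()})
```

Because the total assessment depends on the components of the enacted appropriation, a calculation of this kind cannot be completed until the appropriation is known, which is one reason billing cannot begin earlier in the fiscal year.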
Since operator assessments are calculated based on the annual appropriation, the calculation of individual user fee assessments can begin after the appropriation is enacted, which has occurred in October for the last several years. In recent years, OPS’ operator billing has occurred considerably earlier in the fiscal year. For example, in fiscal year 1994, OPS assessed user fees in July 1994, whereas by fiscal year 1997, OPS had moved the user fee assessment date up to December 1996. Since 1997, OPS has billed operators in mid-December of each fiscal year. However, since the user information on which bills are based is as of December 31 of the previous year, there is still room for improvement in OPS’ data collection and verification process. According to OPS officials, this delay is due to resource limitations. RSPA is planning to improve its current billing procedures. For example, in the summer of 2001, an Internet-based system is scheduled to be implemented that will allow operators to electronically enter pipeline mileage, ownership, and other necessary information directly into the database. This will relieve OPS of a considerable amount of data input and reduce the reconciliation and investigative effort required for pipeline ownership and mileage data. Based on information provided directly by pipeline operators, OPS would be able to generate and mail bills electronically, further reducing the time necessary to bill and collect fees. In addition, in fiscal year 2001, RSPA implemented a new accounting system that includes features anticipated to improve OPS’ billing and collection process. These features include invoicing, payment tracking, maintaining individual customer account balances, and generating follow-up notices for delinquent balances. This should free up OPS resources so staff can concentrate on issuing user fee assessments earlier, which would likely accelerate the collection of fees and reduce the minimum balance needed in the PSF. The use of incorrect or unreliable data and inappropriate assumptions in RSPA’s calculation of the minimum PSF balance resulted in RSPA overstating the necessary minimum balance. Crucial to a reasonable calculation of the PSF minimum balance is an analysis of expected receipts as compared to expected obligations. Until RSPA performs this type of analysis, it will not be able to provide a reasonable estimate of the required minimum PSF balance. In addition, the timing of OPS’ cash receipts is affected by OPS’ untimely data collection and verification process. This process results in delayed billings and likely delays cash receipts, resulting in a larger required minimum PSF balance. OPS’ current efforts to implement a new Internet-based data collection and billing system have the potential to shorten what is currently an extended billing process. Finalizing the operator data on which the fee assessments are based at an earlier date would allow billing to take place shortly after the agency received its appropriation for the fiscal year. Accordingly, fee revenue would likely be available for obligation in a more timely manner and help reduce the required minimum PSF balance. 
In order to provide for a reasonable calculation of the minimum PSF balance and to improve the user fee billing process, we recommend that the Secretary of the Department of Transportation direct RSPA’s Administrator to take the following actions: Base calculations for future years on an analysis of the timing and amounts of expected obligations and cash collections associated with the level and types of program activities planned. Annually calculate the expected minimum balance for the PSF to take into consideration changes in expected obligations and collections. Take steps, including reconciliation of conflicting data, to ensure that the financial information used in the analysis is accurate and that it includes all of the relevant revenue factors. Complete installation of the Internet-accessible database system allowing on-line input and verification of operator data and electronic mailing of bills. Reengineer the operator data collection and verification processes so that all data on which bills will be based are finalized by October 1 annually to allow for timely billing. DOT generally agreed with our findings, conclusions, and recommendations. In addition, department officials provided technical comments on the draft report, which we have incorporated as appropriate. We are sending copies of this report to congressional committees and subcommittees responsible for transportation safety issues; the Honorable Norman Y. Mineta, Secretary of Transportation; Edward Brigham, the Acting Deputy Administrator of RSPA; and other interested parties. If you have any questions about this report, please contact me at (202) 512-9508 or John C. Fretwell, Assistant Director, at (202) 512-9382. Key contributors to this report were Richard Kusman, Tarunkant Mithani, and Maria Zacharias.
The use of incorrect or unreliable data and inappropriate assumptions in the Research and Special Programs Administration's (RSPA) calculation of the minimum Pipeline Safety Fund (PSF) balance caused RSPA to overstate the necessary minimum balance. Crucial to a reasonable calculation of the PSF minimum balance is an analysis of expected receipts as compared to expected obligations. Until RSPA does this type of analysis, it will be unable to reasonably estimate the required minimum PSF balance. In addition, the timing of the Office of Pipeline Safety's (OPS) cash receipts is affected by OPS' slow data collection and verification process. This process results in delayed billings and likely delays cash receipts, resulting in a larger required minimum PSF balance. OPS' current efforts to implement a new Internet-based data collection and billing system could shorten what is now an extended billing process. Finalizing the operator data on which the fee assessments are based at an earlier date would allow billing to take place shortly after the agency receives its appropriation for the fiscal year. Accordingly, fee revenue would likely be available for obligation in a more timely manner and help reduce the required minimum PSF balance.
A reverse mortgage is a loan against the borrower’s home that the borrower does not need to repay for as long as the borrower meets certain conditions. These conditions, among others, require that borrowers live in the home, pay property taxes and homeowners’ insurance, maintain the property, and retain the title in his or her name. Reverse mortgages typically are “rising debt, falling equity” loans, in which the loan balance increases and the home equity decreases over time. As the borrower receives payments from the lender, the lender adds the principal and interest to the loan balance, reducing the homeowner’s equity. This is the opposite of what happens in forward mortgages, which are characterized as “falling debt, rising equity” loans. With forward mortgages, monthly loan payments made to the lender add to the borrower’s home equity and decrease the loan balance. The HECM program began in 1988, when Congress authorized HUD to insure reverse mortgages to meet the financial needs of elderly homeowners. While HECMs can provide senior homeowners with multiple types of benefits, including flexibility in how they use the loan funds and protection against owing more than the value of the house when the loan comes due, HECM costs can be substantial. The volume of HECMs made annually has grown rapidly, rising from 157 loans in fiscal year 1990 to more than 112,000 loans in fiscal year 2008. In addition, recent years have seen a large increase in the number of lenders participating in the HECM program, with more than 1,500 lenders originating their first HECM in 2008, bringing the total number of HECM lenders to over 2,700. A number of federal and state agencies have roles in overseeing the reverse mortgage market. These agencies include HUD, which administers the HECM program and oversees entities that provide mandatory counseling to prospective HECM borrowers. In addition, the Federal Trade Commission (FTC), federal and state banking regulators, and state insurance regulators are involved with various aspects of consumer protections for HECM borrowers. Various state and federal agencies have some responsibility for assessing marketing for reverse mortgage products, including FTC, federal and state banking regulators, and HUD. The agencies each have a responsibility for different segments of the reverse mortgage market, but have reported taking few, if any, enforcement actions against an entity as a result of misleading reverse mortgage marketing. FTC has responsibility for protecting consumers against unfair or deceptive practices originating from nonbank financial companies, such as mortgage brokers. FTC officials said they have not systematically searched for potentially misleading reverse mortgage marketing, but noted that they are maintaining an awareness of the potential risks associated with reverse mortgage marketing and have formed a task force of state and federal regulators and law enforcement agencies, in part to learn about complaints related to reverse mortgages. 
In addition, the federal banking regulators—the Board of Governors of the Federal Reserve System (Federal Reserve), Office of the Comptroller of the Currency (OCC), Office of Thrift Supervision (OTS), the Federal Deposit Insurance Corporation (FDIC), and the National Credit Union Administration (NCUA)—include a review of reverse mortgage marketing materials in their compliance examinations of lenders for which they have responsibility, but, because few of their regulated lenders offer reverse mortgages, they have not conducted many examinations that have included these loans. Like FTC, federal banking regulators are maintaining an awareness of the potential risks associated with reverse mortgages, which could include those associated with reverse mortgage marketing. For example, the Federal Financial Institutions Examination Council—the interagency body that includes the federal banking regulators and develops guidance for federal bank examiners—recently formed a working group on reverse mortgages. Finally, some HECM lenders are regulated at the state level, with HECM marketing materials subject to state compliance examinations. Information we obtained from 22 of the 35 state banking regulators that responded to our information request indicated that their states routinely examine marketing materials as part of compliance examinations. However, only 1 state banking regulator—the Idaho Department of Finance—reported taking action against a lender because of reverse mortgage marketing. In addition, HUD exercises limited regulatory authority over the marketing activity of HECM lenders to ensure that lenders’ advertisements do not imply endorsement by HUD or the Federal Housing Administration. HUD officials cited one instance in which HUD referred a lender to the Mortgagee Review Board for misrepresenting the HECM as a “government rescue loan.” However, HUD officials said they do not actively monitor HECM marketing and do not review HECM marketing materials as part of routine assessments of HECM lenders. Some agencies with which we spoke indicated that while complaints are one factor that could trigger more extensive assessments of marketing materials, they have received few, if any, complaints about reverse mortgage marketing. However, FTC officials noted that the low volume of complaints could be a result of consumers not being aware that they have been deceived, not knowing to whom to complain, or elderly consumers being less likely to complain. While the extent of misleading HECM marketing is unknown, our limited review of marketing materials found some examples of claims that were potentially misleading because they were inaccurate, incomplete, or employed questionable sales tactics. Among the materials we reviewed, we found 26 different entities that made potentially misleading claims in their HECM marketing materials. This group includes entities regulated by each of the federal banking regulators with whom we spoke, as well as FTC and state regulators; it also includes both members and nonmembers of NRMLA. We selected seven advertisements that represented these claims and submitted them to the regulators for review. In general, the officials with whom we spoke agreed that the claims in six of the seven advertisements raised some degree of concern and might prompt further investigation. 
Several of the officials noted that they would need to consider the fuller context of the advertisement to determine if the claims were misleading and the level of action they would take if these six advertisements were the subject of complaints or compliance examinations. The six potentially misleading claims that we identified, and agency officials generally agreed raised concern, were as follows: “Never owe more than the value of your home”: The claim is potentially misleading because a borrower or heirs of a borrower would owe the full loan balance—even if it were greater than the value of the house—if the borrower or heirs chose to keep the house when the loan became due. This was the most common of the potentially misleading statements we found in the marketing materials we reviewed. This claim was made by HUD itself in its instructions to approved HECM lenders; however, in December 2008, HUD issued guidance to HECM lenders explaining the inaccuracy of this claim. Implications that the reverse mortgage is a “government benefit” or otherwise, not a loan: While HECMs are government-insured, the product is a loan that borrowers or their heirs must repay, not a benefit. Examples of this type of claim include the following: “You may be qualified for this government-sponsored benefit program,” and “Access the equity in your home without having to sell, move, or take out a loan.” “Lifetime income” or “Can’t outlive loan”: Although borrowers can choose to receive HECM funds as monthly tenure payments, even under this option, payments will not continue once the loan comes due (e.g., when the borrower moves out of the house or violates other conditions of the mortgage). “Never lose your home”: This claim is potentially misleading because a lender could foreclose on a HECM borrower’s home if the borrower did not pay property taxes and hazard insurance or did not maintain the house. Misrepresenting government affiliation: An example of this type of claim would include use of government symbols or logos and claims that imply that the lender is a government agency. Claims of time and geographic limits: These claims falsely imply that HECM loans are limited to a certain geographic area, or that the consumer must respond within a certain time to qualify for the loan. Examples include “must call within 72 hours,” and “deadline extended,” as well as the claim that a consumer’s residence is “located in a Federal Housing Authority qualifying area.” The potentially misleading marketing claims we identified suggest that some HECM providers may not be maintaining sufficient focus on or awareness of federal marketing standards. Furthermore, consumers who have not been cautioned about such claims could pursue HECMs with misunderstandings about the product. Therefore, the report we are issuing today recommends that HUD, FTC, and the federal banking regulators take steps to strengthen oversight and enhance industry and consumer awareness of the types of marketing claims discussed in this testimony. Concerns exist that reverse mortgage borrowers could be vulnerable to inappropriate cross-selling, a practice involving the sale of financial or insurance products that are unsuitable for the borrower’s financial situation using the borrower’s reverse mortgage funds. 
While certain annuity products may be suitable for some HECM borrowers, such as those who wish to receive payments for life regardless of where they live, there is concern that elderly reverse mortgage borrowers may be sold other products that may be inappropriate to the borrower’s circumstances. For example, there is concern that elderly reverse mortgage borrowers may be sold deferred annuities, where payments may not begin for many years and high fees may be charged for early access to the money. Because cross-selling typically involves the sale of insurance products generally regulated at the state level, the role of federal agencies in addressing the issue of cross-selling in conjunction with HECMs has been limited and largely has been focused on consumer education and disclosures. However, with the passage of HERA, HUD now has responsibility for enforcing the cross-selling provisions in the legislation and is in the preliminary stages of developing regulations to implement them. The provisions are intended to curb the sale of unsuitable financial products to consumers using HECM funds. According to HUD officials, HUD is drafting a Federal Register notification to solicit feedback on issues concerning these provisions, including HUD’s ability to monitor and enforce them; the usefulness of disclosures, education, and counseling in preventing cross-selling; what would constitute appropriate firewalls between a firm’s reverse mortgage sales and sales of other financial products; and what types of financial products should be covered. HUD has also instructed lenders that until HUD issues more definitive guidance, lenders must not condition a HECM on the purchase of any other financial or insurance product, and should strive to establish firewalls and other safeguards to ensure there is no undue pressure or appearance of pressure for a HECM borrower to purchase another product. A number of state insurance regulators have reported cases of inappropriate cross-selling involving violations of state laws governing the sale of insurance and annuities. Many states have passed suitability laws that are designed to protect consumers from being sold unsuitable insurance products, including annuities. Of the 29 state insurance regulators that responded to questions we sent all states and the District of Columbia, 8 said that from 2005 through January 2009, they had at least one case of an insurance agent selling an unsuitable insurance product that a consumer had purchased using reverse mortgage funds. For example, an official at the Insurance Division of the Hawaii Department of Commerce and Consumer Affairs described a case in which an independent mortgage broker was prosecuted for misrepresentation of an annuity product. The broker, who also owned his own insurance company, deceived 15 clients by including paperwork for an annuity in their HECM closing documents without their knowledge. In another case, a sales manager of an insurance company violated the Maine Insurance Code by allowing transactions that were not in the best interest of the customer. The sales manager had arranged for a representative of a large reverse mortgage lender to speak with his sales agents about reverse mortgages. The agents then referred 14 clients to the reverse mortgage lender, all of whom obtained reverse mortgages. One particular client, an 81-year old widow, was contacted continually until she obtained her reverse mortgage funds, and was then sold a deferred annuity. 
The interest rate accruing on the reverse mortgage was 4.12 percent, and the deferred annuity earned only 3.25 percent. HUD’s internal controls for HECM counseling do not provide reasonable assurance of compliance with HUD requirements. HUD has a range of internal control mechanisms to help ensure that HECM counselors comply with counseling requirements. These controls include (1) counseling standards as set forth in regulations, mortgagee letters, and a counseling protocol; (2) a counselor training and examination program, and (3) a Certificate of HECM Counseling (counseling certificate) that, once signed by the counselor and the counselee, should provide HUD with assurance that counselors complied with counseling standards and that prospective borrowers were prepared to make informed decisions. Although federal standards encourage agencies to test the effectiveness of their internal controls, HUD has not done so for its controls for HECM counseling. Our independent evaluation of 15 HECM counseling sessions found that counselors did not consistently comply with HECM counseling requirements. To test counselor compliance with key HECM counseling requirements, GAO staff posed as prospective HECM borrowers for 15 counseling sessions offered by 11 different agencies. For each session, we determined whether the counselors covered required topics, primarily those referenced in the counseling certificate. The certificate identifies or refers to counseling requirements originally set forth in statute, HUD regulations, or mortgagee letters. Our undercover counselees participated in telephone counseling sessions because HUD estimated that about 90 percent of all HECM counseling sessions were conducted by telephone. All but one of the counselors who conducted our counseling sessions were examination-certified by HUD to provide HECM counseling. Although none of the 15 counselors covered all of the required topics, all of them provided useful and generally accurate information about reverse mortgages and discussed key program features. For example, most counselors explained that the loan would become due and payable when no borrower lives on the property, and that borrowers must pay taxes and insurance. Counselors also often supplemented their discussions with useful information, such as a description of factors that affect available interest rates and the fact that borrowers would receive monthly statements from the lender, even though this information is not specifically referred to on the counseling certificate. However, despite certifying on the counseling certificate that they had covered all of the information HUD requires, all of the counselors omitted at least some required information. The required information that counselors most frequently omitted included the following: Other housing, social service, health, and financial options: Seven of the 15 counselors did not discuss options, other than a HECM, that might be available to a homeowner, such as considering other living arrangements, meal programs, or health services that local social service agencies might provide. Our findings are consistent with findings in AARP and HUD Office of Inspector General reports. Other home equity conversion options: The same 7 counselors, likewise, did not discuss other types of (and potentially lower-cost) reverse mortgages that state or local governments might sponsor for specific purposes. 
For example, some state governments provide reverse mortgages for the payment of taxes or for making major repairs that do not need to be repaid until the house is sold. The financial implications of entering into a HECM: Fourteen of the 15 counselors only partially met this requirement, and 1 did not meet it at all, because they omitted information that HUD directs counselors to convey. For example, 6 of the counselors did not provide estimates of the maximum amount of funds that might be available to the counselee under the HECM payment plan options. A HUD official said that this information would help counselees understand how reverse mortgages would address their financial situations. Additionally, 14 counselors did not tell counselees that they could elect to have the loan provider withhold funds to pay property taxes and insurance. A disclosure that a HECM may affect eligibility for assistance under other federal and state programs: While most counselors discussed the tax consequences of a HECM, 6 of 15 did not indicate that eligibility for some federal and state programs could be affected if borrowers had more money in their bank accounts than allowed under such programs’ terms. Asking if a homeowner had signed a contract or agreement with an estate planning service: HUD implemented this requirement based on a statutory provision intended to protect HECM borrowers from paying excessive fees for third-party services of little or no value. However, 14 of the 15 counselors did not ask this question, although 4 of the 14 cautioned the undercover counselees that such services were unnecessary to obtain a HECM. In addition to requiring HECM counselors to convey certain information, HUD requires them to record the length of each counseling session on the counseling certificate. Although HUD has not issued guidance on the subject, HUD officials told us that the recorded time should reflect only the time spent counseling the client. However, 6 of the 15 counselors for our undercover sessions overstated the length of the counseling sessions on the counseling certificates. In 3 of these cases, the sessions ranged from 22 to 30 minutes, but the recorded times ranged from 45 minutes to 1 hour. In another instance, the session lasted about 20 minutes, but the counselor recorded 30 minutes. These 4 sessions omitted much of the required information, particularly the discussion of options and various aspects of the financial implications of a HECM. The counselors for the remaining 2 sessions recorded the sessions as lasting 2 hours when 1 lasted 45 minutes and the other 57 minutes. Another area of noncompliance we identified concerned the requirement that counseling agencies assess a client’s ability to pay the counseling fee. In May 2008, HUD issued instructions allowing counseling agencies to charge a fee of up to $125 for HECM counseling, as long as the fee did not create a financial hardship for the client. The instructions require counseling agencies to make this determination by considering factors including, but not limited to, the client’s income and debt obligations. While HUD guidance states that agencies may use “objective criteria” in assessing a client’s ability to pay, the guidance does not specify what types of criteria are appropriate. Consistent with HUD requirements, 12 of the 15 counseling agency staff responsible for charging the fee, whether intake staff or counselors, informed our undercover counselees of the fee in advance of the session and charged $125 or less. 
However, staff at most of the agencies did not collect the minimum amount of information that HUD requires to assess the counselee’s ability to pay. For example, for 4 of the 15 sessions, agency intake staff took the counselee’s credit card information up front, without obtaining any information about income and debt; and counselors for four other sessions, asked about the undercover counselees’ income but not their debts. In the absence of clear guidance, similarly situated counselees could be treated differently, and those facing financial hardships might be paying for counseling when they should not have to. Because of the weaknesses in HUD’s internal controls, some prospective borrowers may not be receiving the information necessary to make informed decisions about obtaining a HECM. Therefore, we are recommending that HUD take steps to improve the effectiveness of its internal controls, such as by verifying the content and length of HECM counseling sessions. In closing, HECMs can provide senior homeowners with multiple types of benefits, but borrowers may not always fully understand the complexities of the product’s terms and costs. Thus, the types of marketing claims discussed in this report, as well as the potential for seniors to be sold unsuitable products with their HECM funds, are causes for concern, particularly in a market with potential for substantial growth. These factors underscore the need for improvements in HUD’s controls over HECM counseling. Mr. Chairman, Ranking Member Martinez, and Members of the Special Committee, this concludes my prepared statement. I would be happy to respond to any questions that you may have at this time. For further information about this testimony, please contact Mathew J. Scirè, Director, at 202-512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Steven K. Westley (Assistant Director), Sonja J. Bensen, Christine A. Hodakievic, Winnie Tsen, and Barbara M. Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Reverse mortgages--a type of loan against the borrower's home that is available to seniors--are growing in popularity. These mortgages allow seniors to convert their home equity into flexible cash advances while living in their homes. However, concerns have emerged about the adequacy of consumer protections for this product. Most reverse mortgages are made under the Department of Housing and Urban Development's (HUD) Home Equity Conversion Mortgage (HECM) program. HUD insures the mortgages, which are made by private lenders, and oversees the agencies that provide mandatory counseling to prospective HECM borrowers. GAO was asked to examine issues and federal activities related to (1) misleading HECM marketing, (2) the sale of potentially unsuitable products in conjunction with HECMs, and (3) the oversight of HECM counseling providers. This testimony is based on a GAO report being released today (GAO-09-606). While HECMs have the potential to play a key role in meeting the needs of seniors facing financial hardship or seeking to improve their quality of life, the product is relatively complex and costly and the population it serves is vulnerable. GAO's work identified areas of consumer protection that require further attention, including the area of HECM marketing. Various federal agencies have responsibility for protecting consumers from the misleading marketing of mortgages. Although these agencies have reported few HECM marketing complaints, GAO's limited review of selected marketing materials for reverse mortgages found some examples of claims that were potentially misleading because they were inaccurate, incomplete, or employed questionable sales tactics. Federal agency officials indicated that some of these claims raised concerns. For example, the claim of "lifetime income" is potentially misleading because there are a number of circumstances in which the borrower would no longer receive cash advances. Consumers who have not been cautioned about such claims could pursue HECMs with misunderstandings about the product. To date, federal agencies have had a limited role in addressing concerns about the sale of potentially unsuitable insurance and other financial products in conjunction with HECMs (known as "inappropriate cross-selling"). States generally regulate insurance products, and some of the states GAO contacted reported cases of inappropriate cross-selling involving violations of state laws governing the sale of insurance and annuities. HUD is responsible for implementing a provision in the Housing and Economic Recovery Act of 2008 that is intended to restrict inappropriate cross-selling, but the agency is in the preliminary stages of developing regulations. HUD's internal controls do not provide reasonable assurance that counseling providers are complying with HECM counseling requirements. GAO's undercover participation in 15 HECM counseling sessions found that while the counselors generally conveyed accurate and useful information, none of the counselors covered all of the topics required by HUD, and some overstated the length of the sessions in HUD records. For example, 7 of the 15 counselors did not discuss required information about alternatives to HECMs. HUD has several internal controls designed to ensure that counselors convey required information to prospective HECM borrowers, but has not tested the effectiveness of these controls and lacks procedures to ensure that records of counseling sessions are accurate. 
Because of these weaknesses, some prospective borrowers may not be receiving the information necessary to make informed decisions about obtaining a HECM.
Superior Bank was formed in 1988 when the Coast-to-Coast Financial Corporation, a holding company owned equally by the Pritzker and Dworman families, acquired Lyons Savings, a troubled federal savings and loan association. From 1988 to 1992, Superior Bank struggled financially and relied heavily on an assistance agreement from the Federal Savings and Loan Insurance Corporation (FSLIC). Superior’s activities were limited during the first few years of its operation, but by 1992, most of the bank’s problems were resolved and the effects of the FSLIC agreement had diminished. OTS, the primary regulator of federally chartered savings institutions, had the lead responsibility for supervising Superior Bank while FDIC, with responsibility to protect the deposit insurance fund, acted as Superior’s backup regulator. By 1993, both OTS and FDIC had given Superior a composite CAMEL “2” rating and, at this time, FDIC began to rely only on off-site monitoring of Superior. In 1993, Superior’s management began to focus on expanding the bank’s mortgage lending business by acquiring Alliance Funding Company. Superior adopted Alliance’s business strategy of targeting borrowers nationwide with risky credit profiles, such as high debt ratios and credit histories that included past delinquencies—a practice known as subprime lending. In a process known as securitization, Superior then assembled the loans into pools and sold interests in these pools—such as rights to principal and/or interest payments—through a trust to investors, primarily in the form of AAA-rated mortgage securities. To enhance the value of these offerings, Superior retained the securities with the greatest amount of risk and provided other significant credit enhancements for the less risky securities. In 1995, Superior expanded its activities to include the origination and securitization of subprime automobile loans. In December 1998, FDIC first raised concerns about Superior’s increasing levels of high-risk, subprime assets and growth in retained or residual interests. However, it was not until January 2000 that OTS and FDIC conducted a joint examination and downgraded Superior’s CAMELS rating to a “4,” primarily because of the concentration of residual interest holdings. At the end of 2000, FDIC and OTS noted that the reported values of Superior’s residual interest assets were overstated and that the bank’s reporting of its residual interest assets was not in compliance with Statement of Financial Accounting Standards (FAS) No. 125. Prompted by concerns from OTS and FDIC, Superior eventually made a number of adjustments to its financial statements. In mid-February 2001, OTS issued a Prompt Corrective Action (PCA) notice to Superior because the bank was significantly undercapitalized. On May 24, 2001, OTS approved Superior’s PCA capital plan. Ultimately, the plan was never implemented, and OTS closed the bank and appointed FDIC as Superior’s receiver on July 27, 2001. (A detailed chronology of the events leading up to Superior’s failure is provided in App. I.) Primary responsibility for the failure of Superior Bank resides with its owners and managers. Superior’s business strategy of originating and securitizing subprime loans appeared to have led to high earnings, but, more importantly, its strategy resulted in a high concentration of extremely risky assets. This high concentration of risky assets and the improper valuation of these assets ultimately led to Superior’s failure. 
In 1993, Superior Bank began to originate and securitize subprime home mortgages in large volumes. Later, Superior expanded its securitization activities to include subprime automobile loans. Although the securitization process moved the subprime loans off its balance sheet, Superior retained the riskier interests in the proceeds from the pools of securities it established. Superior’s holdings of this retained interest exceeded its capital levels going as far back as 1995. Retained or residual interests are common in asset securitizations and often represent steps that the loan originator takes to enhance the quality of the interests in the pools that are offered for sale. Such enhancements can be critical to obtaining high credit ratings for the pool’s securities. Often, the originator will retain the riskiest components of the pool, doing so to make the other components easier to sell. The originator’s residual interests, in general, will represent the rights to cash flows or other assets after the pool’s obligations to other investors have been satisfied. Overcollateralization assets are another type of residual interest that Superior held. To decrease risk to investors, the originator may overcollateralize the securitization trust that holds the assets and is responsible for paying the investors. An originator can overcollateralize by selling the rights to $100 in principal payments, for instance, while putting assets worth $105 into the trust, essentially providing a cushion, or credit enhancement, to help ensure that the $100 due investors is paid in event of defaults in the underlying pool of loans (credit losses). The originator would receive any payments in excess of the $100 interest that was sold to investors after credit losses are paid from the overcollateralized portion. As shown in figure 1, Superior’s residual interests represented approximately 100 percent of tier 1 capital on June 30, 1995. By June 30, 2000, residual interest represented 348 percent of tier 1 capital. This level of concentration was particularly risky given the complexities associated with achieving a reasonable valuation of residual interests. Superior’s practice of targeting subprime borrowers increased its risk. By targeting borrowers with low credit quality, Superior was able to originate loans with interest rates that were higher than market averages. The high interest rates reflected, at least in part, the relatively high credit risk associated with these loans. When these loans were then pooled and securitized, their high interest rates relative to the interest rates paid on the resulting securities, together with the high valuation of the retained interest, enabled Superior to record gains on the securitization transactions that drove its apparently high earnings and high capital. A significant amount of Superior’s revenue was from the sale of loans in these transactions, yet more cash was going out rather than coming in from these activities. In addition to the higher risk of default related to subprime lending, there was also prepayment risk. Generally, if interest rates decline, a loan charging an interest rate that is higher than market averages becomes more valuable to the lender. However, lower interest rates could also trigger higher than predicted levels of loan prepayment—particularly if the new lower interest rates enable subprime borrowers to qualify for refinancing at lower rates. 
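The prepayment risk described above can be illustrated with a simple projection. In the sketch below, the pool size, loan rate, and annual prepayment speeds are assumptions chosen only to show the direction of the effect; they are not Superior's figures, and the interest-only simplification is ours.

```python
# Minimal sketch: how faster prepayment reduces the interest collected from a
# pool of high-rate loans. All inputs are hypothetical.

def total_interest(pool_balance: float, loan_rate: float,
                   annual_prepay_rate: float, years: int) -> float:
    """Sum of annual interest collected, assuming interest-only loans where a
    fixed fraction of the remaining balance prepays each year."""
    collected = 0.0
    balance = pool_balance
    for _ in range(years):
        collected += balance * loan_rate       # interest earned on what remains
        balance *= 1 - annual_prepay_rate      # a share of borrowers refinance
    return collected

pool = 100_000_000   # hypothetical pool of subprime mortgages
rate = 0.11          # above-market interest rate on the underlying loans

for prepay in (0.10, 0.25, 0.40):
    print(f"Prepayment of {prepay:.0%}/yr -> interest over 10 years: "
          f"${total_interest(pool, rate, prepay, 10):,.0f}")
# A residual interest valued on a slow prepayment assumption overstates the
# cash flows actually available once borrowers refinance at lower rates.
```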
Higher-than-projected prepayments negatively impact the future flows of interest payments from the underlying loans in a securitized portfolio. Additionally, Superior expanded its loan origination and securitization activities to include automobile loans. The credit risk of automobile loans is inherently higher than that associated with home mortgages, because these loans are associated with even higher default and loss rates. Auto loan underwriting is divided into classes of credit quality (most commonly A, B, and C). Some 85 percent of Superior Bank’s auto loans went to people with B and C ratings. In Superior’s classification system, these borrowers had experienced credit problems in the past because of unusual circumstances beyond their control (such as a major illness, job loss, or death in the family) but had since resolved their credit problems and rebuilt their credit ratings to a certain extent. As with its mortgage securitizations, Superior Bank was able to maintain a high spread between the interest rate of the auto loans and the yield that investors paid for the securities based on the pooled loans. However, Superior’s loss rates on its automobile loans as of December 31, 1999 were twice as high as Superior’s management had anticipated. Superior Bank’s business strategy rested heavily on the value assigned to the residual interests that resulted from its securitization activities. However, the valuation of residual interests is extremely complex and highly dependent on making accurate assumptions regarding a number of factors. Superior overvalued its residual interests because it did not discount to present value the future cash flows that were subject to credit losses. When these valuations were ultimately adjusted, at the behest of the regulators, the bank became significantly undercapitalized and eventually failed. There are significant valuation issues and risks associated with residual interests. Generally, the residual interest represents the cash flows from the underlying mortgages that remain after all payments have been made to the other classes of securities issued by the trust for the pool, and after the fees and expenses have been paid. As the loan originator, Superior Bank was considered to be in the “first-loss” position (i.e., Superior would suffer any credit losses suffered by the pool, before any other investor.) Credit losses are not the only risks held by the residual interest holder. The valuation of the residual interest depends critically on how accurately future interest rates and loan prepayments are forecasted. Market events can affect the discount rate, prepayment speed, or performance of the underlying assets in a securitization transaction and can swiftly and dramatically alter their value. The Financial Accounting Standards Board (FASB) recognized the need for a new accounting approach to address innovations and complex developments in the financial markets, such as the securitization of loans. Under FAS No. 125, “Accounting for Transfers and Servicing of Financial Assets and Extinguishments of Liabilities,” which became effective after December 31, 1996, when a transferor surrenders control over transferred assets, it should be accounted for as a sale. The transferor should recognize that any retained interest in the transferred assets should be reported in its statement of financial position based on the fair value. 
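The first-loss position and the $100/$105 overcollateralization example described earlier can also be worked through numerically. The sketch below uses those hypothetical amounts, together with an assumed range of credit losses, to show how losses consume the originator's cushion before investors are affected; the dollar figures in the concentration calculation are likewise illustrative, chosen only to reproduce the 348 percent ratio shown in figure 1.

```python
# Illustrative sketch of the overcollateralization example described above.
# All amounts are hypothetical.

assets_in_trust = 105.0     # principal placed in the securitization trust
sold_to_investors = 100.0   # principal rights sold to investors
cushion = assets_in_trust - sold_to_investors  # originator's residual cushion

def residual_payment(credit_losses: float) -> float:
    """Cash the originator receives after credit losses are absorbed by the
    overcollateralized cushion (the originator's first-loss position)."""
    return max(cushion - credit_losses, 0.0)

for losses in (0.0, 2.0, 5.0, 8.0):
    # Losses beyond the cushion would fall on the other investors in the pool.
    print(f"Credit losses of {losses:>4.1f} leave the originator {residual_payment(losses):.1f}")

# Concentration of residual interests relative to tier 1 capital, the ratio
# shown in figure 1 (illustrative amounts producing a 348 percent ratio).
residual_interests = 348.0
tier1_capital = 100.0
print(f"Residual interests equal {residual_interests / tier1_capital:.0%} of tier 1 capital")
```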
The best evidence of fair value is a quoted market price in an active market, but if there is no market price, the value must be estimated. In estimating the fair value of retained interests, valuation techniques include estimating the present value of expected future cash flows using a discount rate commensurate with the risks involved. The standard states that those techniques shall incorporate assumptions that market participants would use in their estimates of values, future revenues, and future expenses, including assumptions about interest rates, default, prepayment, and volatility. In 1999, FASB explained that when estimating the fair value of retained interests used as a credit enhancement, the value should be discounted from the date when the asset is estimated to become available to the transferor. This concept is reiterated in FASB’s A Guide to Implementation of Statement 125 on Accounting for Transfers and Servicing of Financial Assets and Extinguishments of Liabilities: Questions and Answers, issued July 1999 and revised September 1999, which states that when estimating the fair value of credit enhancements (retained interests), the transferor’s assumptions should include the period of time that its use of the asset is restricted, reinvestment income, and potential losses due to uncertainties. One acceptable valuation technique is the “cash out” method, in which cash flows are discounted from the date that the credit enhancement becomes available. Superior Bank did not properly value the residual interest assets it reported on its financial statements. Since those assets represented payments that were to be received in the future only after credit losses were reimbursed, they needed to be discounted at an appropriate risk-adjusted rate, in order to recognize that a promise to pay in the future is worth less than a current payment. Superior did not use discounting when valuing its residual interest related to overcollateralization. Yet, as a credit enhancement, the overcollateralized asset is restricted in use under the trust and is not available to Superior until losses have been paid under the terms of the credit enhancement. The result was that Superior Bank reported assets, earnings, and capital that were far in excess of their true values. In addition, there were other issues with respect to Superior’s compliance with FAS No. 125. When Superior finally applied the appropriate valuation techniques and related accounting to the residual interests in early 2001, at the urging of OTS, Superior was forced to take a write-off against its capital and became “significantly undercapitalized.” Federal regulators now have serious concerns about the quality of Ernst & Young’s audit of Superior Bank’s financial statements for the fiscal year ending June 30, 2000. This audit could have highlighted the problems that led to Superior Bank’s failure but did not. Regulators’ major concerns related to the audit include (1) the inflated valuation of residual interests in the financial statements and (2) the absence of any discussion in the auditor’s report of Superior’s ability to continue in business. The accounting profession plays a vital role in the governance structure for the banking industry. In addition to bank examinations, independent certified public accountant audits are performed to express an opinion on the fairness of a bank’s financial statements and to report any material weaknesses in internal controls. 
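The difference that discounting makes, which is the core of the valuation issue described above, can be illustrated with a small present-value calculation. The cash flow schedule and discount rate in the sketch below are assumptions for illustration only; they are not Superior's actual figures.

```python
# Sketch of the "cash out" idea: cash flows from the overcollateralization
# asset are not available until the credit enhancement releases them, so they
# are discounted from that release date at a risk-adjusted rate.
# All figures are hypothetical.

expected_cash_flows = [0, 0, 20.0, 25.0, 25.0, 30.0]  # $ millions by year;
                                                       # nothing is released in
                                                       # the first two years
risk_adjusted_rate = 0.15                              # assumed discount rate

undiscounted_value = sum(expected_cash_flows)
discounted_value = sum(cf / (1 + risk_adjusted_rate) ** year
                       for year, cf in enumerate(expected_cash_flows, start=1))

print(f"Undiscounted (overstated) value: ${undiscounted_value:.1f} million")
print(f"Fair value under discounting:    ${discounted_value:.1f} million")
# Booking the undiscounted figure, as Superior did for its overcollateralization
# asset, overstates assets, earnings, and regulatory capital by the difference.
```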
Auditing standards require public accountants rendering an opinion on financial statements to consider the need to disclose conditions that raise a question about an entity’s ability to continue in business. Audits should provide useful information to the federal regulators who oversee banks, as well as to depositors, owners, and the public. Financial audits that do not meet auditing standards undermine the governance structure of the banking industry. Federal regulators believed that Ernst & Young auditors’ review of Superior’s valuation of residuals failed to identify the overvaluation of Superior’s residual interests in its fiscal year 2000 financial statements. Recognizing the significant growth in residual assets, federal regulators performed a review of Superior’s valuation of its residuals for that same year and found that it was not being properly reported in accordance with Generally Accepted Accounting Principles (GAAP). The regulators believed the incorrect valuation of the residuals had resulted in a significant overstatement of Superior’s assets and capital. Although Ernst & Young’s local office disagreed with the regulators’ findings, Ernst & Young’s national office concurred with the regulators. Subsequently, Superior revalued these assets, resulting in a $270 million write-down of the residual interest value. As a result, Superior’s capital was reduced and Superior became significantly undercapitalized. OTS took a number of actions, but ultimately had to close Superior and appoint FDIC as receiver. An FDIC official stated that Superior had used this improper valuation technique not only for its June 30, 2000, financial statements, but also for the years 1995 through 1999. To the extent that was true, Superior’s earnings and capital were likely overstated during those years, as well. However, in each of those fiscal years, from 1995 through 2000, Superior received an unqualified, or “clean,” opinion from the Ernst & Young auditors. In Ernst & Young’s audit opinion, there was no disclosure of Superior’s questionable ability to continue as a going concern. Yet, 10 months after the September 22, 2000, date of Ernst & Young’s audit opinion, Superior Bank was closed and placed into receivership. Auditing standards provide that the auditor is responsible for evaluating “whether there is a substantial doubt about the entity's ability to continue as a going concern for a reasonable period of time.” This evaluation should be based on the auditor's “knowledge of relevant conditions and events that exist at or have occurred prior to the completion of fieldwork.” FDIC officials believe that the auditors should have known about the potential valuation issues and should have evaluated the "conditions and events" relating to Superior's retained interests in securitizations and the subsequent impact on capital requirements. FDIC officials also believe that the auditors should have known about the issues at the date of the last audit report, and that there was a sufficient basis for the auditor to determine that there was “substantial doubt” about Superior's “ability to continue as a going concern for a reasonable period of time.” Because Ernst & Young auditors did not reach this conclusion in their opinion, FDIC has expressed concerns about the quality of the audit of Superior's fiscal year 2000 financial statements. FDIC has retained legal and forensic accounting assistance to conduct an investigation into the failure of Superior Bank. 
This investigation includes not only an examination of Superior’s lending and investment practices but also a review of the bank's independent auditors, Ernst & Young. It involves a thorough review of the accounting firm's audit of the bank's financial statements and role as a consultant and advisor to Superior on valuation issues. The major accounting and auditing issues in this review will include (1) an evaluation of the over-collateralized assets valuation as well as other residual assets, (2) whether “going concern” issues should have been raised had Superior Bank's financials been correctly stated, and (3) an evaluation of both the qualifications and independence of the accounting firm. The target date for the final report from the forensic auditor is May 1, 2002. OTS officials told us that they have opened a formal investigation regarding Superior’s failure and have issued subpoenas to Ernst & Young, among others. Our review of OTS’s supervision of Superior Bank found that the regulator had information, going back to the mid-1990s, that indicated supervisory concerns with Superior Bank’s substantial retained interests in securitized, subprime home mortgages and recognition that the bank’s soundness depended critically on the valuation of these interests. However, the high apparent earnings of the bank, its apparently adequate capital levels, and supervisory expectations that the ownership of the bank would provide adequate support in the event of problems appear to have combined to delay effective enforcement actions. Problems with communication and coordination between OTS and FDIC also created a delay in supervisory response after FDIC raised serious questions about the operations of Superior. By the time that the PCA directive was issued in February 2001, Superior’s failure was probably inevitable. As Superior’s primary regulator, OTS had the lead responsibility for monitoring the bank’s safety and soundness. Although OTS identified many of the risks associated with Superior’s business strategy as early as 1993, it did not exercise sufficient professional skepticism with respect to the “red flags” it identified with regards to Superior’s securitization activities. Consequently, OTS did not fully recognize the risk profile of the bank and thus did not address the magnitude of the bank’s problems in a timely manner. Specifically: OTS’s assessment of Superior’s risk profile was clouded by the bank’s apparent strong operating performance and higher-than-peer leverage capital; OTS relied heavily on management’s expertise and assurances; and OTS relied on the external audit reports without evaluating the quality of the external auditors’ review of Superior’s securitization activities. OTS’s ratings of Superior from 1993 through 1999 appeared to have been heavily influenced by Superior’s apparent high earnings and capital levels. Beginning in 1993, OTS had information showing that Superior was engaging in activities that were riskier than those of most other thrifts and merited close monitoring. Although neither subprime lending nor securitization is an inherently unsafe or unsound activity, both entail risks that bank management must manage and its regulator must consider in its examination and supervisory activities. While OTS examiners viewed Superior Bank’s high earnings as a source of strength, a large portion of these earnings represented estimated payments due sometime in the future and thus were not realized. 
These high earnings were also indicators of the riskiness of the underlying assets and business strategy. Moreover, Superior had a higher concentration of residual interest assets than any other thrift under OTS’s supervision. However, OTS did not take supervisory action to limit Superior’s securitization activities until after the 2000 examination. According to OTS’s Regulatory Handbook, greater regulatory attention is required when asset concentrations exceed 25 percent of a thrift’s core capital. As previously discussed, Superior’s concentration in residual interest securities equaled 100 percent of tier 1 capital on June 30, 1995, and grew to 348 percent of tier 1 capital by June 30, 2000. However, OTS’s examination reports during this period reflected an optimistic understanding of the implications for Superior Bank. The examination reports consistently noted that the risks associated with such lending and related residual interest securities were balanced by Superior’s strong earnings, higher-than-peer leverage capital, and substantial reserves for loan losses. OTS examiners did not question whether the ongoing trend of high growth and concentrations in subprime loans and residual interest securities was a prudent strategy for the bank. Consequently, the CAMELS ratings did not accurately reflect the conditions of those components. Superior’s business strategy as a lender to high-risk borrowers was clearly visible in data that OTS prepared comparing it to other thrifts of comparable size. Superior’s ratio of nonperforming assets to total assets in December 1998 was 233 percent higher than the peer group’s median. Another indicator of risk was the interest rate on the mortgages that Superior had made, with a higher rate indicating a riskier borrower. In 1999, over 39 percent of Superior’s mortgages carried interest rates of 11 percent or higher. Among Superior’s peer group, less than 1 percent of all mortgages had interest rates that high. OTS’s 1997 examination report for Superior Bank illustrated the influence of Superior’s high earnings on the regulator’s assessment. The 1997 examination report noted that Superior’s earnings were very strong and exceeded industry averages. The report stated that the earnings were largely the result of large imputed gains from the sale of loans with high interest rates and had not been realized on a cash flow basis. Furthermore, the report recognized that changes in prepayment assumptions could negatively impact the realization of the gains previously recognized. Despite the recognition of the dependence of Superior’s earnings on critical assumptions regarding prepayment and actual loss rates, OTS gave Superior Bank the highest composite CAMELS rating, as well as the highest rating for four of the six CAMELS components—asset quality, management, earnings, and sensitivity to market risk—at the conclusion of its 1997 examination. OTS consistently assumed that Superior’s management had the necessary expertise to safely manage the complexities of Superior’s securitization activities. In addition, OTS relied on Superior’s management to take the necessary corrective actions to address the deficiencies that had been identified by OTS examiners. Moreover, OTS expected the owners of Superior to come to the bank’s financial rescue if necessary. These critical assumptions by OTS ultimately proved erroneous. 
From 1993 through 1999, OTS appeared to have had confidence in Superior's management's ability to safely manage and control the risks associated with its highly sophisticated securitization activities. As an illustration of OTS's reliance on management's assurances, OTS examiners brought to management's attention in the 1997 and 1999 examinations that underlying mortgage pools had prepayment rates exceeding those used in the revaluation. OTS examiners accepted management's response that the prepayment rates observed on those subpools were abnormally high when compared with historical experience and that they believed sufficient valuation allowances had been established on the residuals to prevent any significant changes to capital. Not until the 2000 examination, when OTS examiners demanded supporting documentation concerning the residual interests, did they learn, to their surprise, that such documentation was not always available. OTS's optimistic assessment of the capability of Superior's management continued through 1999. For example, OTS noted in its 1999 examination report that the weaknesses it had detected during the examination were well within the board of directors' and management's capabilities to correct. OTS relied on Superior Bank's management and board of directors to take the necessary corrective action to address the numerous deficiencies OTS examiners identified during the 1993 through 1999 examinations. However, many of the deficiencies remained uncorrected even after repeated examinations. For example, OTS expressed concerns in its 1994 and 1995 examinations about the improper inclusion of reserves for the residual interest assets in the Allowance for Loan and Lease Losses. This practice had the net effect of overstating the institution's total capital ratio. OTS apparently relied on management's assurances that they would take the appropriate corrective action, because this issue was not discussed in OTS's 1996, 1997, or 1999 examination reports. However, OTS discovered in its 2000 examination that Superior Bank had not taken the agreed-upon corrective action but in fact had continued the practice. Similarly, OTS found in both its 1997 and 1999 examinations that Superior was underreporting classified or troubled loans in its Thrift Financial Reports (TFR). In the 1997 examination, OTS found that not all classified assets were reported in the TFR and obtained management's agreement to ensure the accuracy of subsequent reports. In the 1999 examination, however, OTS found that $43.7 million in troubled assets had been shown as repossessions on the most recent TFR, although a significant portion of these assets were accorded a "loss" classification in internal reports; actual repossessions were only $8.4 million. OTS conducted a special field visit to examine the auto loan operations in October 1999, but the review focused on the classification issue rather than on the fact, which FDIC had pointed out, that management had not been very conservative in charging off problem auto credits. OTS also appeared to have assumed that the wealthy owners of Superior Bank would come to the bank's financial rescue when needed. The 2000 examination report demonstrated OTS's attitude toward its supervision of Superior by stating that failure was not likely, given the institution's overall strength and financial capacity and the support of its two ownership interests, the Alvin Dworman and Jay Pritzker families.
OTS’s assumptions about the willingness of Superior’s owners not to allow the institution to fail were ultimately proven false during the 2001 negotiations to recapitalize the institution. As a result, the institution was placed into receivership. OTS also relied on the external auditors and others who were reporting satisfaction with Superior’s valuation method. In previous reports, GAO has supported having examiners place greater reliance on the work of external auditors in order to enhance supervisory monitoring of banks. Some regulatory officials have said that examiners may be able to use external auditors’ work to eliminate certain examination procedures from their examinations—for example, verification or confirmation of the existence and valuation of institution assets such as loans, derivative transactions, and accounts receivable. The officials further said that external auditors perform these verifications or confirmations routinely as a part of their financial statement audits. But examiners rarely perform such verifications because they are costly and time consuming. GAO continues to believe that examiners should use external auditors’ work to enhance the efficiency of examinations. However, this reliance should be predicated on the examiners’ obtaining reasonable assurance that the audits have been performed in a quality manner and in accordance with professional standards. OTS’s Regulatory Handbook recognizes the limitations of examiners’ reliance on external auditors, noting that examiners “may” rely on an external auditor’s findings in low-risk areas. However, examiners are expected to conduct more in-depth reviews of the external auditor’s work in high-risk areas. The handbook also suggests that a review of the auditor’s workpapers documenting the assumptions and methodologies used by the institution to value key assets could assist examiners in performing their examinations. In the case of Superior Bank, the external auditor, Ernst & Young, one of the “Big Five” accounting firms, provided unqualified opinions on the bank’s financial statements for years. In a January 2000 meeting with Superior Bank’s Audit Committee to report the audit results for the fiscal year ending June 30, 1999, Ernst & Young noted that “after running their own model to test the Bank’s model, Ernst & Young believes that the overall book values of financial receivables as recorded by the Bank are reasonable considering the Bank’s overall conservative assumptions and methods.” Not only did Ernst & Young not detect the overvaluation of Superior’s residual interests, the firm explicitly supported an incorrect valuation until, at the insistence of the regulators, the Ernst &Young office that had conducted the audit sought a review of its position on the valuation by its national office. Ultimately, it was the incorrect valuation of these assets that led to the failure of Superior Bank. Although the regulators recognized this problem before Ernst & Young, they did not do so until the problem was so severe that the bank’s failure was inevitable. FDIC raised serious concerns about Superior’s operations at the end of 1998 based on its off-site monitoring and asked that an FDIC examiner participate in the examination of the bank that was scheduled to start in January 1999. 
At that time, OTS rated the institution a composite "1." Although FDIC's 1998 off-site analysis began the identification of the problems that led to Superior's failure, FDIC had conducted similar off-site monitoring in previous years that did not raise concerns. During the late 1980s and early 1990s, FDIC examined Superior Bank several times because it was operating under an assistance agreement with FSLIC. However, once Superior's condition stabilized and its composite rating was upgraded to a "2" in 1993, FDIC's review was limited to off-site monitoring. In 1995, 1996, and 1997, FDIC reviewed the annual OTS examinations and other material, including the bank's supervisory filings and audited financial statements. Although FDIC's internal reports noted that Superior's holdings of residual assets exceeded its capital, they did not identify these holdings as concerns. FDIC's interest in Superior Bank was heightened in December 1998, when it conducted an off-site review based on September 30, 1998, financial information. During this review, FDIC noted—with alarm—that Superior Bank exhibited a high-risk asset structure. Specifically, the review noted that Superior had significant investments in the residual values of securitized loans. These investments, by then, were equal to roughly 150 percent of its tier 1 capital. The review also noted that significant reporting differences existed between the bank's audit report and its quarterly financial statement to regulators, that the bank was a subprime lender, and that it had substantial off-balance-sheet recourse exposure. As noted earlier, however, the bank's residual assets had been over 100 percent of capital since 1995. FDIC had been aware of this high concentration and had noted it in the summary analyses of examination reports that it completed during off-site monitoring, but FDIC did not initiate any additional off-site activities or raise any concerns to OTS until after the December 1998 off-site review. Although guidance now in place would have imposed limits at 25 percent, at the time there was no explicit direction to the bank's examiners or analysts on safe limits for residual assets. However, Superior was clearly an outlier, with holdings substantially greater than those of its peer group. In early 1999, FDIC's additional off-site monitoring and review of OTS's January 1999 examination report—in which OTS rated Superior a "2"—generated additional concerns. As a result, FDIC officially downgraded the bank to a composite "3" in May 1999, triggering higher deposit insurance premiums under the risk-related premium system. According to FDIC and OTS officials, FDIC participated fully in the oversight of Superior after this point. Communication between OTS and FDIC related to Superior Bank was a problem. Although the agencies worked together effectively on enforcement actions (discussed below), poor communication seems to have hindered coordination of supervisory strategies for the bank.
The policy regarding FDIC’s participation in examinations led by other federal supervisory agencies was based on the “anticipated benefit to FDIC in its deposit insurer role and risk of failure the involved institution poses to the insurance fund.” This policy stated that any back-up examination activities must be “consistent with FDIC’s prior commitments to reduce costs to the industry, reduce burden, and eliminate duplication of efforts.” “The FDIC’s written request should demonstrate that the institution represents a potential or likely failure within a one year time frame, or that there is a basis for believing that the institution represents a greater than normal risk to the insurance fund and data available from other sources is insufficient to assess that risk.” “The FDIC’s off-site review noted significant reporting differences between the bank’s audit report and its quarterly financial statement to regulators, increasing levels of high-risk, subprime assets, and growth in retained interests and mortgage servicing assets.” Because of these concerns, FDIC regional staff called OTS regional staff and discussed having an FDIC examiner participate in the January 1999 examination of Superior Bank. OTS officials, according to internal e-mails, were unsure it they should agree to FDIC’s participation. Ongoing litigation between FDIC and Superior and concern that Superior’s “poor opinion” of FDIC would “jeopardize working relationship” with Superior were among the concerns expressed in the e-mails. OTS decided to wait for a formal, written FDIC request to see if it “convey a good reason” for wanting to join in the OTS examination. OTS and FDIC disagree on what happened next. FDIC officials told us that they sent a formal request to the OTS regional office asking that one examiner participate in the next scheduled examination but did not receive any response. OTS officials told us that they never received any formal request. FDIC files do contain a letter, but there is no way to determine if it was sent or lost in transit. This letter, dated December 28, 1998, noted areas of concern as well as an acknowledgment that Superior’s management was well regarded, and that the bank was extremely profitable and considered to be “well-capitalized.” OTS did not allow FDIC to join their exam, but did allow its examiners to review work papers prepared by OTS examiners. Again, the two agencies disagree on the effectiveness of this approach. FDIC’s regional staff has noted that in their view this arrangement was not satisfactory, since their access to the workpapers was not sufficiently timely to enable them to understand Superior’s operations. OTS officials told us that FDIC did not express any concerns with the arrangement and were surprised to receive a draft memorandum from FDIC’s regional office proposing that Superior’s composite rating be lowered to a “3,” in contrast to the OTS region’s proposed rating of “2.” However, by September 1999, the two agencies had agreed that FDIC would participate in the next examination, scheduled for January 2000. In the aftermath of Superior’s failure and the earlier failure of Keystone National Bank, both OTS and FDIC have participated in an interagency process to clarify FDIC’s role, responsibility, and authority to participate in examinations as the “backup” regulator. 
In both bank failures, FDIC had asked to participate in examinations, but the lead regulatory agency (OTS in the case of Superior and the Office of the Comptroller of the Currency in the case of Keystone) denied the request. On January 29, 2002, FDIC announced an interagency agreement that gives it more authority to enter banks supervised by other regulators. While this interagency effort should lead to a clearer understanding among the federal bank supervisory agencies about FDIC's participation in the examinations of and supervisory actions taken at open banks, it is important to recognize that at the time FDIC asked to join in the 1999 examination of Superior Bank, there were policies in place that should have guided its request and OTS's decision on FDIC's participation. Thus, how the new procedures are implemented will be a critical issue. Ultimately, coordination and cooperation among federal bank supervisors depend on communication among these agencies, and miscommunication plagued OTS and FDIC at a time when the two agencies were just beginning to recognize the problems that they confronted at Superior Bank. As a consequence of the delayed recognition of problems at Superior Bank, enforcement actions were not successful in containing the loss to the deposit insurance fund. Once the problems at Superior Bank had been identified, OTS took a number of formal enforcement actions against Superior Bank, starting on July 5, 2000. These actions included a PCA directive. There is no way to know whether earlier detection of the problems at Superior Bank, particularly the incorrect valuation of the residual assets, would have prevented the bank's ultimate failure. However, earlier detection would likely have triggered enforcement actions that could have limited Superior's growth and asset concentration and, as a result, the magnitude of the loss to the insurance fund. Table 2 describes the formal enforcement actions. (Informal enforcement actions before July 2000 included identifying "actions requiring board attention" in the examination reports, including the report dated Jan. 24, 2000.) The first action, the "Part 570 Safety and Soundness Action," followed the completion of an on-site examination that began in January 2000, with FDIC participation. This action formally notified Superior's Board of Directors of deficiencies and required that the board take several actions, including: developing procedures to analyze the valuation of the bank's residual interests, including obtaining periodic independent valuations; developing a plan to reduce the level of residual interests to 100 percent of the bank's Tier 1 or core capital within 1 year; addressing issues regarding the bank's automobile loan program; and revising the bank's policy for allowances for loan losses and maintaining adequate allowances. On July 7, 2000, OTS also officially notified Superior that it had been designated a "problem institution." This designation placed restrictions on the institution, including on asset growth. Superior Bank submitted a compliance plan, as required, on August 4, 2000. Because of the time that Superior and OTS took to negotiate the required actions, this plan was never implemented, but it did prompt Superior to cease its securitization activities. While Superior and OTS were negotiating over the Part 570 plan, Superior adjusted the value of its residual interests with a $270 million write-down.
This, in turn, led to the bank's capital level falling to the "significantly undercapitalized" category, triggering a PCA directive that OTS issued on February 14, 2001. The PCA directive required the bank to submit a capital restoration plan by March 14, 2001. Superior Bank, now with new management, submitted a plan on that date that, after several amendments (detailed in the chronology in app. I), OTS accepted on May 24, 2001. That plan called for reducing the bank's exposure to its residual interests and recapitalizing the bank with a $270 million infusion from the owners. On July 16, 2001, however, the Pritzker interests, one of the two ultimate owners of Superior Bank, advised OTS that they did not believe that the capital plan would work and therefore withdrew their support. When efforts to change their position failed, OTS appointed FDIC as conservator and receiver of Superior. Although a PCA directive was issued when the bank became "significantly undercapitalized," losses to the deposit insurance fund were still substantial. The reasons for this are related to the design of PCA itself. First, under PCA, capital is a key factor in determining an institution's condition. Superior's capital did not fall to the "significantly undercapitalized" level until it corrected its flawed valuation of its residual interests. Incorrect financial reporting, as occurred at Superior Bank, limits the effectiveness of PCA because it undermines the regulators' ability to accurately measure capital. Second, PCA's current test for "critically undercapitalized" is based on the tangible equity capital ratio, which is not a risk-based capital measure. Thus, it includes only on-balance-sheet assets and does not fully encompass off-balance-sheet risks, such as those presented by an institution's securitization activities. Therefore, an institution might become undercapitalized using the risk-based capital ratio but would not fall into the "critically undercapitalized" PCA category under the current capital measure.

"PCA is tied to capital levels and capital is a lagging indicator of financial problems. It is important that regulators continue to use other supervisory and enforcement tools, to stop unsafe and unsound practices before they result in losses, reduced capital levels, or failure."

Further, PCA implicitly contemplates that the deterioration of a bank's condition and capital will take place gradually over time. In some cases, problems materialize rapidly, or, as in Superior's case, long-developing problems are identified suddenly. In such cases, PCA's requirement that a bank submit a plan to address the problems can potentially delay other, more effective actions. It is worth noting that while Section 38 of the Federal Deposit Insurance Act uses capital as a key factor in determining an institution's condition, Section 39 of the act gives federal regulators the authority to establish safety-and-soundness-related management and operational standards that do not rely on capital but could be used to bring corrective actions before problems reach the capital account.
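The difference between the two capital measures can be made concrete with a brief sketch. The figures below are hypothetical; the 2 percent tangible-equity floor for "critically undercapitalized" and the 8 percent total risk-based benchmark for adequate capitalization are the standard PCA reference points under Section 38, but everything else is illustrative rather than a description of Superior's actual balance sheet.

```python
# Hypothetical figures, for illustration only. Under PCA, the "critically
# undercapitalized" test uses tangible equity divided by total on-balance-sheet
# assets (a 2 percent floor), while the risk-based ratios divide capital by
# risk-weighted assets, which pick up off-balance-sheet exposures such as
# securitization recourse.

def tangible_equity_ratio(tangible_equity: float, total_assets: float) -> float:
    """On-balance-sheet measure used for the critically undercapitalized test."""
    return tangible_equity / total_assets


def total_risk_based_ratio(total_capital: float, risk_weighted_assets: float) -> float:
    """Risk-based measure; adequate capitalization generally requires 8 percent."""
    return total_capital / risk_weighted_assets


# A hypothetical thrift with heavy off-balance-sheet securitization exposure:
tangible_equity = 60.0          # $ millions
total_assets = 2_000.0          # on-balance-sheet assets only
total_capital = 70.0
risk_weighted_assets = 1_500.0  # includes credit-equivalent amounts for
                                # off-balance-sheet recourse exposure

print(f"tangible equity ratio:  {tangible_equity_ratio(tangible_equity, total_assets):.1%}")
print(f"total risk-based ratio: {total_risk_based_ratio(total_capital, risk_weighted_assets):.1%}")
# Output: 3.0% and 4.7% -- the risk-based measure signals undercapitalization
# (below 8 percent) while the tangible equity test stays above the 2 percent
# "critically undercapitalized" floor.
```

In other words, an institution whose risk lies largely off the balance sheet can look far healthier under the tangible equity test than under a risk-based measure, which is the design issue the Superior case illustrates.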
The failure of Superior Bank illustrates the possible consequences when banking supervisors do not recognize that a bank has a particularly complex and risky portfolio. Several other recent failures provide a warning that the problems seen in the examination and supervision of Superior Bank can exist elsewhere. Three other banks, BestBank, Keystone Bank, and Pacific Thrift and Loan (PTL), failed and had characteristics that were similar in important respects to Superior's. These failures involved FDIC (PTL and BestBank) and the Office of the Comptroller of the Currency (Keystone). BestBank was a Colorado bank that closed in 1998, costing the insurance fund approximately $172 million. Like Superior, it had a business strategy of targeting subprime borrowers, who had high delinquency rates, and it reported substantial gains from these loans in the form of fee income. The bank was closed because it had falsified its accounting records regarding delinquency rates and was subsequently unable to absorb the estimated losses from these delinquencies. Keystone, a West Virginia bank, failed in 1999, costing the insurance fund approximately $800 million. While fraud committed by bank management was the most important cause of its failure, Keystone's business strategy was similar to Superior's and led to some similar problems. In 1993, Keystone began purchasing and securitizing Federal Housing Administration Title I home improvement loans that were originated throughout the country. These subprime loans targeted highly leveraged borrowers with little or no collateral. The securitization of subprime loans became Keystone's main line of business and contributed greatly to its apparent profitability. The examiners, however, found that Keystone did not record its residual interests in these securitizations until September 1997, several months after FAS No. 125 took effect. Furthermore, examiners found that the residual valuation model was deficient and that Keystone had an unsafe concentration of mortgage products. PTL was a California bank that failed in 1999, costing the insurance fund approximately $52 million. Like Superior Bank, PTL entered the securitization market by originating loans for sale to third-party securitizing entities. While PTL enjoyed high asset and capital growth rates, valuation was an issue. Also, as with Superior Bank, the examiners over-relied on external auditors in the PTL case. According to the material loss review, Ernst & Young, PTL's accountant, used assumptions that were unsupported and optimistic. An abbreviated chronology of key events is described in table 1 below. Some details have been left out to simplify what is a more complicated story. Readers should also keep in mind that ongoing investigations are likely to provide additional details at a later date.
The Federal Deposit Insurance Corporation (FDIC) has projected that the failure of Superior Bank could cost the deposit insurance fund as much as $526 million. A major reason for the failure was Superior Bank's business strategy of originating and securitizing subprime loans on a large scale. In addition to the concentration in risky assets, the bank did not properly value and account for the interests that it had retained in pooled home mortgages. Superior's external auditor, Ernst & Young, also failed to detect the improper valuation of Superior's retained interests until the Office of Thrift Supervision (OTS) and FDIC insisted that the issue be reviewed by the auditor's national office. Federal regulators did not identify and act on the bank's problems early enough to prevent a loss to the deposit insurance fund. Both OTS and FDIC were aware of the substantial concentration of retained interests that Superior held, but they took little action because of the apparently high level of earnings, the apparently adequate capital, and the belief that the bank's management was conservatively managing the institution.
DOT regulates tens of thousands of dangerous goods, which can include poisons, pesticides, radioactive materials, and explosives. About 20 percent of these goods may not travel by air at all. As shown in figure 1, the remainder may travel on passenger or cargo aircraft, or both. Using a United Nations classification system, DOT divides all dangerous goods into nine general classes according to their physical, chemical, biological, and nuclear properties. Most of the dangerous goods that may not travel by air at all are the most highly explosive, toxic, oxidizing, self-reactive, or flammable chemical substances or articles in their class. In addition to prohibiting some types of dangerous goods from being carried by air at all, DOT restricts the types and amounts of other dangerous goods that any individual passenger or cargo aircraft may carry. For both passenger and cargo aircraft, DOT spells out these restrictions in four ways: By name—dangerous goods that represent an unacceptable hazard on aircraft or are known to have caused an aircraft fire or explosion, such as chemical oxygen generators, are specifically forbidden by name. By hazard class and subdivision—certain subdivisions of the classes of dangerous goods are known to be highly reactive or toxic (for example, most explosives and all spontaneously combustible materials), so DOT excludes them from passenger flights. By quantities contained per outer package—DOT restricts the quantity of certain substances or the number of articles that may be present in the outermost shipping containers in the cargo hold, with tighter limits for passenger aircraft than for cargo aircraft. For example, DOT allows the carriage of up to 30 liters of certain highly flammable liquids per outer package on cargo aircraft but imposes limits of 1 liter or less on passenger aircraft. By packaging integrity—dangerous goods must be packaged so as to protect the integrity of the shipment and safeguard against accidental leaks or spills. For passenger aircraft, whose cargo areas are divided into multiple compartments, DOT also restricts the aggregate quantities of dangerous goods that may be carried per cargo compartment. Figure 2 shows the kinds of containers in which dangerous goods typically travel in these cargo compartments. Dangerous goods permitted onboard passenger aircraft include dry ice and solvents; cargo aircraft may also carry materials such as paint or medical waste. Table 1 provides a complete listing of the nine classes of dangerous goods, their descriptions, an example for each class, and some of the restrictions DOT places on the carriage of each by type of aircraft. According to the U.S. Census Bureau's most recent survey on the movement of hazardous goods in the United States, class 3 dangerous goods (flammable liquids, such as paint) account for the greatest portion (by weight) of the nine classes of dangerous goods shipped by air. However, the vast majority of flammable liquids travel by other modes. The percentage of total shipments made by air was greatest for radioactive materials (class 7)—just over 8 percent of the total radioactive tonnage shipped in 1997 was shipped by air. According to FAA, cargo aircraft, such as those operated by the major delivery services FedEx and United Parcel Service, Inc. (UPS), carry about 75 percent of the nation's dangerous goods air shipments. The remaining 25 percent travel onboard passenger aircraft in cargo compartments.
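To summarize how these restrictions fit together, the sketch below shows one simplified way of representing the forbidden-by-name prohibitions and per-outer-package quantity limits described above. The entries reflect only the examples cited in this report, not DOT's actual hazardous materials table, which is far more detailed.

```python
# Simplified illustration only: entries reflect the examples cited in this
# report, not DOT's actual hazardous materials table, which is organized by
# proper shipping name and hazard class and is far more detailed.

FORBIDDEN_BY_NAME = {"chemical oxygen generator"}

# Per-outer-package quantity limits in liters, by aircraft type.
QUANTITY_LIMITS_LITERS = {
    "certain highly flammable liquids": {"passenger": 1.0, "cargo": 30.0},
}


def max_quantity_liters(material: str, aircraft: str):
    """Allowed liters per outer package, or None if the material is forbidden
    by name or not covered by this simplified table."""
    if material in FORBIDDEN_BY_NAME:
        return None
    limits = QUANTITY_LIMITS_LITERS.get(material)
    if limits is None:
        return None
    return limits.get(aircraft)


print(max_quantity_liters("certain highly flammable liquids", "passenger"))  # 1.0
print(max_quantity_liters("certain highly flammable liquids", "cargo"))      # 30.0
print(max_quantity_liters("chemical oxygen generator", "cargo"))             # None
```

The same structure could be extended with the by-class exclusions and per-compartment aggregate limits that apply to passenger aircraft, but the point of the sketch is simply that the rules vary both by material and by aircraft type.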
Ensuring the safe transportation of dangerous goods by air is a shared responsibility of federal agencies, shippers, and airlines—the success of which ultimately depends on the efforts of thousands of individuals every day. Within DOT, the following have responsibility for dangerous goods: The Research and Special Programs Administration (RSPA) regulates the transportation of dangerous goods by truck, train, ship, pipeline, and plane. It decides which materials to define as hazardous; writes the rules for packaging, handling, and carrying them; and prescribes training requirements for shippers’ and carriers’ dangerous goods employees. RSPA, along with the other DOT operating administrations that operate and manage dangerous goods programs, conducts inspections and investigations to determine compliance with dangerous goods laws and regulations for all modes of transportation and, where appropriate, initiates enforcement actions against those it finds not to be in compliance. RSPA maintains a database for closed dangerous goods enforcement actions from these operating administrations, and another database that tracks dangerous goods incidents from these operating administrations. The Office of Intermodalism, reporting to the Secretary of Transportation, is responsible for implementing recommendations from a March 2000 evaluation of DOT’s dangerous goods program, coordinating intermodal and cross-modal dangerous goods activities, and coordinating DOT-wide outreach activities. For example, in 2001, to improve awareness of dangerous goods incidents occurring during shipments, this office sent out letters to shippers most frequently identified in RSPA’s dangerous goods incident database. FAA carries out responsibilities for ensuring compliance with the rules for transporting dangerous goods by air. In addition, FAA assesses carriers’ operations and investigates dangerous goods incidents or accidents. FAA also has other responsibilities, including those relating to the prosecution and adjudication of enforcement actions against those found to have violated the dangerous goods rules. The Postal Service is both a carrier and a shipper of dangerous goods because it not only carries shipments on aircraft that it leases, but it also sends U.S. mail onboard commercial passenger and cargo airlines. As a result, the airlines carrying U.S. mail rely on the Postal Service as a first line of defense in ensuring the safety of the packages they accept for transport and in preventing the shipment of anything that should not travel by air. Shippers—whether they are businesses or individuals—have the primary responsibility for ensuring the safety of their dangerous goods shipments. They are required to train their employees to package their shipments safely and to tell the carriers to whom they deliver these shipments that they contain dangerous goods. Carriers share some of the responsibility for the safe transportation of dangerous goods. They do so by training their employees to handle these shipments properly, to identify likely instances of improper shipments (such as those containing undeclared dangerous goods), and to verify that the indirect air carriers from whom they accept consolidated cargo shipments have FAA-approved security programs in place to prevent explosive or incendiary devices from being placed onboard. Carriers are also responsible for reporting to DOT any instance of noncompliance they discover. 
From tragic accidents over the years and day-to-day experience in handling cargo traffic, DOT and major carriers know that shipments of undeclared dangerous goods can have disastrous consequences. The nature and frequency of such shipments—and, by extension, the amount of effort that should be put into stopping them—are difficult to estimate because of data limitations. Moreover, the inability of commercially available screening equipment to detect many types of dangerous goods, the costs of delaying shipments to inspect them, and restrictions against opening certain packages may preclude the collection of better data. Undeclared and other improper shipments of dangerous goods can pose a high risk because of the nature of air transportation. In recent years, both RSPA and FAA have expressed concern about undeclared dangerous goods shipments. In its departmentwide March 2000 evaluation of the dangerous goods program, DOT reported that the United States has a relatively good safety record, given the amounts of dangerous goods that are shipped by all modes of transportation each year. However, DOT added that the potential still remains for dangerous goods incidents with catastrophic consequences, and, even though relatively small amounts of dangerous goods travel by air (compared with other modes of transportation), a single mishap can have serious consequences. For example, FAA has reported the following incidents: In 1996, a major passenger airline carried undeclared dangerous goods—calcium hypochlorite and liquid bleach—on a flight from California to Jamaica. Upon arrival, airport personnel discovered smoke coming from the aircraft's cargo doors and encountered toxic fumes when they opened the cargo compartment. The box of undeclared dangerous goods was leaking and burst into flames shortly after the airport personnel removed it from the cargo hold. In 1998, an undeclared shipment of electric storage batteries (considered "wet" because they contain either electrolyte acid or alkaline corrosive battery fluid) burst into flames while en route by truck to an airport, where it had been scheduled to be placed aboard a major passenger carrier's aircraft. In 1999, a major cargo carrier transported an undeclared shipment of liquefied petroleum gas from Portland, Oregon, to New York on a regularly scheduled cargo flight. One day after arriving in New York, the package burst into flames at the carrier's sorting facility. Three of the four major carriers we interviewed and DOT expressed concern about the safety of carrying dangerous goods. According to these three carriers, even though they discover relatively few undeclared shipments, their greatest safety concern in the air transportation of these goods is prompted by the undeclared shipments—particularly those they do not detect before accepting them. This concern stems from not knowing how large a volume of undeclared dangerous goods they fail to find, because such shipments present a greater risk than those that shippers properly declare. The major cargo carriers we interviewed and the Postal Service agreed that ignorance or misunderstanding of the rules for transporting dangerous goods is by far the most common reason why shippers fail to properly declare their dangerous goods shipments. According to one carrier, in very limited instances, shippers will deliberately not declare their shipments even when they know they are breaking the rules.
However, no carrier cited cost as a reason why shippers fail to declare their shipments, even though shipping costs are usually higher for dangerous goods than for nondangerous goods. An official from one carrier stated that he had never seen a case of a shipper willfully failing to properly declare a dangerous goods shipment because of cost concerns. Furthermore, at the Postal Service, it is doubtful that cost is a cause of undeclared shipments, because the Postal Service does not charge more for carrying these shipments than it does for carrying those that are not hazardous; all of the Postal Service's charges are based on weight and class, regardless of the contents. According to a 1999 threat assessment published by DOT's Volpe Center, three types of data that are needed to thoroughly assess the risks of carrying declared and undeclared dangerous goods by air were unavailable. These were (1) what amounts of dangerous goods are shipped by class and division (for all modes of transportation), (2) how often incidents related to dangerous goods involve undeclared shipments, and (3) what amounts and what types of undeclared dangerous goods are shipped by air. Without these data, the Volpe Center was limited to assessing the threat from dangerous goods instead of the risk. The danger associated with a specific item is its "threat," while the likelihood that the threat will actually result in harm is its "risk." Assessing risk, according to the Volpe Center, requires some indication of the likelihood that dangerous goods will be present on an aircraft—and the data to determine this likelihood were not available. Volpe Center officials attempted to find or compile data sources that would allow them to estimate the total amount of various dangerous goods that might be shipped (for example, over the course of a year), but they were unsuccessful. They found no single source of such data and were not able to piece together multiple sources. For example, Volpe Center staff attempted to compile data from chemical manufacturers to identify the total amounts of their products that move by air and the related distribution chain (that is, the amounts that move by other modes); this information would enable them to identify aggregate amounts of certain dangerous goods that shippers should be declaring, which would be a first step in working toward an estimate of undeclared shipments. However, the industry sources the Volpe Center consulted considered such information proprietary and would not share it. Volpe Center staff also considered assembling cargo manifest information from the airlines, because these records indicate for each flight the amounts and types of dangerous goods the aircraft is carrying. However, Volpe Center staff said the airlines informed them that these data are not in a form usable for such an analysis. Even if the manifest information were available, data on the overall amounts of dangerous goods shipments (such as the Volpe Center sought from the chemical industry) would still be necessary before this manifest information could be useful for estimating undeclared dangerous goods shipments. According to Volpe Center staff, the limitations in the amount and quality of data on dangerous goods shipments make estimating how many shipments contain undeclared dangerous goods more difficult.
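The threat-versus-risk distinction drawn above can be expressed compactly. The sketch below is only an illustration of that distinction, not the Volpe Center's actual methodology: a threat ranking can be built from the severity of harm alone, but a risk estimate also requires the likelihood that the material is actually aboard an aircraft, which is precisely the data that could not be obtained.

```python
# Illustration of the threat/risk distinction discussed above; this is not the
# Volpe Center's methodology. Severity and likelihood are placeholder values
# on a 0-to-1 scale.

def threat_score(severity: float) -> float:
    """Threat: the danger posed if the material is involved in an incident."""
    return severity


def risk_score(severity: float, likelihood_aboard: float) -> float:
    """Risk: severity weighted by the likelihood the material is aboard."""
    return severity * likelihood_aboard


severity = 0.9            # hypothetical: a highly flammable liquid
likelihood_aboard = None  # unknown -- shipment-volume data were unavailable

print(f"threat score: {threat_score(severity)}")
if likelihood_aboard is None:
    print("risk score: cannot be computed without likelihood data")
else:
    print(f"risk score: {risk_score(severity, likelihood_aboard)}")
```

In short, without data on how often undeclared dangerous goods are actually aboard aircraft, only the threat side of the assessment can be completed.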
Our experts in applied research and methodology agreed, noting that certain “hidden populations” methods might be useful for estimating the amount of undeclared dangerous goods shipments, but only if data limitations such as those the Volpe Center identified were overcome. A Massachusetts Institute of Technology expert in transportation research with whom we met agreed that none of the known methods for estimating hidden populations would be feasible for undeclared dangerous goods. The major carriers we interviewed said they most commonly identify undeclared dangerous goods (after accepting them for shipment) when some occurrence prompts them to open a package or, in the case of the Postal Service, to set the package aside for further investigation (because the Postal Service generally cannot open such a package without a search warrant). Most often, this happens when a package leaks, spills, breaks open, or emits an odor, and the carrier or Postal Service employees identify the occurrence as potentially a dangerous goods incident. One carrier also indicated that occasionally packages open as a result of handling or must be opened when they lose their address labels. In some of these instances, the company has discovered undeclared dangerous goods. This same company also noted that, on rare occasions, it learns of undeclared dangerous goods from informants—employees of either the company that shipped the package or competitors of that company. The carriers we interviewed reported that, although they have the consent of shippers to open packages that have been accepted for shipment, they seldom discover undeclared dangerous goods. Although they did not cite a specific percentage, they described shipments of undeclared dangerous goods as “very rare” and “a handful.” The numbers are believed to be similarly small for the Postal Service—officials estimated that declared dangerous goods represent less than one-tenth of 1 percent of their shipments, and the percentage of these shipments that is undeclared is “very small.” The Volpe Center reported in a 1999 threat assessment that undeclared dangerous goods shipments made up about 0.05 percent of the shipments of several large cargo carriers, but this estimate was based on the recollections of the carriers of how many incidents they typically report to RSPA. Because estimates by the Volpe Center, major carriers, and the Postal Service are based on reported incidents or memory, they are incomplete. Moreover, these estimates refer only to those undeclared shipments that resulted in dangerous goods incidents—they do not include undeclared shipments that never gave carriers cause to open them. As a result, according to the Volpe Center, there are no valid figures for the numbers of dangerous goods shipments that do not comply with regulations for transportation by air. Additionally, when a carrier reports an incident to DOT, RSPA does not currently require the carrier to report whether the shipper properly declared the dangerous goods. Consequently, the estimates of undeclared shipments reported by the Volpe Center and by carriers to us may not include all of the incidents carriers discovered, because the estimates are based on memory and are therefore subject to error. RSPA plans to remedy this limitation by requiring carriers to report whether dangerous goods shipments involved in incidents were declared or undeclared. 
To do so, RSPA is modifying its incident-reporting paperwork (Form 5800.1) to more systematically collect and analyze information on undeclared shipments. RSPA expects to complete this and other ongoing revisions to its incident-reporting form by spring 2003. Technological limitations complicate efforts to estimate the incidence of undeclared dangerous goods shipments. Ideally, technologies generally considered to be less intrusive, such as X-ray or explosives-detection equipment, could be used to identify and characterize undeclared shipments. The Transportation Security Administration (TSA) is currently using this equipment to screen passenger carry-on and checked baggage for weapons and explosives, and, under the Aviation and Transportation Security Act, TSA must ensure that a system is in operation to screen, inspect, or otherwise provide for the security of all air cargo to be transported in all cargo aircraft as soon as practicable. However, X-ray and explosives-detection equipment is not designed to detect many types of dangerous goods. In the future, technology may enable the rapid, less intrusive screening of packages, but in the near term, opening packages remains the best way to obtain information on the nature and frequency of undeclared shipments. Economic obstacles—particularly the costs of opening packages after accepting them—also make it difficult to estimate the nature and frequency of undeclared dangerous goods shipments. According to each of the major carriers we interviewed, the volume of cargo that these airlines carry each day is tremendous. For example, the carriers stated that they carry from at least 1.3 million to more than 2 million shipments each night, a small fraction of which contain dangerous goods. Because the carriers typically guarantee delivery on nearly all of the shipments they carry (such as within 24 hours or 2 business days), anything that slows their ability to move shipments could compromise their ability to meet their guarantees to their customers and, as a result, hurt their competitive position in their industry. Although the carriers we interviewed told us that they obtain the consent of shippers to open packages, they also said they seldom do open packages. Carriers and an association representing cargo and passenger airlines stressed that they are not in the business of opening packages, particularly when shippers are primarily responsible for ensuring the integrity and proper declaration of those packages. The carriers indicated that they have confidence in and place a great, ongoing emphasis on their up-front screening to prevent shippers from offering them undeclared dangerous goods in the first place. Opening packages without probable cause to do so would also be costly to the carriers because they would be responsible for repackaging anything they found to be properly declared— and dangerous goods require special, more expensive packaging than other shipments. Although carriers remain concerned about the possibility of undeclared shipments they may miss, to date the frequency with which they discover shipments of undeclared dangerous goods does not, in their view, justify a step as disruptive and costly as systematically opening a random or targeted selection of shipments. 
Because the Fourth Amendment to the Constitution prohibits unreasonable searches and seizures and neither DOT nor the Postal Service has obtained the consent of owners to have their packages opened for inspection, neither agency may conduct or require random or targeted intrusive inspections of domestic cargo shipments to look for undeclared dangerous goods. Although FAA may remove a package from an aircraft and take other emergency actions if it reasonably believes that the package presents an immediate threat, it generally has no authority to open and inspect a package without a warrant or the owner's consent. The Postal Service may inspect Parcel Post packages. However, packages sent as First Class or Express mail traveling by air may not be inspected. The mail classification schedule recommended by the Postal Rate Commission and adopted by the Postal Service does not distinguish between letters and packages, treating both as "sealed against inspection." Thus, packages sent as First Class or Express mail are protected by the Fourth Amendment to the same extent as letters. To obtain more information on the nature and frequency of undeclared dangerous goods in air transport, FAA has teamed with the U.S. Customs Service, which has the authority to inspect and search international cargo (imports and exports). Specifically, the Customs Service can and does randomly open and inspect international cargo for purposes such as ensuring that shippers have paid the proper tariffs. Most recently, in June and July 2000, the U.S. Customs Service and FAA together conducted inspections of passenger carry-on and checked bags and cargo aboard flights that were entering or departing from the United States at 19 domestic airports. This series of inspections found that 8 percent of targeted cargo shipments (those whose tariff codes indicated that their contents might be hazardous) contained undeclared dangerous goods, 1 percent of passenger carry-on bags contained undeclared dangerous goods, and just under 0.5 percent of passenger checked baggage contained undeclared dangerous goods. The undeclared dangerous goods in the cargo shipments included flammable liquids, fuel control units, aerosols, fire extinguishers, and devices powered by flammable liquid. In the passengers' checked and carry-on bags, the Customs-FAA teams found aerosols, lighters, flammable liquids, safety matches, compressed flammable gases, and automotive batteries. The Customs-FAA team randomly selected the passenger baggage it inspected, but for the cargo, the team matched tariff codes for commodity imports and exports with a dangerous goods trigger list to determine which shipments to inspect. DOT has tried several times to clarify and expand its authority to inspect and open certain packages when its inspectors suspect a violation of the dangerous goods regulations. In its 1997, 1999, and 2001 reauthorization proposals, DOT sought the authority to access, open, examine, and, if need be, remove a package from transportation if it had an objectively reasonable and articulable belief that the package might contain undeclared dangerous goods. According to DOT, this authority, which would apply to all modes, would require its officers or inspectors to have a "particularized and objective basis" for suspecting a violation, such as a pattern of shipping undeclared dangerous goods, in order to open an unmarked package.
DOT further stated that this enhanced authority would enable it to more effectively detect potential violations and to ensure that it took the appropriate remedial actions. According to DOT officials, its reauthorization proposal has not been enacted for reasons unrelated to the merits of its request for additional inspection authority. Because DOT’s reauthorization proposal applies equally to all modes of transportation, it would, if approved, allow DOT to follow up on problem shippers across the modes. However, the proposal would also extend the government’s inspection authority without regard to the differences inherent in transporting dangerous goods by different modes. The same distinctions between air and the other modes that justify more stringent regulations for transporting dangerous goods by air might also justify greater inspection authority for packages shipped by air. A primary objective of DOT’s reauthorization proposal has been to improve the ability of its inspectors to monitor and enforce the dangerous goods regulations. The proposal has not been designed to obtain better information about the nature and frequency of undeclared air shipments. Because it would require a “particularized and objective basis” for opening packages, it would not allow DOT to identify a random sample of packages and conduct inspections whose results could be generalized to all packages in air transport. Thus, its usefulness as a tool for gathering data to estimate the nature and frequency of undeclared air shipments and to profile and target violators would be limited. DOT officials agree that their proposal would not generate statistically valid data, and they have indicated their willingness to modify the proposal so that it would yield more useful information. An alternative to DOT’s proposal, based on the premise that additional and perhaps unique measures are needed to protect air commerce, would require that shippers consent to DOT’s opening packages shipped by air for inspection. This would allow the department to select and open a random sample of packages in order to gather statistically valid data on undeclared air shipments. To prevent dangerous goods shipments from compromising aviation safety, the federal government relies on regulation, research, and outreach, while private industry depends on policies for dealing with known shippers, other restrictions, training, and sanctions. Federal regulations provide a framework for transporting dangerous goods safely by air. As discussed in the background section of this report, these regulations define dangerous goods, identify those that may and may not travel by air, and specify how the materials are to be packaged, handled, and carried. In addition, the regulations prescribe initial and recurrent training for shippers’ and carriers’ employees, and require shippers and carriers to test their employees’ understanding of the material covered in the training. The training, which is designed to increase dangerous goods employees’ safety awareness and to reduce the frequency of dangerous goods incidents, is important because insufficient understanding of the rules is often a factor contributing to such incidents. For example, in 17 of 25 dangerous goods enforcement cases we reviewed involving businesses, FAA identified employees’ lack of training as a contributing factor. 
To monitor the effectiveness of its regulations in promoting safety, RSPA collects information on dangerous goods incidents occurring in the air, water, rail, and truck modes through its Form 5800.1. Nonetheless, the form is not designed to collect all the information that would be useful in monitoring the effectiveness of DOT’s dangerous goods regulations. As previously noted, the form does not ask whether a problem shipment was declared or undeclared—a key question in assessing effectiveness. In addition, the form does not include data fields that precisely identify the different types of packaging deficiencies. While the form has space for written comments, there is no mechanism for standardizing and entering the information from the comments into DOT’s databases. RSPA is revising the form to overcome these limitations. Once carriers begin collecting information on dangerous goods incidents using this revised form, better information on the incidence of undeclared shipments and reasons for packaging deficiencies should be available to FAA and the other operating administrations. In the course of such monitoring, DOT sometimes identifies safety issues that require further research. For example, DOT is currently evaluating ways in which it will strengthen the regulations for shipping batteries, because its analysis indicated that the existing dangerous goods regulations for these shipments may not be sufficient. Beginning in the early 1990s, FAA identified a number of incidents associated with batteries, particularly lithium batteries, aboard aircraft in which the batteries caused fires, smoke, or extreme heat—precisely the kind of effects that make dangerous goods dangerous. In response to these and other concerns, RSPA has taken a number of actions designed to improve the regulations for the transportation of lithium batteries. FAA’s monitoring of reports on incidents involving dangerous goods also led to further work on packaging standards. In examining nearly 3,000 reports from 1998 and 1999, FAA found that 60 percent of the incidents involved properly declared shipments, indicating that the shipments complied with the existing packaging standards. Yet just over half (873) of these properly declared shipments had problems because their packaging failed—that is, their closures or seals leaked. These data prompted FAA to attempt to determine the adequacy of packaging standards for air transportation and the likely causes of leaking closures and seals. Observing an increase in the number of package failures in the past 3 years, FAA questioned whether the existing test methods simulate the realistic combined effects of pressure, temperature, and vibration. As a result, FAA contracted with Michigan State University to study packaging in air transportation. The results of that study, which FAA recently received, indicate that closures are continuing to leak in packages marked as complying with existing packaging standards. Subjecting packages to both high altitude and vibration resulted in a package failure rate of 50 percent. RSPA is reviewing these results. To help prevent dangerous goods incidents aboard passenger aircraft, FAA and RSPA conduct outreach to the public. For example, FAA worked with RSPA to develop for air travelers a brochure that lists items prohibited in passenger baggage (see app. I). The brochure also explains that in-flight variations in temperature and pressure can cause seemingly harmless items to leak or generate toxic fumes during air travel. 
RSPA requires that signs be posted in airport terminals and at check-in counters listing items prohibited in air travel, some of which passengers may not recognize as hazardous in air transportation. In addition, FAA has placed kiosks with information on dangerous goods at 24 major airports to better inform the general public about items that are considered hazardous onboard aircraft. The Postal Service also does consumer outreach to better inform the public about the materials that may and may not be sent through the mail. According to Postal Service officials, there are posters in all of its facilities that warn customers about shipping restricted dangerous goods. In addition, for any customer who ships or requests information about shipping dangerous goods, Postal Service retail employees provide an informational brochure summarizing the applicable rules as well as the shipper’s responsibilities. To prevent undeclared dangerous goods shipments, major carriers limit their business to known shippers and may impose other restrictions. They also train their employees to be a first line of defense against undeclared shipments, and may apply sanctions to shippers who have violated dangerous goods regulations. To ensure that they are dealing with legitimate businesses that are more likely to properly train their employees to comply with dangerous goods rules, the major carriers we interviewed rely on TSA’s “known shipper” requirements or establish formal, contractual relationships with their shippers that mirror the known shipper requirements. According to officials of one of the carriers, the steps involved in becoming a known shipper reduce to an acceptable level the risk that the shipper presents to the carrier. By contrast, the carriers have found, casual or one-time shippers are more likely to offer undeclared dangerous goods for shipment. Three of the four carriers said they try to limit their business with casual or one-time shippers and do not advertise to them. Rather, two of the carriers said, they target business-to-business shippers that typically have experience with shipping high volumes of dangerous goods and may have long-standing relationships with the carriers. The fourth carrier said that it does not accept dangerous goods from casual shippers at all and, for other shippers, requires the establishment of a dangerous goods– shipping agreement, or contract, that spells out obligations for shippers, such as recurring employee-training requirements. Officials of this carrier believe that these contractual obligations reduce the incidence of undeclared shipments. Besides limiting their business primarily to known shippers, the major carriers we interviewed may try to prevent undeclared shipments by limiting the types of materials they will carry and the places where they will accept dangerous goods shipments. Three of the four carriers said they accept fewer types of dangerous goods for shipment than DOT authorizes to travel by air. For example, the carriers said they refuse to carry materials such as toxic or infectious substances, certain explosives, and organic peroxides. In addition, one of the carriers said it would not accept dangerous goods shipments at its retail establishments. This carrier said it would accept such shipments only when its own drivers picked them up from established customers. This carrier’s policy is designed to screen out the casual shippers that might use its retail establishments. 
According to the carrier, this policy also allows it to rely on its drivers’ experience with dangerous goods shipments, their training, and their long-standing relationships with established customers as a first line of screening against undeclared shipments of dangerous goods. While the Postal Service cannot limit its business to known shippers, it accepts fewer dangerous goods for shipment than DOT authorizes to travel by air. In general, the Postal Service limits the dangerous goods it will accept for shipment to certain quantities of consumer commodities that typically present a limited hazard in transportation because of their form, quantity, or packaging. In addition to limiting what dangerous goods it will carry, the Postal Service, as part of its aviation mail security program, requires customers to bring any package weighing 16 ounces or more to a post office for shipment. The intent of this program is to prevent explosives in the mail, but Postal Service officials indicated it has a residual benefit in helping to prevent undeclared shipments of dangerous goods. Specifically, because customers must bring packages that weigh 16 ounces or more to a post office for shipment, Postal Service employees can inspect packages, ask questions about their contents to determine whether they contain anything prohibited, and ensure proper handling for packages containing dangerous goods that may be mailed. The major carriers we interviewed emphasized that the training they provide for their employees is a key component in their efforts to prevent shippers from offering undeclared dangerous goods, supplementing their use of restrictions or the known shipper requirements to guard against such shipments. This training provides information on dangerous goods requirements and procedures for drivers and employees who handle, sort, and load shipments. Through this training, the carriers expect that employees throughout their distribution chain will be able to identify problems such as declaration paperwork that is missing information about the contents of a package labeled as dangerous. Carriers rely particularly on their drivers to draw on their training to, in effect, extend the known shipper concept to their day-to-day interactions with shippers. Training, plus a working knowledge of a company’s established customers, helps the drivers detect inadvertent failures to properly declare a shipment. For example, a driver picking up a shipment from a customer who typically sends some dangerous goods would be expected to raise questions if the customer did not label or declare any of the packages as dangerous. In such an instance, the shipper may have made a mistake or forgotten to declare the dangerous goods. The Postal Service trains its retail employees, who accept packages from the public, to screen packages and prevent those with undeclared or improperly packaged dangerous goods from entering the mail system. According to Postal Service officials, as of August 2002, the agency had trained all 131,000 of its retail employees in procedures for preventing the acceptance of any package containing prohibited materials. 
These procedures include (1) asking shippers a series of questions about the contents of their packages, including whether the packages contain anything hazardous; (2) visually inspecting packages to look for signs of problems, such as leaks, the lack of a return address, or markings indicating that a package contains something a shipper may not know is hazardous; and (3) referring to a reference guide for assistance in answering shippers’ questions about items that may or may not be permissible in the U.S. mail. (See app. II for a summary of DOT’s dangerous goods classes and the materials or quantities from each that are allowed in the U.S. mail.) While the retail employees may be the first to deal with shipments entering the mail system, the Postal Service also provides dangerous goods training to its non-retail employees (such as postal inspectors or employees at business mail entry units), who also handle or carry dangerous goods or respond to incidents involving them. According to the official responsible for the Postal Service’s dangerous goods program, the agency has to rely on its retail employees to screen out unacceptable items because it has limited authority to open mail that has been accepted for shipment. This official believes that face-to-face questioning reduces the anonymity associated with depositing a letter in a mailbox and, in turn, improves the Postal Service’s confidence in shippers’ statements about the contents of packages. To test its retail employees’ performance in specific aspects of customer service, the Postal Service has an ongoing “mystery shopper” program in which its employees pose as customers. In late 2001, the Postal Service began including in the mystery shopper tests a determination of whether the retail employees were following requirements for asking the question about dangerous goods. To date, the Postal Service’s tests indicate that the retail employees asked the required screening question 69 percent of the time. When the retail employees failed to ask the dangerous goods question, Postal Service officials said they provided feedback and retrained the employees. These officials also told us that they provided this feedback to each post office manager and have incorporated targets for improved performance on the mystery shopper tests into the managers’ performance goals. Officials say these results are slowly and steadily improving. A shipper who fails to properly declare a dangerous goods shipment can face serious consequences from a major carrier, particularly if the shipper is a business or other operation with an ongoing need for the carrier’s services. Two of the major carriers we interviewed may, depending on the seriousness of the violation, require a shipper to provide additional remedial training in shipping dangerous goods; apply more stringent terms for accepting shipments from the shipper; or, in more serious instances, permanently terminate the business relationship with the shipper. Officials from one of the carriers stated that their company’s requirements for remedial training in these instances exceed DOT’s requirements for shippers. Similarly, officials from another carrier told us that an inadvertent violation of the rules governing the declaration of dangerous goods would, in most cases, result in a minimum suspension of 60 days, pending the shipper’s completion of training or any other steps the carrier chose to require before again accepting packages from that shipper. 
This same carrier’s officials said that when they suspect that a shipper may have sent undeclared dangerous goods through their system, they will begin an investigation to determine whether the shipper knew or should have known that it was doing so. Until the carrier completes that investigation, the shipper must agree to let the carrier’s staff open and inspect every shipment before accepting it. If this carrier determines that the shipper knowingly offered undeclared dangerous goods, it terminates its business with that shipper. To evaluate the effectiveness of and to enforce federal regulations for shipping dangerous goods by air, DOT collects data on dangerous goods incidents, monitors shippers’ and carriers’ performance, and assesses civil penalties. Within DOT, FAA is primarily responsible for enforcing the regulations for transporting dangerous goods by air. To ensure that the penalties it imposes for violations of dangerous goods regulations are appropriate to shippers’ and carriers’ complete compliance histories, FAA, together with DOT’s other affected operating administrations, is required to consider the compliance history of violators in all modes of transportation when assessing penalties against them. This guidance was difficult for FAA and others to follow because, until very recently, with the exception of RSPA, DOT’s operating administrations were not submitting their enforcement data in a timely manner to DOT’s centralized enforcement database. Finally, to further ensure that appropriate civil penalties are assessed and that similar cases are treated consistently and fairly, FAA requires that the reasons for any reduction to a recommended civil penalty be documented. Our analysis of FAA’s enforcement case files found that FAA is not always documenting its assessments. Like DOT, the Postal Service collects data on dangerous goods incidents, but it lacks DOT’s authority to assess civil penalties for violations and therefore takes few enforcement actions. Legislation proposed by DOT would allow the Postal Service to assess civil penalties. To monitor and enforce compliance with DOT’s dangerous goods regulations, FAA collects data on dangerous goods air incidents and discrepancies through its Airport and Air Carrier Information Reporting System (AAIRS). RSPA’s regulations define incidents as reportable releases of hazardous materials, including those that are unintended and unanticipated. “Discrepancies” are defined in the Hazardous Materials Regulations (HMR) as instances in which dangerous goods are found to be undeclared, misdeclared, or improperly packaged. In addition, FAA collects data on closed dangerous goods enforcement cases through its Enforcement Information System. (See app. III for more information about FAA’s and DOT’s incident and enforcement databases.) To ensure that appropriate civil penalties are assessed, FAA’s enforcement guidance requires the agency to consider the compliance history of violators across all modes of transportation. Until recently, FAA had difficulty complying with this guidance because, with the exception of RSPA, DOT’s operating administrations were not submitting their closed enforcement action data in a timely manner to a central database—the Unified Shipper Enforcement Data System (UNISHIP), maintained by RSPA. DOT developed this database in response to a 1991 GAO report. RSPA is working with DOT’s affected operating administrations to ensure the timely submission of enforcement data. 
On July 17, 2002, the Office of the Secretary of Transportation issued a memorandum calling for the implementation of required procedures for entering data on dangerous goods enforcement actions into UNISHIP. If the database is kept up to date, FAA inspectors can obtain compliance information by querying the central database. Our analysis of FAA’s case files indicates that FAA is not always documenting the reasons for reductions to recommended civil penalties, as its guidance requires. We found cases in which the proposed civil penalty was changed, but either no documentation or incomplete documentation was provided to explain the reasons for the reduction. An FAA official stated that it was FAA’s policy to include documentation for civil penalty changes in the case files. To help ensure that appropriate civil penalties are assessed and that similar cases are treated consistently and fairly, it is important that FAA document the reasons for any reduction to a recommended civil penalty. The enforcement process begins when FAA inspectors obtain an indication of a violation (see fig. 3). The inspector then determines whether the violation warrants administrative action (such as a warning notice or letter of correction), legal enforcement action (such as the imposition of a civil penalty), or referral for criminal prosecution. When the inspector finds that a civil penalty is appropriate, he or she must determine the amount of the civil penalty by consulting FAA’s sanction guidance policy. Legal staff in the regional office or headquarters then review the strength of the evidence, the type of enforcement action, and the amount of the civil penalty, if any. Next, a notice of proposed civil penalty is issued that is consistent with the inspector’s report and the review. The alleged violator then has an opportunity to reply to the civil penalty assessed. If the alleged violator provides convincing evidence that it did not commit the violation, FAA dismisses the case. If FAA and the alleged violator agree on an appropriate fine, FAA issues an order assessing a civil penalty that binds the violator to pay the agreed-upon amount. If no agreement is reached, the case is litigated. [Figure 3: FAA’s Dangerous Goods Enforcement Process (for Civil Penalty Cases)] In 15 of the cases we reviewed, the assessed civil penalty differed from the proposed civil penalty, but FAA included either no documentation or incomplete documentation in the case files to account for the changes. For example: In 2000, the assessed civil penalty on a chemical company for not properly shipping flammable paint was reduced from $75,000 to $15,000, but no reason was provided in the file for the change. 
In 2000, the assessed civil penalty on a paint company for not properly shipping flammable paint was reduced from $59,500 to $37,500, but no reason was provided in the file for the change. In addition, in one case involving the shipment of an oxygen generator by an air carrier in 1997, the recommended civil penalty was reduced by 20 percent, even though oxygen generators were responsible for the ValuJet aircraft crash in 1996. This penalty was reduced for reasons that were not documented, and the reduction was not consistent with the known risks of oxygen generators. The Postal Service’s standards for mailing dangerous goods are similar to DOT’s detailed specifications for packaging, marking, and labeling dangerous goods, although the mail is subject to many additional limitations and prohibitions, which are imposed by provisions of criminal statutes. Yet in contrast with DOT, which can assess civil penalties or pursue criminal penalties for violations of its standards, the Postal Service can only pursue criminal penalties. This leads to little enforcement, because many violations are unintentional and involve situations that are inappropriate for criminal sanctions. At the same time, the high cleanup and damage costs associated with dangerous goods violations are time-consuming to pursue, and damages may be difficult to recover absent authority to assess civil penalties. For example, in a 1998 incident, the Postal Service incurred costs of $87,000 and the carrier incurred damages of $1.4 million when a Priority mail shipment containing four bottles of mercury was found to be leaking upon removal from the aircraft. Another costly incident occurred in 2000, when 3 gallons of gasoline were illegally shipped in a motorcycle gas tank and the tank leaked during the flight, requiring the plane to be taken out of service and cleaned. As part of its proposal to reauthorize the hazardous materials transportation program, DOT has included a provision that would allow the Postal Service to collect civil penalties and to recover costs and damages for dangerous goods violations. The Postal Service has been actively working with DOT, and it supports this provision. Yet others have raised concerns about possible conflicts with the Postal Service’s current law enforcement authority and about the effect on fair competition between the Postal Service and other shippers. The question of whether changes should be made regarding the Postal Service’s law enforcement responsibilities continues to be discussed as the Congress and others revisit the Postal Service’s mission and roles as part of broader postal reform efforts. Without statistically valid, generalizable data on the nature and frequency of undeclared dangerous goods in air transport, DOT does not know to what extent such goods pose a threat to aviation safety, or what resources should be allocated to address that threat. Eventually, affordable diagnostic screening technologies may enable carriers and DOT to monitor dangerous goods shipments efficiently and nonintrusively. Until then, greater inspection authority would enable DOT to randomly select and open packages; gather statistically valid, generalizable data; and profile and target potential violators, thereby possibly enhancing aviation safety. A change in the law requiring that shippers consent to the inspection of packages shipped by air might help to accomplish these objectives. 
The legislation that DOT has proposed seeking greater inspection authority has not to date been limited to the air mode and has not been designed to obtain statistically valid data. However, the distinctions between air and the other modes that justify more stringent regulations for transporting dangerous goods by air, along with the potential benefits to aviation safety that could accrue from better data on undeclared air shipments, might warrant the development of a proposal that would enable DOT to obtain such data. The Office of the Secretary’s recent memorandum to the operating administrations, calling for the timely submission of closed enforcement action data to DOT’s centralized enforcement database, should strengthen FAA’s ability to take appropriate enforcement action against violators of DOT’s dangerous goods regulations. Provided that the operating administrations continue to follow the memorandum, FAA should be able to identify high-risk or problem entities, consider their compliance histories in all modes of transportation as its enforcement policy guidance requires, and ensure that the penalties it assesses against them are appropriate to their histories. Yet FAA still needs to do more to demonstrate that it has assessed appropriate civil penalties. Until it fully documents the reasons for its assessments, or for changes to its initial assessments, as its guidance requires, it cannot provide assurance that the penalties are appropriate or that it has handled similar cases consistently. In order to strengthen DOT’s enforcement of dangerous goods regulations, we recommend that the Secretary of Transportation determine whether the unique characteristics of air transport warrant the development of a legislative proposal that would enhance DOT’s authority to inspect packages shipped by air. Depending on the results of his determination, we further recommend that the Secretary direct the FAA Administrator to develop a legislative proposal that would require shippers to consent to the opening for inspection of packages shipped by air. Such a proposal would not only enhance FAA’s inspection authority but would also enable FAA to obtain statistically valid, generalizable data on the nature and frequency of undeclared air shipments of dangerous goods. Finally, we recommend that the Secretary direct the Administrator to ensure that FAA better communicate and enforce its requirement to document the justification for any substantial changes to an initially proposed penalty before issuing a final order assessing a penalty. We provided DOT and the U.S. Postal Service with a draft of this report for their review and comment. We met with DOT officials, including the Director of RSPA’s Office of Hazardous Materials Enforcement and the Manager of FAA’s Dangerous Goods and Cargo Security Enforcement Program, to receive their comments. The U.S. Postal Service provided comments via E-mail. DOT and the Postal Service generally agreed with our report and provided clarifying and technical comments, which we incorporated as appropriate. In our draft report, we recommended that the Secretary of Transportation direct the DOT administrations that operate and manage a dangerous goods program to submit their enforcement data to RSPA’s centralized database. According to our audit work, the administrations were not submitting the data and, therefore, FAA could not readily comply with its guidance requiring it to consider the compliance history of violators in all modes of transportation. 
However, when we discussed the draft report with DOT officials in October 2002, they provided a July 17, 2002, memorandum from the Office of the Secretary of Transportation directing the operating administrations to submit the data. In addition, in October 2002, DOT furnished evidence that three of the five administrations subsequently provided current data. We therefore deleted this recommendation from the final report. DOT agreed with our other recommendations, acknowledging that its legislative proposals seeking greater inspection authority have not been designed to obtain statistically valid data on undeclared shipments of dangerous goods. DOT further noted that FAA’s upcoming reauthorization legislation could serve as a vehicle for a proposal to expand FAA’s inspection authority, so that the agency could obtain better data on undeclared air shipments. While indicating that changes to initially proposed civil penalties sometimes occur as a result of penalty negotiations, DOT agreed that documenting the justification for changes is important for providing assurance that final penalties are appropriate and consistent. To determine what DOT, the Postal Service, and others involved in the air transport of dangerous goods know about undeclared shipments, we identified relevant studies and interviewed DOT, Postal Service, industry, and industry association officials. We reviewed the documents and reports we obtained, visited DOT’s John A. Volpe National Transportation Systems Center and FAA’s William J. Hughes Technical Center, and conducted additional interviews with the researchers who had carried out critical studies. We also interviewed officials at four of the major cargo carriers, and conducted site visits at three of their facilities. To determine the key mechanisms that the federal government and private industry have in place to prevent dangerous goods from compromising safety, we interviewed agency and industry officials and federal researchers. We also reviewed relevant reports and documents in order to identify recent developments in screening technology. To determine what DOT and the Postal Service do to foster compliance with federal regulations for shipping dangerous goods by air, we interviewed agency officials and reviewed reports and documents. We also examined FAA’s practices for assessing civil penalties by testing 30 randomly selected cases from FAA’s Enforcement Information System, a database of over 2,000 cases. The cases were selected to fairly represent the full range of cases in the database. While the number of cases we tested was too small to enable us to estimate the extent to which FAA’s enforcement strategy was followed in the entire database, these 30 cases permit us to describe the types of practices that occur at critical points in the penalty assessment process. We performed our work from September 2001 through November 2002, in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies to the Chairman and the Ranking Minority Member of the House Committee on Transportation and Infrastructure, and the Chairman of its Subcommittee on Aviation; other appropriate congressional committees; the Secretary of Transportation; the Postmaster General, United States Postal Service; the Under Secretary of Transportation for Security, Transportation Security Administration; the Administrator, Research and Special Programs Administration; and the Administrator, Federal Aviation Administration. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-2384 if you or your staff have any questions about the information in this report. Key contributors to this report are listed in appendix IV. Fuel, paints, lighters, and other items used every day in the home or workplace may seem harmless; however, when transported by air, they can be very dangerous. In flight, variations in temperature and pressure can cause items such as fuel, scuba tanks, propane tanks, CO2 cartridges, and self-inflating rafts to leak, generate toxic fumes, or start a fire. You must declare your hazardous materials to the airline, air package carrier, or U.S. Postal Service. Violators of Federal Hazardous Materials Regulations (49 CFR Parts 171-180) may be subject to a civil penalty of up to $25,000 for each violation and, in appropriate cases, a criminal penalty of up to $500,000 and/or imprisonment of up to 5 years. FAA collects data on dangerous goods air incidents, discrepancies, and enforcement actions through two databases. Its Airport and Air Carrier Information Reporting System (AAIRS) collects basic incident and discrepancy information such as the mode, date, and location of the incident or discrepancy, the carrier and shipper involved, the hazard class of the spilled material, and the consequences of the incident or discrepancy. (See table 1.) FAA’s Enforcement Information System (EIS) collects information on closed dangerous goods enforcement cases. It contains data such as the incident date, the regulations violated, the sanction initially recommended, and the final sanction. These enforcement data are used to monitor and enforce compliance with DOT’s dangerous goods regulations. RSPA collects dangerous goods incident and enforcement data through two databases. Its Hazardous Materials Incident Reporting System (HMIRS) collects dangerous goods incident information across all transportation modes, not just the air mode. This information is similar to that collected in FAA’s AAIRS database, but it does not include discrepancies. RSPA tracks closed hazardous materials enforcement cases through its Unified Shipper Enforcement System (UNISHIP). This database tracks closed enforcement actions across all transportation operating administrations, not simply air. RSPA collects data on dangerous goods incidents from all transportation modes through DOT Form F 5800.1, which captures basic information on incidents such as the mode, date, and location of the incident; the carrier and shipper involved; the hazard class and shipping name of the spilled material; and the consequences of the incident (including deaths, injuries, product loss, and damage). 
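To illustrate the kind of record these reporting systems imply, the following minimal sketch models a single incident report using only the data elements described above (mode, date, location, carrier, shipper, hazard class, shipping name, and consequences). The sketch is illustrative only; the field names and example values are our assumptions and do not represent DOT's actual form layout or database schema.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class IncidentReport:
        # Data elements described for DOT Form F 5800.1 and FAA's AAIRS database;
        # field names are illustrative, not DOT's actual schema.
        mode: str                  # e.g., "air", "rail", "highway", "water"
        incident_date: date
        location: str
        carrier: str
        shipper: str
        hazard_class: str          # e.g., "Class 3 - Flammable liquid"
        shipping_name: str
        deaths: int = 0
        injuries: int = 0
        product_loss: float = 0.0  # estimated value of product lost, in dollars
        damage: float = 0.0        # estimated property damage, in dollars

    # Hypothetical example record
    example = IncidentReport(
        mode="air",
        incident_date=date(1999, 6, 1),
        location="Memphis, TN",
        carrier="Example Air Cargo",
        shipper="Example Chemical Co.",
        hazard_class="Class 3 - Flammable liquid",
        shipping_name="Paint",
        injuries=0,
        product_loss=250.0,
        damage=1200.0,
    )

Structuring the data in standardized fields of this kind, rather than in free-text comments, is what makes it practical to tabulate undeclared shipments and packaging deficiencies across reports.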
RSPA uses the data and the information it collects on dangerous goods incidents to (1) evaluate the effectiveness of existing regulations, (2) assist in determining the need for regulatory changes to cover changing transportation safety problems, and (3) determine major problem areas so that attention can be more suitably directed to them. In addition, both the government and industry use this dangerous goods incident information to chart trends and identify training inadequacies and packaging deficiencies. In addition to RSPA, UNISHIP serves the enforcement programs of the Federal Aviation Administration, the Federal Railroad Administration, the Federal Motor Carrier Safety Administration, the U.S. Coast Guard, and the Inspector General by providing a history of compliance for the companies contained in the system. In addition to those named above, Elizabeth R. Eisenstadt, Arthur L. James, Bert Japikse, David Laverny-Rafter, Bill MacBlane, Kieran McCarthy, Richard Scott, and Katherine Wulff made key contributions to this report.
When shipments of dangerous goods (hazardous chemical substances that could endanger public safety or the environment, such as flammable liquids or radioactive materials) are not properly packaged and labeled for air transport, they can pose significant threats because there is little room for error when something goes wrong in flight. To better understand the risks posed by improper ("undeclared") air shipments, we assessed what is known about their nature and frequency, what key mechanisms are in place to prevent their occurrence, and what the Department of Transportation (DOT) and the Postal Service do to enforce federal regulations for shipping dangerous goods by air. Little is known about the nature and frequency of undeclared shipments of dangerous goods. While major carriers and the Postal Service believe such shipments are rare, their belief is based mainly on inspections of problem shipments, such as those that leak. Statistically valid, generalizable data are not available and would be difficult to obtain, not only because more inspections would entail costly delays for carriers but also because Constitutional protections limit DOT's and the Postal Service's inspection authority. DOT is seeking greater authority to open potentially problematic shipments for inspection, but its efforts are not limited to air transport and would not enable DOT's Federal Aviation Administration (FAA) to obtain statistically valid, generalizable data on the nature and frequency of undeclared air shipments. A change in the law requiring that shippers consent to the opening of packages for inspection might be appropriate for air transport and would enable FAA to obtain such data. FAA could then identify the resources and actions needed to address the problem. Federal regulations create a framework for transporting dangerous goods safely, and outreach to shippers and carriers helps to prevent undeclared shipments. Private industry does business primarily with "known shippers" (those that have shown they comply with the regulations). The Postal Service cannot restrict its business to known shippers, but it requires customers to bring packages weighing 16 ounces or more to a post office for screening. Carriers and the Postal Service both train their employees to screen for undeclared shipments. The Postal Service and FAA monitor and enforce compliance with federal regulations for transporting dangerous goods by air. However, the Postal Service cannot fine violators and seldom takes criminal action, since most violations are inadvertent. FAA's enforcement guidance calls for documenting the reasons for any changes in the fines its inspectors initially propose. GAO's review of enforcement case files indicates that the reasons for changes were not always documented. FAA attributes some changes to the results of penalty negotiations. Because FAA is not always following its guidance, it cannot ensure that its fines are appropriate or consistent.
DOD has increasingly relied on contractors to provide logistics support for weapon system maintenance. These logistics support arrangements have taken various forms. In fiscal year 1998, DOD directed the armed services to pursue logistics support “reengineering” efforts with contractors to achieve cost savings and improve efficiency. A 1999 DOD study identified 30 pilot programs to test logistics support concepts that placed greater reliance on the private sector. Some of the pilot programs involved performance-type arrangements that were subsequently converted to, or designated as, performance-based logistics contracts. DOD’s Quadrennial Defense Review Report advocated the implementation of performance-based logistics, with appropriate metrics, to compress the supply chain by removing steps in the warehousing, distribution, and order fulfillment processes; reducing inventories; and reducing overhead costs while improving the readiness of major weapon systems and commodities. Over the last few years, DOD has issued guidance on the implementation of performance-based logistics. In November 2001, the Office of the Deputy Under Secretary of Defense issued guidance recommending that program managers conduct a sound business case analysis to decide whether they should implement performance-based logistics for new systems and major acquisitions for already fielded systems. In an August 2003 memorandum to the military departments, the Under Secretary of Defense (Acquisition, Technology and Logistics) stated that DOD should continue to increase its use of performance-based logistics acquisitions. On February 4, 2004, the Deputy Secretary of Defense (1) directed the Under Secretary of Defense (Acquisition, Technology and Logistics), in conjunction with the Under Secretary of Defense (Comptroller), to issue clear guidance on purchasing logistics support using performance criteria and (2) directed each service to provide a plan to aggressively implement performance-based logistics for current and planned weapon system platforms. Then, based on recommendations in our August 2004 report, the Under Secretary of Defense (Acquisition, Technology and Logistics) issued a memorandum reemphasizing that the use of this type of support strategy was intended to optimize weapon system availability while minimizing costs and the logistics footprint and may be applied to weapon systems, subsystems, and components. The memorandum also provided specific definitions of performance metrics to be used. DOD describes performance-based logistics as the process of (1) identifying a level of performance required by the warfighter and (2) negotiating a performance-based arrangement between the government and a contractor or government facility to provide long-term total system support for a weapon system at a fixed level of annual funding. Instead of buying spare parts, repairs, tools, and data in individual transactions, DOD program offices that use a performance-based logistics arrangement buy a predetermined level of performance that meets the warfighter’s objectives. 
Although established performance measures should be tailored to reflect the unique circumstances of each performance-based logistics arrangement, the measures are expected to support five general objectives: (1) percentage of time that a weapon system is available for a mission (operational availability); (2) percentage of mission objectives met (operational reliability); (3) operating costs divided by a specified unit of measure (cost per unit usage); (4) size or presence of support required to deploy, sustain, or move a weapon system (logistics footprint); and (5) period of time that is acceptable between the demand or request for support and the satisfactory fulfillment of that request (logistics response time). Currently, a DOD task force is refining these objectives into DOD standard performance definitions to be used by program offices in every service when preparing performance-based logistics arrangements. DOD guidance recommends that program offices prepare a business case analysis prior to adopting a performance-based logistics approach to support a weapon system. The aim of the business case analysis is to justify the decision to enter into a performance-based logistics contract. The business case analysis is to include cost savings that are projected as a result of using a performance-based logistics approach and the assumptions used in developing the business case analysis. Furthermore, DOD guidance states that program offices should update their business case analyses at appropriate decision points when sufficient cost and performance data have been collected to validate the assumptions used in developing the business case analyses, including the costs of alternative approaches, projected cost savings, and expected performance levels. Further, GAO Internal Control Standards state that it is necessary to periodically review and validate the propriety and integrity of program performance measures and indicators. Also, actual performance data should be continually compared against expected or planned goals, and any difference should be analyzed. Additionally, management should have a monitoring strategy that emphasizes to program managers their responsibility for internal controls (i.e., to review and validate performance measures and indicators) and that includes a plan for periodic evaluation of control activities. DOD program offices could not demonstrate that their use of performance-based logistics arrangements had achieved cost savings and performance improvements because they had not updated their business case analysis as suggested by DOD guidance. Specifically, of the 15 DOD program offices, only 1 updated its business case analysis to validate assumptions concerning cost and performance. Other DOD program offices had not updated their business case analysis in part because they lacked reliable contractor cost and performance data. The program offices typically relied on cost and performance data generated by contractors’ information systems without verifying that the data were sufficiently reliable to update the business case analysis. Two DOD agencies, DCMA and DCAA, have the capability to assist program offices in monitoring fixed-price performance-based contracts, verifying the reliability of contractors’ information systems, and collecting cost and performance data. None of the 15 program offices included in our review could demonstrate that use of a performance-based logistics arrangement had achieved cost savings and performance improvements. 
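For reference, the five objectives described above reduce to simple ratios and counts. The sketch below computes each metric from hypothetical inputs; the variable names, values, and choice of unit of usage are our assumptions for illustration and do not reflect any actual DOD program or the department's standard performance definitions.

    # Hypothetical inputs for a single weapon system over one year (illustrative only).
    hours_in_period = 8760.0           # total hours in the year
    hours_mission_capable = 8060.0     # hours the system was available for a mission
    missions_assigned = 400
    missions_completed = 388
    operating_cost = 42_000_000.0      # total support cost for the period, in dollars
    flying_hours = 6000.0              # unit of usage chosen for this system
    deployed_support_tons = 35.0       # size of the support package that must deploy
    response_times_days = [4, 6, 3, 8, 5]  # days to fill sample support requests

    operational_availability = hours_mission_capable / hours_in_period
    operational_reliability = missions_completed / missions_assigned
    cost_per_unit_usage = operating_cost / flying_hours
    logistics_footprint = deployed_support_tons
    logistics_response_time = sum(response_times_days) / len(response_times_days)

    print(f"Operational availability: {operational_availability:.1%}")
    print(f"Operational reliability:  {operational_reliability:.1%}")
    print(f"Cost per flying hour:     ${cost_per_unit_usage:,.0f}")
    print(f"Logistics footprint:      {logistics_footprint} tons deployed")
    print(f"Logistics response time:  {logistics_response_time:.1f} days (average)")

In practice, a program office would choose the unit of usage (flying hours, operating hours, miles, and so on) that best fits the weapon system being supported.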
Although an updated business case analysis based on actual cost and performance data might show that cost savings and performance improvements were being achieved, only 1 of the 15 program offices had updated its business case analysis consistent with DOD guidance. Of the 15 program offices, 11 had developed a business case analysis prior to entering into a performance- based logistics arrangement. In their analysis, these program offices projected that they would achieve significant cost savings. For example, an Army program office projected total cost savings of $508.5 million, and a Navy program office projected cost savings of $29.7 million. However, only the Navy’s T-45 program office had subsequently updated its business case analysis consistent with DOD guidance to determine whether cost savings were being achieved. Realizing that the contractor was not meeting the aircraft availability performance measure, the program office reassessed its business case assumptions and found that costs per flying hour were higher than estimated because the aircraft was flying fewer hours than forecasted. As a result, the program office negotiated separate contracts for the airframes and engines, which resulted in estimated cost savings of $144 million over 5 years. Performance indicators tracked by the program offices showed that the contractors met or exceeded performance requirements. Of the 15 programs, 10 reported that performance levels exceeded contract requirements, and 5 reported that performance levels were meeting contract requirements. For example, an Army program office reported a weapon system availability rate of 99 percent, which is 7 percent higher than what was projected in the business case analysis. Similarly, a Navy program office reported a weapon system availability rate of 97 percent, which is 7 percent higher than projected. Despite the reported performance improvements, the program offices had not analyzed the performance data to validate the improvements and determine whether these improvements could be attributed directly to their use of performance-based logistics arrangements to support the weapon systems. In addition, we noted that program offices in the past reported they had also met or exceeded required levels of performance using other contractual arrangements for weapon system maintenance. Moreover, the DOD program offices reporting that performance levels were exceeding contract requirements under performance-based logistics arrangements had not determined the incremental costs associated with achieving these higher levels of performance. As a result, they had no way of knowing whether incremental costs outweighed the benefits derived from achieving performance levels in excess of requirements. Program officials did not follow DOD guidance to update and validate their business case analyses because they assumed that costs incurred under fixed-price performance-based logistics arrangements would always be lower than costs incurred under more traditional contracting arrangements, and several program officials cited a lack of reliable data needed to validate expected costs savings and improved performance. However, the experience of the T-45 program showed that it is possible for program offices to validate the assumptions in the business case analysis and to determine whether expected cost savings and performance improvements were achieved. There are also other benefits derived from validating the assumptions used in the business case analysis. 
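Before turning to those benefits, a rough calculation helps show the sensitivity the T-45 program office uncovered: under a fixed annual support price, flying fewer hours than forecast raises the cost per flying hour. The figures below are hypothetical and are not drawn from the T-45 contract.

    # Hypothetical fixed-price support arrangement (illustrative figures only).
    annual_fixed_price = 30_000_000.0   # what the government pays regardless of usage

    forecast_flying_hours = 25_000.0
    actual_flying_hours = 18_000.0      # aircraft flew fewer hours than forecast

    cost_per_hour_forecast = annual_fixed_price / forecast_flying_hours
    cost_per_hour_actual = annual_fixed_price / actual_flying_hours

    print(f"Forecast cost per flying hour: ${cost_per_hour_forecast:,.0f}")
    print(f"Actual cost per flying hour:   ${cost_per_hour_actual:,.0f}")
    print(f"Increase: {cost_per_hour_actual / cost_per_hour_forecast - 1:.0%}")
    # With a fixed annual price, flying 28 percent fewer hours raises the cost
    # per flying hour by roughly 39 percent in this example, the kind of
    # deviation an updated business case analysis is meant to surface.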
Validation can provide a better understanding of costs associated with the repair and maintenance of weapon systems, ensure that proper performance metrics are in place to satisfy logistical demand, isolate incremental costs associated with achieving higher levels of performance, and make cost and performance data available for contract renegotiations in order to obtain the best value for the government. Furthermore, we did not find evidence that the Office of the Secretary of Defense had established procedures to monitor whether program offices were following its guidance to update their business case analyses. The results of these updates could be used by DOD to assess the implementation of performance-based logistics arrangements and evaluate the extent to which performance-based logistics arrangements are achieving expected benefits. DOD program offices included in our review stated that because of limitations in their own information systems, they typically relied on cost and performance data generated by the contractors’ information systems to monitor performance-based logistics contracts. Program offices acknowledged limitations in their own information systems in providing reliable data to closely monitor contractor cost and performance. Existing systems are capable of collecting some cost and performance information on performance-based logistics contracts; however, according to program officials, the systems are not capturing sufficiently detailed cost and performance information for monitoring performance-based logistics contracts. Program officials told us they had more confidence in the accuracy and completeness of contractor systems than in their legacy systems. The program offices, however, had not determined whether the contractor-provided data were sufficiently reliable to update their business case analyses. As a result, the program offices did not have the reliable data they needed to validate the assumptions used in the business case analysis and to determine whether their performance-based logistics arrangements were achieving expected cost savings and improved performance. As we noted in a prior report on DOD’s management of depot maintenance contracting, to reduce personnel and save costs, DOD decided to rely more on contractors to manage and oversee fixed-price contracts because these contracts are considered low risk. The contractor assumes most of the risks for fixed-price contracts, with the government taking a more limited role in monitoring these contracts. In our prior work on defense contract management, we discussed the importance of monitoring contractors’ systems to ensure the accuracy and completeness of information generated by these systems. In addition, during our review of the private sector’s use of performance-based logistics, we noted that private-sector companies that use performance-based logistics contracts, whether fixed price or cost-plus, closely monitor cost and performance information to effectively manage their contracts. These companies said they rely on their own systems and personnel to verify the cost and quality of work performed by the contractor. The private sector takes this approach (1) to ensure that expected costs under the contracts are accurate and meet the company’s reliability standards, (2) to validate the business case decision used to justify a performance-based logistics arrangement, and (3) to obtain the data necessary to renegotiate the contract. 
DCMA and DCAA have the capability to monitor contractor cost and performance, verify the reliability of contractor-provided data, and collect detailed cost and performance data. However, most of the DOD program offices we reviewed made limited use of these agencies’ resources because they viewed fixed-price performance-based logistics contracts as low risk compared with other types of contracts. Before a contract is awarded, DCMA can provide advice and service to help construct effective solicitations, identify potential risk, select the most capable contractors, and write contracts that meet the needs of DOD customers. After the contract is awarded, DCMA can monitor contractors’ information systems to ensure that cost, performance, and delivery schedules are in compliance with the terms and conditions of the contracts. DCAA performs contract audits for DOD components and provides accounting and financial advisory services during contract negotiation and administration of contracts. DCMA and DCAA officials said that they have a greater role in monitoring cost information for cost-plus contracts because such contracts are considered high risk. According to DCMA and DCAA officials, their level of oversight is significantly less for fixed-price contracts, including performance-based logistics arrangements, because DOD considers these contracts to be low risk, thereby diminishing the need for monitoring contractor performance. Without a request from program offices or specific contract clauses, DCMA and DCAA generally would not conduct periodic reviews or audits of fixed-price contracts to verify cost and performance information. DCMA and DCAA officials also said that in the past, monitoring fixed-price contracts was included in their workload, but because of a reduction in staff and streamlining of operations, they focused their efforts on contract areas that have the highest risk for cost growth. DCMA and DCAA officials said they would support increasing their role in monitoring fixed-price performance-based contracts depending on the availability of their resources. DOD is expanding its use of performance-based logistics as its preferred strategy for supporting weapon systems but has not yet demonstrated that this long-term support strategy is being effectively implemented DOD-wide. DOD guidance states that program offices, after entering into performance-based logistics arrangements, should update their original business case analysis using actual cost and performance data to validate their assumptions, but most of the program offices we reviewed had not followed this guidance, and the Office of the Secretary of Defense was not monitoring whether program offices were following the guidance. The program offices therefore could not substantiate that cost savings and performance improvements for weapon system support were being achieved through the use of performance-based logistics arrangements. Program offices also have lacked reliable cost and performance data needed to validate the results of performance-based logistics arrangements. Reliable data could be collected and analyzed by increasing oversight of these contracts with the assistance of DCMA and DCAA. 
To demonstrate that performance-based logistics arrangements are resulting in reduced costs and increased performance, and to improve oversight of performance-based logistics contracts, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology and Logistics) to take the following two actions: 1. Reaffirm DOD guidance that program offices update their business case analyses following implementation of a performance-based logistics arrangement and develop procedures, in conjunction with the military services, to track whether program offices that enter into these arrangements validate their business case decisions consistent with DOD guidance. 2. Direct program offices to improve their monitoring of performance-based logistics arrangements by verifying the reliability of contractor cost and performance data. The program offices may wish to increase the role of DCMA and DCAA in overseeing performance-based logistics contracts. In commenting on a draft of this report, DOD concurred with our recommendations regarding the validation of business case decisions for performance-based logistics arrangements and verification of reliability of contractor data. While DOD was generally responsive to our recommendations, specific details on how DOD planned to validate and verify contractor data were not provided. Regarding our recommendation to reaffirm guidance and develop procedures to track whether program offices validate their business case decisions, DOD stated that the department will reaffirm DOD guidance on updating the business case analysis after implementing performance-based logistics arrangements and will work with the military services to develop procedures to track whether program offices validate their business case decisions consistent with DOD guidance. With regard to our second recommendation to direct program offices to verify the reliability of contractor cost and performance data, DOD stated that it will issue guidance on verifying the reliability of contractor cost and performance data. DOD did not provide specific information on what the guidance would include nor did it indicate whether it would increase the use of DCMA or DCAA to verify the reliability of contractor cost and performance data. DOD also provided technical comments, which we have incorporated as appropriate. To determine whether DOD could demonstrate that cost savings and improved performance were being achieved through the use of performance-based logistics arrangements, we collected and analyzed data on 15 weapon system programs identified by the Office of the Secretary of Defense and the military services as programs that have successfully used performance-based logistics arrangements. The 15 programs are listed in table 1. We reviewed DOD and service policies, procedures, and guidance related to the use of performance-based logistics and met with program officials to discuss how their performance-based logistics contracts were structured and managed and how these contracts were validated to ensure that cost savings and improved performance were being achieved as a result of using performance-based logistics. We also obtained and analyzed available documentation, including business case analyses, contracts, and related files. We did not assess the methodology program offices used to prepare their business case analyses or the quality of these analyses. We discussed with program officials the systems they used to monitor contractor cost and performance. 
We also interviewed officials from the Office of the Secretary of Defense and military department headquarters to discuss implementation of performance-based logistics, lessons learned, and the benefits derived from using performance-based logistics approaches and practices. To determine how private-sector companies ensure that cost and performance levels under a performance-based contract are as expected, we reviewed the information provided by seven companies identified in our prior report that used complex and costly equipment with life-cycle management issues similar to those of military weapon systems and that outsourced some portion of their maintenance work under performance-based contracts. These seven companies consisted of six airline companies and one mining company. We contacted officials at DCMA and DCAA to determine those agencies’ roles in monitoring the costs and performance of fixed-price contracts, including performance-based logistics contracts, how audits are requested or initiated, and the procedures for reporting the results of the audits. We are sending this report to the Chairman and Ranking Minority Member, Senate Subcommittee on Readiness and Management Support, Committee on Armed Services. We will also send copies to the Under Secretary of Defense (Acquisition, Technology and Logistics). Copies of this report will be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8412 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. William M. Solis (202) 512-8412 or solisw@gao.gov. In addition to the contact named above, Thomas Gosling, Assistant Director; Thom Barger; Judith Collins; Pamela Valentine; and Cheryl Weissman were major contributors to this report.
The Department of Defense (DOD) contracts with private sector companies to perform depot maintenance of weapon systems using performance-based logistics--that is, purchasing a defined level of performance over a defined time period at a fixed cost to the government. After implementing such contracts, program offices are to validate their efficacy using cost and performance data; DOD cannot otherwise ensure cost savings and improved performance are being achieved through the use of performance-based logistics. GAO was asked to review the implementation of performance-based logistics to determine whether DOD could demonstrate cost savings and improved responsiveness from these arrangements. In conducting its review, GAO analyzed the implementation of performance-based logistics arrangements for 15 weapon system programs. DOD program offices could not demonstrate that they have achieved cost savings or performance improvements through the use of performance-based logistics arrangements. Although DOD guidance on implementing these arrangements states program offices should update their business case analysis based on actual cost and performance data, only 1 of the 15 program offices included in GAO's review had performed such an update consistent with DOD guidance. In the single case where the program office had updated its business case analysis, it determined that the performance-based logistics contract did not result in expected cost savings and the weapon system did not meet established performance requirements. In general, program offices had not updated their business case analysis after entering into a performance-based logistics contract because they assumed that the costs for weapon system maintenance incurred under a fixed-price performance-based logistics contract would always be lower than costs under a more traditional contracting approach and because they lacked reliable cost and performance data needed to validate assumptions used. Furthermore, the Office of the Secretary of Defense has not established procedures to monitor program offices to ensure they follow guidance and update the business case analysis. Additionally, program officials said because of limitations in their own information systems, they typically relied on cost and performance data generated by the contractors' information systems to monitor performance-based logistics contracts. The program offices, however, had not determined whether contractor-provided data were sufficiently reliable to update their business case analysis. Although the Defense Contract Management Agency and the Defense Contract Audit Agency are most commonly used to monitor higher risk contracts, such as cost plus contracts, they are potential resources available to assist program offices in monitoring fixed-price performance-based contracts. In doing so, these DOD agencies have the capability to verify the reliability of contractors' information systems and collect cost and performance data needed to update their business case analysis. Until program offices follow DOD's guidance and update their business case analysis based on reliable cost and performance data, DOD cannot evaluate the extent to which performance-based logistics arrangements are achieving expected benefits and being effectively implemented within DOD.
The Social Security Act requires that most workers be covered by Social Security benefits. Workers contribute to the program via wage deductions. State and local government workers were originally excluded from Social Security. Starting in the 1950s, state and local governments had the option of selecting Social Security coverage for their employees or retaining their noncovered status. In 1983, state and local governments in the Social Security system were prohibited by law from opting out of it. Of the workers in the roughly 2,300 separate state and local retirement plans nationwide, about one-third are not covered by Social Security.

In addition to paying retirement and disability benefits to covered workers, Social Security also generally pays benefits to spouses of retired, disabled, or deceased workers. If both spouses worked in positions covered by Social Security, each may not receive both the benefits earned as a worker and the full spousal benefit; rather, the worker receives the higher of the two amounts. In contrast, until 1977, workers receiving pensions from government positions not covered by Social Security could receive their full pension benefit and their full Social Security spousal benefits as if they were nonworking spouses. At that time, legislation was enacted creating the GPO, which prevented workers from receiving a full spousal benefit on top of a pension earned from noncovered government employment. However, the law provides an exemption from the GPO if an individual's last day of state/local employment is in a position that is covered by both Social Security and the state/local government's pension system. In these cases, the GPO will not be applied to the Social Security spousal benefit.

While we could not definitively confirm the extent to which individuals nationwide are transferring positions to avoid the GPO, we found that 4,819 individuals in Texas and Georgia had performed work in Social Security-covered positions for short periods to qualify for the GPO last-day exemption. Use of the exemption may grow further as the practice becomes more widely institutionalized and the aging baby-boom generation begins to retire in larger numbers. SSA officials also acknowledged that use of the exemption might be possible in some of the approximately 2,300 state and local government retirement plans in other states where such plans contain Social Security-covered and noncovered positions.

Officials in Texas reported that 4,795 individuals at 31 schools have used or plan to use last-day employment to take advantage of the GPO exemption. In 2002, one-fourth (or 3,521) of all Texas public education retirees took advantage of this exemption. In most schools, teachers typically worked a single day in a nonteaching position covered by Social Security to use the exemption. Nearly all positions were nonteaching jobs, such as clerical, food service, or maintenance work. Most of these employees were paid about $6 per hour. At this rate, the Social Security contributions deducted from their pay would total about $3 for the day. We estimate that the average annual spousal benefit resulting from these last-day transfers would be about $5,200. School officials also reported that individuals are willing to travel to take these jobs—noting one teacher who traveled 800 miles to use the last-day provision. Some schools reported that they charge a processing fee, ranging from $100 to $500, to hire these workers.
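As a rough check of that daily contribution figure, the arithmetic below assumes an 8-hour workday and the 6.2 percent employee Social Security (OASDI) payroll tax rate in effect at the time; the testimony cites only the roughly $6 hourly wage, so both the workday length and the tax rate are stated here as assumptions rather than figures from the review:

\[ \$6 \text{ per hour} \times 8 \text{ hours} \times 6.2\% \approx \$2.98, \text{ or about } \$3 \text{ for the day} \]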
These fees are a significant source of revenue—last year one school district collected over $283,000 in fees. Our work shows that use of the exemption in Texas has increased since 1990, which was the earliest use reported to us. In one school district, for example, officials reported that use of the exemption grew from one worker in 1996 to 1,050 in 2002. Another school district that began offering last-day employment in 2002 had received over 1,400 applications by June of that year from individuals seeking to use the exemption. Use of the exemption is likely to grow further, according to trends in Texas teacher retirements and information from school officials. There were about 14,000 teacher retirements in 2002, compared with about 10,000 in 2000. At one university we visited, officials have scheduled workdays in covered employment for imminent retirees through 2005, an indication of the rapid institutionalization of this practice. The GPO exemption is also becoming part of teachers' regular retirement planning process as its availability and use are publicized by teaching associations and financial planners (via Web sites, newspapers, seminars, etc.) and by word-of-mouth. One association's Web site we identified lists the names and telephone numbers of school officials in counties covered by Social Security and explains how to contact those officials for such work. A financial planner's Web site we identified indicated that individuals who worked as little as 1 day in a Social Security-covered position to qualify for the GPO exemption could earn $150,000 or more in benefits over their lifetimes.

In Georgia, officials in one district reported that 24 individuals have used or plan to use covered employment to take advantage of the GPO exemption. Officials told us that teachers generally agreed to work for approximately 1 year in another teaching position in a school district covered by Social Security to use the GPO exemption. These officials told us that they expect use of the exemption to increase as awareness of it grows. According to Georgia officials, their need to address a teacher shortage outweighs the risk to individual schools of teachers leaving after 1 year. Officials in fast-growing school systems reported they needed to hire teachers even if they only intended to teach for 1 year. However, some schools reported that they have had teachers leave shortly after being hired. For example, in one district, a teacher signed a 1-year contract to teach but left after 61 days, a time sufficient to avoid the spousal benefit reduction. In some of the applications for school employment we reviewed, individuals explicitly indicated their desire to work in a county covered by Social Security in order to obtain full Social Security spousal benefits.

Use of the GPO exemption might be possible in other plans nationwide. SSA officials told us that some of the approximately 2,300 state and local government retirement plans—those that contain both Social Security-covered and noncovered positions—may offer individuals the opportunity to use the GPO exemption. Officials representing state and local government retirement plans in other states across the country also told us that their plans include both covered and noncovered positions, making it possible for workers to avoid the GPO by transferring from one type of position to the other.
For example, an official in a midwestern state whose retirement plan covers all state government employees told us that it is possible for law enforcement personnel (noncovered) to take a job in the state insurance bureau (covered) just before retiring. In a southern state with a statewide retirement plan for school employees, teachers and other school professionals (noncovered) can potentially transfer to a job in the school cafeteria (covered) to avoid the GPO. A retirement system official from a north central state reported hearing of a few cases where teachers had taken advantage of the exemption by transferring to jobs in other school districts covered by Social Security. Finally, in a western state with a statewide retirement plan, workers could move from one government agency (noncovered) to a position in another agency (covered).

The transfers to avoid the GPO we identified in Texas and Georgia could increase long-term benefit payments from the Social Security Trust Fund by about $450 million. We calculated this figure by multiplying the number of last-day cases reported in Texas and Georgia (4,819) by SSA data on the average annual offset amount ($4,800) and the average retiree's life expectancy upon receipt of spousal benefits (19.4 years). We believe that these estimated payments would likely increase as use of the exemption grows.

Our prior report identified two options for addressing potential abuses of the GPO exemption. The first option, as proposed in H.R. 743, is to change the last-day provision to a longer minimum time period. This option would require only small changes and would be less burdensome for SSA to administer than other approaches. Also, this option has precedent. Legislation in 1987 required federal employees transferring between two federal retirement systems, the Civil Service Retirement System (CSRS) and the Federal Employees Retirement System (FERS), to remain in FERS for 5 years before they were exempt from the GPO. We found that most of the jobs in Texas last about 1 day, so extending the time period might eliminate many of the exemption users in Texas. The second option our report identified is to use a proportional approach to determine the extent to which the GPO applies. Under this option, employees who have spent a certain proportion of their working career in a position covered by Social Security could be exempt from the GPO. This option may represent a more calibrated approach to determining benefits for individuals who have made contributions to the Social Security system for an extended period of their working years. However, SSA has noted that using a proportional approach would take time to design and would be administratively burdensome to implement, given the lack of complete and reliable data on noncovered employment.

The GPO "loophole" raises fairness and equity concerns for those receiving a government pension who are currently subject to an offset of their Social Security spousal benefits. The exemption allows a select group of individuals with a relatively small investment of work time and only minimal Social Security contributions to gain access to potentially many years of full Social Security spousal benefits. The practice of providing full spousal benefits to individuals who receive government pensions but who made only nominal contributions to the Social Security system also runs counter to the nation's efforts to address the solvency and sustainability of the Social Security program.
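The trust fund estimate described above can be restated as a single calculation using only the figures cited in this testimony (4,819 reported cases, a $4,800 average annual offset, and a 19.4-year average life expectancy upon receipt of spousal benefits):

\[ 4{,}819 \times \$4{,}800 \times 19.4 \approx \$449 \text{ million} \]

The product rounds to the roughly $450 million figure cited here and in the accompanying summary.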
Based on the number of people reported to be using the loophole in Texas and Georgia this year, the exemption could cost the Trust Fund hundreds of millions of dollars. While this currently represents a relatively small percentage of the Social Security Trust Fund, costs could increase significantly if the practice grows and begins to be adopted by other states and localities. Considering the potential for abuse of the last-day exemption and the likelihood of its increased use, we believe timely action is needed. Accordingly, our August 2002 report includes a Matter for Congressional Consideration that the last-day GPO exemption be revised to provide for a longer minimum time period. This action would provide an immediate "fix" to address possible abuses of the GPO exemption identified in our review.

Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other members of the Subcommittee may have.

For information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215. Individuals who made key contributions to this testimony include Daniel Bertoni, Patrick DiBattista, Patricia M. Bundy, Jamila L. Jones, Daniel A. Schwimer, Anthony J. Wysocki, and Jill D. Yost.
The Government Pension Offset (GPO) was enacted in 1977 to equalize the treatment of workers covered by Social Security and those with government pensions not covered by Social Security. Congress asked GAO to (1) assess the extent to which individuals retiring from jobs not covered by Social Security may be transferring briefly to covered jobs in order to avoid the GPO, and (2) estimate the impact of such transfers on the Social Security Trust Fund. Because no central data exist on use of the GPO exemption by individuals in the approximately 2,300 state and local government retirement plans nationwide, GAO could not definitively confirm that this practice is occurring in states other than Texas and Georgia. In those two states, 4,819 individuals had performed work in Social Security-covered positions for short periods to qualify for the GPO last-day exemption. In Texas, teachers typically worked a single day in nonteaching positions covered by Social Security, such as clerical or janitorial positions. In Georgia, teachers generally agreed to work for approximately 1 year in another teaching position in a school district covered by Social Security. Officials in both states indicated that use of the exemption would likely continue to grow as awareness increases and it becomes part of individuals' retirement planning. For the cases GAO identified, the resulting increase in long-term benefit payments from the Social Security Trust Fund could total about $450 million and would likely rise further if use of the exemption grows in the states GAO visited and spreads to others. SSA officials acknowledged that use of the exemption might be possible in other state and local government retirement plans that include both positions covered by Social Security and positions that are not. The GPO "loophole" raises fairness and equity concerns for those receiving a government pension who are currently subject to the spousal benefit offset. In the states GAO visited, individuals with a relatively minimal investment of work time and Social Security contributions can gain access to potentially many years of full Social Security spousal benefits. The last-day exemption could also have a more significant impact if the practice grows and begins to be adopted by other states and localities. Considering the potential for abuse, GAO's report presented options for revising the GPO exemption, such as changing the last-day provision to a longer minimum time period or using a proportional approach, based on the number of working years spent in covered and noncovered employment, to determine the extent to which the GPO applies.
In their efforts to modernize their health information systems and share medical information, VA and DOD start from different positions. As shown in table 1, VA has one integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which uses all electronic records. All 128 VA medical sites thus have access to all VistA information. (Table 1 also shows, for completeness, VA's planned modernized system and its associated data repository.) In contrast, DOD has multiple medical information systems (table 2 illustrates selected systems). DOD's various systems are not integrated, and its 138 sites do not necessarily communicate with each other. In addition, not all of DOD's medical information is electronic: some records are paper-based.

For nearly a decade, VA and DOD have been undertaking initiatives to exchange data between their health information systems and create comprehensive electronic records. However, the departments have faced considerable challenges in project planning and management, leading to repeated changes in the focus and target completion dates of the initiatives. As shown in figure 1, the departments' efforts have involved both long-term initiatives to modernize their health information systems and short-term initiatives to respond to more immediate information-sharing needs. The departments' first initiative was the Government Computer-Based Patient Record (GCPR) project, which aimed to develop an electronic interface that would allow physicians and other authorized users at VA and DOD health facilities to access data from each other's health information systems. The interface was expected to compile requested patient information in a virtual record (that is, electronic as opposed to paper) that could be displayed on a user's computer screen.

We reviewed the GCPR project in 2001 and 2002 and noted disappointing progress, exacerbated in large part by inadequate accountability and poor planning and oversight, which raised questions about the departments' ability to achieve a virtual medical record. We determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. In both years, we recommended that the departments enhance the project's overall management and accountability. In particular, we recommended that the departments designate a lead entity and a clear line of authority for the project; create comprehensive and coordinated plans that include an agreed-upon mission and clear goals, objectives, and performance measures; revise the project's original goals and objectives to align with the current strategy; commit the executive support necessary to adequately manage the project; and ensure that the project followed sound project management principles. In response, by July 2002, the two departments had revised their strategy, refocusing the project and dividing it into two initiatives. A short-term initiative, the Federal Health Information Exchange (FHIE), was to enable DOD to electronically transfer service members' health information to VA when the members left active duty. VA was designated as the lead entity for implementing FHIE, which was completed in 2004. A longer-term initiative was to develop a common health information architecture that would allow a two-way exchange of health information.
The common architecture is to include standardized, computable data, communications, security, and high-performance health information systems (these systems, DOD’s Composite Health Care System II and VA’s HealtheVet VistA, were already in development, as shown in the figure). The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). The two repositories are to exchange information through an interface named CHDR. In March 2004, the departments began to develop the CHDR interface. They planned to begin implementation by October 2005; however, implementation of the first release of the interface (at one site) occurred in September 2006, almost a year beyond the target date. In a report in June 2004, we identified a number of management weaknesses that could have contributed to this delay and made a number of recommendations, including creation of a comprehensive and coordinated project management plan. The departments agreed with our recommendations and took steps to improve the management of the CHDR initiative, designating a lead entity with final decision-making authority and establishing a project management structure. However, as we noted in subsequent testimony, the initiative did not have a detailed project management plan that described the technical and managerial processes necessary to satisfy project requirements (including a work breakdown structure and schedule for all development, testing, and implementation tasks), as we had recommended. In October 2004, responding to a congressional mandate, the departments established two more short-term initiatives: the Laboratory Data Sharing Interface, aimed at allowing VA and DOD facilities to share laboratory resources, and the Bidirectional Health Information Exchange (BHIE), aimed at giving both departments’ clinicians access to records on shared patients (that is, those who receive care from both departments). As demonstration projects, these initiatives were limited in scope, with the intention of providing interim solutions to the departments’ needs for more immediate health information sharing. However, because BHIE provided access to up-to-date information, the departments’ clinicians expressed strong interest in expanding its use. As a result, the departments began planning to broaden this capability and expand its implementation considerably. Extending BHIE connectivity could provide each department with access to most data in the other’s legacy systems, until such time as the departments’ modernized systems are fully developed and implemented. According to a VA/DOD annual report and program officials, the departments now consider BHIE an interim step in their overall strategy to create a two-way exchange of electronic medical records. The departments’ reported costs for the various sharing initiatives and the modernization of their health information systems through fiscal year 2007 are shown in table 3. Beyond these initiatives, in January 2007, the departments announced a further change to their information-sharing strategy: their intention to jointly develop a new inpatient medical record system. On July 31, 2007, they awarded a contract for a feasibility study. 
According to the departments, adopting this joint solution is expected to facilitate the seamless transition of active-duty service members to veteran status and make inpatient health care data on shared patients immediately accessible to both DOD and VA. In addition, the departments believe that a joint development effort could enable them to realize significant cost savings. We have not evaluated the departments' plans or strategy for this new system.

Throughout the history of these initiatives, evaluations besides our own have found deficiencies in the departments' efforts, especially with regard to the lack of comprehensive planning. For example, a recent presidential task force identified the need for VA and DOD to improve their long-term planning. This task force, reporting on gaps in services provided to returning veterans, noted problems in sharing information on wounded service members, including the inability of VA providers to access paper DOD inpatient health records. The task force stated that although significant progress has been made toward sharing electronic information, more needs to be done, and recommended that VA and DOD continue to identify long-term initiatives and define the scope and elements of a joint inpatient electronic health record. In addition, in fiscal year 2006, Congress did not provide all the funding requested for HealtheVet VistA because it did not consider the funding request adequately justified.

VA and DOD have made progress in both their long-term and short-term initiatives to share health information. In the long-term project to modernize their health information systems, the departments have begun, among other things, to implement the first release of the interface between their modernized data repositories. The departments have also made progress in their short-term projects to share information in existing systems, having completed two initiatives, and are making important progress on another. In addition, the departments have undertaken ad hoc activities to accelerate the transmission of health information on severely wounded patients from DOD to VA's four polytrauma centers. However, despite the progress made and the sharing achieved, the tasks remaining to reach the goal of a shared electronic medical record are substantial.

In their long-term effort to share health information, VA and DOD have completed the development of their modernized data repositories, agreed on standards for various types of data, and begun to populate the repositories with these data. In addition, they have now implemented the first release of the CHDR interface. According to the departments' officials, all DOD sites can now access the interface, and it is expected to be available across VA when necessary software updates are released. (Currently 103 of 128 VA sites have received these updates.) At 7 sites, VA and DOD are now exchanging limited medical information for shared patients: specifically, computable outpatient pharmacy and drug allergy information. CHDR is the conduit for exchanging computable medical information between the departments. Data transmitted via the interface are permanently stored in each department's new data repository (DOD's CDR and VA's HDR). Once in the repositories, these computable data can be used by DOD and VA at all sites through their existing systems. CHDR also provides terminology mediation (translation of one agency's terminology into the other's).
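To picture what terminology mediation involves, the sketch below maps one agency's code for a clinical concept to the other's equivalent. It is purely conceptual: the code values, dictionary, and function names are invented for illustration and are not drawn from the actual CHDR design or the data standards the departments adopted.

# Conceptual illustration of terminology mediation: translating one agency's
# coding of a clinical concept into the other's. All codes and names below are
# hypothetical and do not reflect the actual CHDR interface or its standards.

# Hypothetical mapping from DOD-style drug allergy codes to VA-style codes.
DOD_TO_VA_ALLERGY_CODES = {
    "DOD-ALG-0001": "VA-ALG-PENICILLIN",
    "DOD-ALG-0002": "VA-ALG-SULFONAMIDE",
}


def mediate_allergy_code(dod_code: str) -> str:
    """Return the VA-style equivalent of a hypothetical DOD allergy code."""
    try:
        return DOD_TO_VA_ALLERGY_CODES[dod_code]
    except KeyError:
        # A real exchange would fall back to an agreed-upon standard
        # terminology rather than simply failing.
        raise ValueError(f"No mapping defined for code {dod_code!r}")


if __name__ == "__main__":
    print(mediate_allergy_code("DOD-ALG-0001"))  # prints VA-ALG-PENICILLIN

In the departments' actual approach, the adoption of common data standards described above serves this translation role at far larger scale than a hand-maintained lookup table could.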
The departments' plans call for further developing the capability to exchange computable laboratory results data through the interface during fiscal year 2008. Although implementing this interface is an important accomplishment, the departments are still a long way from completing the modernized health information systems and comprehensive longitudinal health records. While DOD and VA had originally projected completion dates of 2011 and 2012, respectively, for their modernized systems, the departments' officials told us that there is currently no scheduled completion date for either system. VA is evaluating a proposal that would result in completion of its system in 2015; DOD is evaluating the impact of the new study on a joint inpatient medical record and has not indicated a new completion date. Further, both departments have yet to identify the next types of data to be stored in the repositories. The departments will then have to populate the repositories with the standardized data. This involves different tasks for each department. Specifically, while VA's medical records are already electronic, it must still convert them into the interoperable format appropriate for its repository. DOD, in addition to converting current records from its multiple systems, must also address medical records that are not automated. As pointed out by a recent Army Inspector General's report, some DOD facilities are having problems with hard-copy records. The report also identified inaccurate and incomplete health data as a problem to be addressed. Before the departments can achieve the long-term goal of seamless sharing of medical information, all of these tasks and challenges will have to be addressed. Accordingly, it is essential that the departments develop a comprehensive project plan to guide these efforts to completion, as we have previously recommended.

In addition to the long-term effort previously described, the two departments have made some progress in meeting immediate needs to share information in their respective legacy systems through short-term projects which, as mentioned earlier, are in various stages of completion. They have also set up special processes to transfer data from DOD facilities to VA's polytrauma centers in a further effort to more effectively treat traumatic brain injuries and other especially severe injuries. DOD has been using FHIE to transfer information to VA since 2002. According to DOD officials, 194 million clinical messages on more than 4 million veterans had been transferred to the FHIE data repository as of September 2007, including laboratory results, radiology results, outpatient pharmacy data, allergy information, consultation reports, elements of the standard ambulatory data record, and demographic data. Further, since July 2005, FHIE has been used to transfer pre- and post-deployment health assessment and reassessment data; as of September 2007, VA had access to data for more than 793,000 separated service members and demobilized Reserve and National Guard members who had been deployed. Transfers are done in batches once a month, or weekly for veterans who have been referred to VA treatment facilities. According to a joint VA/DOD report, FHIE has made a significant contribution to the delivery and continuity of care of separated service members as they transition to veteran status, as well as to the adjudication of disability claims.
One of the departments' demonstration projects—the Laboratory Data Sharing Interface (LDSI)—is now fully operational and is deployed when local agencies have a business case for its use and sign an agreement. It requires customization for each locality and is currently deployed at nine locations. LDSI currently supports a variety of chemistry and hematology tests and, at one of the nine locations, anatomic pathology and microbiology tests. Once LDSI is implemented at a facility, the only nonautomated action needed for a laboratory test is transporting the specimens. If a test is not performed at a VA or DOD doctor's home facility, the doctor can order the test, the order is transmitted electronically to the appropriate lab (the other department's facility or, in some cases, a local commercial lab), and the results are returned electronically. Among the benefits of the LDSI interface, according to VA and DOD, are increased speed in receiving laboratory results and decreased errors from manual entry of orders. The LDSI project manager in San Antonio stated that another benefit of the project is the time saved by eliminating the need to rekey orders at processing labs to input the information into the laboratories' systems. Additionally, the San Antonio VA facility no longer has to contract out some of its laboratory work to private companies, but instead uses the DOD laboratory.

Developed under a second demonstration project, the BHIE interface permits a medical care provider to query selected health information on patients from all VA and DOD sites and to view that data onscreen almost immediately. It not only allows the two departments to view each other's information, but it also allows DOD sites to see previously inaccessible data at other DOD sites. VA and DOD have been making progress on expanding the BHIE interface. As initially developed, the interface provided access to information in VA's VistA and DOD's Composite Health Care System, but it is currently being expanded to query data in other DOD systems and databases. In particular, the interface has been expanded to the following DOD systems:

The modernized data repository, CDR, which has enabled department-wide access to outpatient data for pharmacy and to inpatient and outpatient allergy, radiology, chemistry, and hematology data since July 2007, and to microbiology data since September 2007.

The Clinical Information System (CIS), an inpatient system used by some DOD facilities; the interface enables bidirectional views of discharge summaries and is currently deployed at 13 large DOD sites.

The Theater Medical Data Store, which became operational in October 2007, enabling access to inpatient and outpatient clinical information from combat theaters.

The departments are also taking steps to make more data elements available through BHIE. VA and DOD staff told us that by the end of the first quarter of fiscal year 2008, they plan to add provider notes, procedures, and problem lists. Later in fiscal year 2008, they plan to add vital signs, scanned images and documents, family history, social history, and other history questionnaires. In addition, a VA/DOD demonstration site in El Paso began sharing radiological images between the VA and DOD facilities in September 2007 using the BHIE/FHIE infrastructure. Although VA and DOD are sharing various types of health data, the types of data being shared have been limited, and significant work remains to expand the data shared and integrate the various initiatives.
Table 4 summarizes the types of health data currently shared via the long- and short-term initiatives we have described, as well as additional types of data that are currently planned for sharing. While this gives some indication of the scale of the tasks involved in sharing medical information, it does not depict the full extent of information that is currently being captured in the health information systems at VA and DOD.

In addition to the information technology initiatives described, DOD and VA have set up special procedures to transfer medical information to VA's four polytrauma centers, which treat active duty service members and veterans severely wounded in combat. Some examples of polytrauma include traumatic brain injury, amputations, and loss of hearing or vision. When service members are seriously injured in a combat theater overseas, they are first treated locally. They are then generally evacuated to Landstuhl Medical Center in Germany, after which they are transferred to a military treatment facility in the United States, usually Walter Reed Army Medical Center in Washington, D.C.; the National Naval Medical Center in Bethesda, Maryland; or Brooke Army Medical Center, at Fort Sam Houston, Texas. From these facilities, service members suffering from polytrauma may be transferred to one of VA's four polytrauma centers for treatment. At each of these locations, the injured service members will accumulate medical records, in addition to medical records already in existence before they were injured. According to DOD officials, when patients are referred to VA for care, DOD sends copies of medical records documenting treatment provided by the referring DOD facility along with them. The DOD medical information is currently collected in several different systems:

1. In the combat theater, electronic medical information may be collected for a variety of reasons, including routine outpatient care as well as serious injuries. These data are stored in the Theater Medical Data Store. As mentioned earlier, the BHIE interface to this database became operational in October 2007.

2. At Landstuhl, inpatient medical records are paper-based (except for discharge summaries). The paper records are sent with a patient as the individual is transferred for treatment in the United States. DOD officials told us that the paper record is the official DOD medical record, although AHLTA is used extensively to provide outpatient encounter information for medical records purposes.

3. At the DOD treatment facility (Walter Reed, Bethesda, or Brooke), additional inpatient information is recorded in CIS, and outpatient pharmacy and drug information are stored in CDR; other health information continues to be stored in local CHCS databases.

When service members are transferred to a VA polytrauma center, VA and DOD have several ad hoc processes in place to electronically transfer the patients' medical information. DOD has set up secure links to enable a limited number of clinicians at the polytrauma centers to log directly into CIS at Walter Reed and Bethesda Naval Hospital to access patient data. Staff at Walter Reed, Brooke, and Bethesda medical centers collect paper records, print records from CIS, scan all these, and transmit the scanned data to the four polytrauma centers. DOD staff pointed out that this laborious process is feasible only because the number of polytrauma patients is small.
According to VA officials, 460 severe traumatic brain injury patients had been treated at the polytrauma centers through fiscal year 2007. According to DOD officials, the medical records for 81 patients planned for transfer or already at a VA polytrauma center were scanned and provided to VA between April 1 and October 11 of this year. Digital radiology images were also provided for 48 patients. Staff at Walter Reed and Bethesda are transmitting radiology images electronically to the four polytrauma centers. Access to radiology images is a high priority for polytrauma center doctors, but like scanning paper records, transmitting these images requires manual intervention: when each image is received at VA, it must be individually uploaded to VistA's imagery viewing capability. This process would not be practical for large volumes of images. VA has access to outpatient data (via BHIE) from all DOD sites, including Landstuhl. These special efforts to transfer medical information on seriously wounded patients represent important additional steps to facilitate the sharing of information that is vital to providing polytrauma patients with quality health care.

In summary, VA and DOD are exchanging health information via their long- and short-term initiatives and continue to expand sharing of medical information via BHIE. However, these exchanges have been limited, and significant work remains to fully achieve the goal of exchanging interoperable, computable data. Work still to be done includes agreeing to standards for the remaining categories of medical information; populating the data repositories with all this information; completing the development of HealtheVet VistA and AHLTA; and transitioning from the legacy systems. To complete this work and achieve the departments' ultimate goal of maintaining a lifelong electronic medical record that will follow service members as they transition from active duty to veteran status, a comprehensive and coordinated project management plan that defines the technical and managerial processes necessary to satisfy project requirements and to guide their activities continues to be of vital importance. We have previously recommended that the departments develop such a plan and that it include a work breakdown structure and schedule for all development, testing, and implementation tasks. Without such a detailed plan, VA and DOD increase the risk that the long-term project will not deliver the planned capabilities in the time and at the cost expected. Further, it is not clear how all the initiatives we have described today are to be incorporated into an overall strategy toward achieving the departments' goal of a comprehensive, seamless exchange of health information.

This concludes my statement. I would be pleased to respond to any questions that you may have.

If you have any questions concerning this testimony, please contact Valerie C. Melvin, Director, Human Capital and Management Information Systems Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions to this testimony are Barbara Oliver (Assistant Director), Nancy Glover, Glenn Spiegel, and Amos Tevelow.

Computer-Based Patient Records: Better Planning and Oversight by VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001.

Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002.
Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003.

Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004.

Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004.

Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005.

Information Technology: VA and DOD Face Challenges in Completing Key Efforts. GAO-06-905T. Washington, D.C.: June 22, 2006.

DOD and VA Exchange of Computable Pharmacy Data. GAO-07-554R. Washington, D.C.: April 30, 2007.

Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Are Far from Comprehensive Electronic Medical Records. GAO-07-852T. Washington, D.C.: May 8, 2007.

Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Remain Far from Having Comprehensive Electronic Medical Records. GAO-07-1108T. Washington, D.C.: July 18, 2007.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs (VA) and the Department of Defense (DOD) are engaged in ongoing efforts to share medical information, which is important in helping to ensure high-quality health care for active-duty military personnel and veterans. These efforts include a long-term program to develop modernized health information systems based on computable data: that is, data in a format that a computer application can act on--for example, to provide alerts to clinicians of drug allergies. In addition, the departments are engaged in short-term initiatives involving existing systems. GAO was asked to testify on the history and current status of the departments' efforts to share health information. To develop this testimony, GAO reviewed its previous work, analyzed documents about current status and future plans, and interviewed VA and DOD officials.

For almost a decade, VA and DOD have been pursuing ways to share health information and to create comprehensive electronic medical records. However, they have faced considerable challenges in these efforts, leading to repeated changes in the focus of their initiatives and target completion dates. Currently, the two departments are pursuing both long- and short-term initiatives to share health information. Under their long-term initiative, the modern health information systems being developed by each department are to share standardized computable data through an interface between data repositories associated with each system. The repositories have now been developed, and the departments have begun to populate them with limited types of health information. In addition, the interface between the repositories has been implemented at seven VA and DOD sites, allowing computable outpatient pharmacy and drug allergy data to be exchanged. Implementing this interface is a milestone toward the departments' long-term goal, but more remains to be done. Besides extending the current capability throughout VA and DOD, the departments must still agree to standards for the remaining categories of medical information, populate the data repositories with this information, complete the development of the two modernized health information systems, and transition from their existing systems.

While pursuing their long-term effort to develop modernized systems, the two departments have also been working to share information in their existing systems. Among the various short-term initiatives are a completed effort to allow the one-way transfer of health information from DOD to VA when service members leave the military and ongoing demonstration projects to exchange limited data at selected sites. One of these projects, which builds on the one-way transfer capability, developed an interface between certain existing systems that allows a two-way view of current data on patients receiving care from both departments. VA and DOD are now expanding the sharing of additional medical information by using this interface to link other systems and databases. The departments have also established ad hoc processes to meet the immediate need to provide data on severely wounded service members to VA's polytrauma centers, which specialize in treating such patients. These processes include manual workarounds (such as scanning paper records) that are generally feasible only because the number of polytrauma patients is small.
While these multiple initiatives and ad hoc processes have enabled some degree of data sharing, they nonetheless highlight the need for continued efforts to integrate information systems and automate information exchange. At present, it is not clear how all the initiatives are to be incorporated into an overall strategy focused on achieving the departments' goal of a comprehensive, seamless exchange of health information.
CPSC was created in 1972 by the Consumer Product Safety Act to regulate certain consumer products and address those that pose an unreasonable risk of injury; assist consumers in using products safely; and promote research and investigation into product-related deaths, injuries, and illnesses. According to CPSC, this jurisdiction covers thousands of manufacturers and types of consumer products. CPSC does not have jurisdiction over some categories of products, including automobiles and other on-road vehicles, tires, boats, alcohol, tobacco, firearms, food, drugs, cosmetics, medical devices, and pesticides. Other federal agencies—the National Highway Traffic Safety Administration, U.S. Coast Guard, Department of Justice, Department of Agriculture, Food and Drug Administration, and Environmental Protection Agency—have jurisdiction over these products.

CPSC has jurisdiction over thousands of types of consumer products and hazardous substances, many of which are subject to mandatory regulations or voluntary standards, or both. Mandatory standards are federal rules set by statute or regulation that define the requirements consumer products must meet. These standards establish performance and labeling criteria that products must meet before they are manufactured, imported, distributed, or sold in the United States. CPSC may set a mandatory standard when it determines that a voluntary standard would not eliminate or adequately reduce a risk of injury or finds that substantial compliance with a voluntary standard would be unlikely. The Commission also may impose a mandatory ban of a hazardous product when it determines that no feasible consumer product safety standard would adequately protect the public from an unreasonable risk of injury. In some cases, Congress has enacted a specific statutory requirement for CPSC to create a mandatory standard, or convert a voluntary standard to a mandatory standard. For instance, CPSIA mandated the conversion of voluntary standards for durable infant and toddler products, all-terrain vehicles, and children's toys to mandatory standards. Mandatory standards and bans are enforceable by CPSC, allowing the agency to stop imported products that do not meet federal requirements at ports and seek civil or criminal penalties for violations of the mandatory standards or bans. Approximately 200 products are currently regulated and subject to mandatory standards, including automated garage door openers, fireworks, and children's cribs.

Many consumer products under CPSC's jurisdiction, including smoke alarms, candles, and portable generators, are subject to voluntary standards. More than 700 standards development organizations (SDO) develop most voluntary standards used in the United States, including safety standards. SDOs include private-sector professional and technical organizations, trade associations, and research and testing entities. According to CPSC, three SDOs—Underwriters Laboratories, Inc. (UL); ASTM International; and the American National Standards Institute (ANSI)—coordinate the development of more than 90 percent of voluntary standards developed with CPSC staff technical support. Participants in the standards development process include representatives from government agencies, manufacturers, consumers, retailers, testing laboratories, technical experts, and other interested parties.
In general, most SDOs operate under principles that govern the voluntary standards process, such as openness, balance, consideration of views and objections, consensus vote, and the right to appeal. The process of developing consensus standards is designed to be transparent, with written procedures covering each step. Participation in the standards development process is intended to be voluntary. Standards developed by an SDO are considered the property of the SDO. CPSC officials told us that once a standard is published and copyrighted, members of the public and government agencies generally must purchase it. The National Institute of Standards and Technology (NIST), the federal agency that coordinates standards activities, maintains a database of standards that have been incorporated by reference into federal regulations. NIST also has online search tools that members of the public may use to locate other standards—including voluntary standards not incorporated by reference into federal regulations—but according to agency officials, the agency does not collect or maintain voluntary standards.

CPSC's voluntary standards activities are overseen by a Voluntary Standards Coordinator, appointed by the Commission's Executive Director. The coordinator is the senior agency official responsible for managing the Commission's voluntary standards program. One of the coordinator's main duties is to prepare and submit to the Commission a semiannual summary of the staff's voluntary standards activities. Duties also include providing advice and recommendations for the development of new voluntary standards or the revision of existing voluntary standards, in conjunction with CPSC management. The coordinator also proposes policies and guidelines concerning voluntary standards activities, reviews associated public comments, and prepares recommended policies for approval by the Commission. The coordinator works with SDOs and recommends and trains CPSC staff to serve as technical experts to those organizations. Further, the coordinator is the liaison to industry associations, other government agencies, and any other group interested in voluntary standards.

CPSC's Office of Compliance and Field Operations, currently with 166 staff, has primary responsibility for helping ensure compliance with product safety standards. Its activities include enforcing mandatory standards and reporting requirements, investigating product hazards, and determining corrective actions (such as recalls) for manufacturers not in compliance with safety standards. CPSC also has an Office of Import Surveillance and Inspection that coordinates enforcement efforts with U.S. Customs and Border Protection to help ensure import compliance with safety standards. CPSC has investigators stationed at some ports of entry to assist in surveillance activities. In a past report, we made recommendations to strengthen CPSC's ability to target unsafe consumer products, especially imported products. We recommended that CPSC work to educate foreign manufacturers about U.S. product safety standards and best practices, including the importance of complying with voluntary standards. CPSC concurred with our recommendation. The 2011-2016 Strategic Plan states that CPSC has been seeking to create and strengthen partnerships with domestic and international stakeholders, including foreign regulators and manufacturers, to improve product safety throughout the supply chain.
Also, CPSC's Office of Education, Global Outreach, and Small Business Ombudsman has separately developed and issued plans for addressing consumer product safety on a country-specific and regional basis.

Industry representatives and consumer groups we spoke to said that compliance with voluntary standards developed through the consensus process is generally considered to be high, although they do not track compliance. Some representatives and consumer groups said that compliance can reach 90 percent for some standards. However, consumer product safety experts suggested that standards for some products have lower compliance rates, especially for commonly low-priced items, products primarily sold over the Internet or by nonconventional retailers, products made by a large number of manufacturers, and products primarily manufactured overseas. For instance, cigarette lighters manufactured overseas and sold at low prices in the United States have been found to be noncompliant with voluntary standards.

Consumer product safety experts we spoke to generally said that industry prefers voluntary to mandatory standards. They noted that the voluntary standards development process is faster than mandatory rulemaking and allows the industry a greater level of input. According to CPSC, the time required for mandatory rulemaking varies depending on the complexity of the product or of the rule requirements, the severity of the hazard, and other agency priorities, among other factors. For example, a legal expert told us that a mandatory rulemaking for cigarette lighters took 10 years from the decision to take action to the final rule. CPSC also has been considering, since 1972, a mandatory rule to address the risk of fire associated with ignitions of upholstered furniture. Generally, the flexible process for developing voluntary standards is considered to facilitate revisions to the standards. Working through SDOs, interested parties have been able to revise existing standards to respond in a timely manner to emerging hazards or risks. According to two legal experts, a disadvantage of mandatory standards is that revision or repeal can be difficult. One expert also told us that because mandatory standards set fixed requirements for product safety, the rules can stifle product development and innovation. Industry participants told us that advantages of the voluntary standards process include open participation and proceedings by consensus, which can help ensure compliance with the resulting standards. Other industry representatives said that they also invest considerable time and resources in writing standards, which raises the likelihood of compliance.

Factors that affect compliance for some manufacturers include the difficulty of identifying and obtaining applicable standards. Some consumer product safety experts told us that some small businesses and foreign manufacturers are not aware of applicable standards for their products. CPSC has responded by extending greater outreach to these businesses through the agency's Office of Education, Global Outreach, and Small Business Ombudsman. The office coordinates with, and provides education and outreach activities to, various domestic and international stakeholders, including manufacturers, retailers, resellers, small businesses, and foreign governments.
Among its responsibilities, the office works with foreign governments and regulatory bodies to help them increase their capacity to develop voluntary and mandatory product safety standards. The office also plans to develop information and guidance tailored specifically to small batch manufacturers. Staff from this office plan to update the CPSC web page to assist small businesses in learning about their obligations under CPSIA, inform them about voluntary standards, and encourage them to comply. CPSC also plans to conduct two extended training exchanges with foreign partners, including developing country officials, to increase foreign regulatory agencies' understanding of CPSC procedures and policies and help ensure that CPSC safety standards are met for U.S.-bound exports.

Although compliance with voluntary standards is not legally mandated, some retailers require a certification mark or other proof of compliance from manufacturers before agreeing to sell their products in stores. For instance, according to a legal expert, specialty retailers who sell gas fireplaces require proof of adherence to a new standard for glass panels on the front of gas fireplaces, a standard that is being revised to address a safety hazard. For many products, consumers and retailers expect the products to meet a minimum safety standard, such as a voluntary standard. Some retailers conduct their own product safety programs, often certifying compliance with safety standards through testing at third-party labs, to better ensure the safety of products sold in their stores. In addition, some industry associations have programs to certify compliance with voluntary standards applicable to their members' products. Entities found not to be in compliance with applicable standards could lose the right to bear the association's certification mark. Industry associations with certification programs include those representing the furniture industry and children's products manufacturers. One furniture association provides hang tags to members who have paid to certify their conformance with the industry-developed standards, primarily addressing fire hazards. A group representing children's products manufacturers has implemented a lab testing and inspection process to certify members' compliance with applicable standards. Manufacturers contract with the industry group to receive certification that their products, such as cribs, strollers, and baby walkers, comply with standards.

Although industry representatives and legal experts we spoke to said that manufacturers largely prefer voluntary over mandatory standards, they also told us that certain industries have sought mandatory standards. Two reasons were cited for an industry's preference for mandatory standards: first, to level competition across an industry sector, especially where some manufacturers were not complying with the voluntary standard to which the rest of the industry agreed; and second, to preempt divergent state laws. The Lighter Association, a group representing cigarette lighter manufacturers, petitioned CPSC in 2001 to adopt the prevailing voluntary standard for lighters as a mandatory standard. The association cited widespread noncompliance with the voluntary standard, especially for lighters imported from China. Although CPSC has not yet promulgated a general rule on mechanical requirements for cigarette lighter safety, it adopted a regulation in 1994 requiring child-resistant mechanisms for disposable lighters.
A legal expert who has worked with the arts and creative materials industry told us that the industry sought to convert its voluntary standard, developed with input from consumers and product users, to a mandatory standard to preempt differing laws in at least seven individual states. Exposure to product liability lawsuits for noncompliance with voluntary standards is another factor that affects compliance. Consumer product safety experts also told us that the risk of incurring reputational and financial costs associated with product liability lawsuits provides an incentive for manufacturers to comply with voluntary standards. Courts generally consider noncompliance with a voluntary standard as relevant evidence to establish a product defect or to prove a case of negligence. By the same token, if litigants can show compliance with applicable voluntary standards, that compliance may provide evidence of the absence of a product defect or negligence. However, evidence of compliance usually is not sufficient on its own to negate liability. CPSC cannot compel compliance with voluntary standards. However, according to CPSC officials, the agency has requested that U.S. Customs and Border Protection seize at the ports defective products that are subject to voluntary standards and that constitute a substantial product hazard. CPSC also participates in voluntary standard development activities, although its effectiveness is limited by constrained resources and a restrictive meetings policy. While consumer product safety experts value CPSC’s input, they generally agree that earlier and more active participation could increase CPSC’s efficiency and effectiveness in developing standards. Since voluntary standards do not have the force of law, the Commission cannot compel compliance with them. Noncompliance with a voluntary standard, however, can inform a determination of a substantial product hazard by the CPSC. The CPSA defines a substantial product hazard as a failure to comply with an applicable consumer product safety rule, which creates a substantial risk of injury to the public, or a product defect, which (because of the pattern of defect, the number of defective products distributed in commerce, the severity of the risk, or otherwise) creates a substantial risk of injury to the public. If the CPSC finds that a product presents a substantial product hazard, it can lead to an enforcement action, such as a public notice or recall. Consequences for noncompliance with voluntary standards that amount to a substantial product hazard are discussed in the next section of this report. We found that CPSC does not routinely track broad product compliance with voluntary standards. Although the agency has internal guidance for monitoring compliance with voluntary standards, CPSC officials said that it has not conducted a formal program to test for product conformance with voluntary standards since 2002. The agency cited limited resources and competing priorities, including congressional mandates and monitoring mandatory standards, as reasons for not doing so. According to CPSC officials, following the enactment of CPSIA in 2008, the agency reallocated resources from voluntary standards activities toward meeting mandatory rulemaking deadlines required in the act.
With the enactment of CPSIA in 2008, CPSC was granted expanded legal authority relative to certain voluntary standards under section 15(j) of the Consumer Product Safety Act to create a substantial product hazard list. This authority allows the Commission to issue a rule for any consumer product or class of products identifying certain characteristics whose presence or absence is deemed a substantial product hazard. CPSC must determine that the characteristics are readily observable and that the hazard has been addressed by voluntary standards. CPSC must also determine that voluntary standards have been effective in reducing the risk of injury from the products and that there is substantial compliance with the voluntary standards. When CPSC publishes a rule making such determinations, the products involved are subject to all of the enforcement consequences that apply to a substantial product hazard. Among other actions, the product must be refused admission into the United States. CPSC works cooperatively with Customs and Border Protection staff at ports of entry to detect and seize defective products. Agency officials stated that, to date, CPSC has twice exercised authority under section 15(j) to identify products containing substantial product hazards: children’s upper outerwear containing drawstrings, because of risk of strangulation; and hand-supported hair dryers without integral immersion protection, due to risk of electric shock. We spoke with legal experts to discuss their views on the CPSC’s expanded authority to declare substantial product hazards. Two legal experts told us that exercising the authority essentially converts a voluntary standard to a mandatory one without undergoing the established rulemaking procedures. According to one expert, the expanded authority gives the CPSC the ability to use voluntary standards that were intended to address design and performance issues to create a mechanism for seizure of defective products at the ports, without putting the burden of proving a substantial product hazard on the CPSC. Another product safety expert said that the expanded authority will not substantially enhance CPSC’s enforcement capability because inspectors must have the ability to readily observe the hazard at the port of entry. Some hazards, such as lead content, are not readily observable and require testing for compliance. CPSC told us that while the section 15(j) authority allows it to respond more quickly to substantial product hazards, not enough time has passed to assess the effect this authority will have on helping ensure compliance with voluntary standards. CPSC staff participate in the voluntary standard development process by providing expert advice, technical assistance, and information based on data analyses of the numbers of and causes of deaths, injuries, or incidents associated with the product. According to CPSC, it supplies the standard-setting bodies with epidemiological and health science data, including extrapolated injury and death data from hospitals; death certificates associated with products causing the death where available; anecdotal data; and incident reports from SaferProducts.gov. CPSC officials said that support of voluntary standards development can be moderate or intensive. They told us that a moderate level of support would include reading the minutes of subcommittee meetings and monitoring the proceedings.
More intensive support may consist of conducting and presenting CPSC research, performing lab tests, and writing draft language for the standard. CPSC officials told us that in developing voluntary standards, CPSC interacts primarily with ASTM International for children’s, juvenile, toddler, and infant products; ANSI for products such as bicycles and garage door operators; and UL for electrical products. CPSC staff told us they have a representative who serves as a nonvoting member on the board at ANSI and on ANSI’s accrediting council. According to ANSI representatives, CPSC staff participate in discussions related to accrediting and maintaining procedures for international standards. Representatives from UL told us that CPSC staff participate in UL’s Consumer Advisory Council, which convenes at least once a year to discuss products and standards. ANSI’s role in standards development differs from that of SDOs. ANSI serves as administrator and coordinator of the U.S. private-sector voluntary standardization system. ANSI also accredits U.S. standards developers using criteria based on international requirements. SDOs accredited by ANSI include ASTM International, UL, and the National Fire Protection Association. In the most recent fiscal year, CPSC provided technical support for or monitored the development of 60 voluntary safety standards. These standards addressed hazards associated with cradles and bassinets, children’s play yards, portable generators, and garage door openers, among other products. According to CPSC’s Operating Plan, the agency plans to monitor 68 voluntary standards in fiscal year 2012, including standards addressing tip-over hazards of kitchen ranges, cadmium levels in children’s jewelry, strangulation risk posed by window blind cords, and sulfur emissions in drywall (see table 1). CPSC officials told us that voluntary standards monitoring activity decreased substantially after the enactment of CPSIA because of reallocation of resources to meet the act’s requirements. The number of standards selected for monitoring was at a 5-year low in fiscal year 2009; however, the number of voluntary standards selected for monitoring has increased in the past 3 fiscal years and is expected to continue at current levels in the near future. CPSC officials said that staff recommendations, based on criteria such as death and injury data, available resources, and exposure of vulnerable populations to hazards, guide the selection of standards to monitor. They told us that staff consider where participation in voluntary standard setting could help reduce unreasonable risk of injury posed by a product. Management considers and approves or rejects the staff recommendations based on Commission priorities and available resources. Staff-approved recommendations are then sent to the Commission for final approval. According to CPSC’s Operating Plan and Performance Budget, the agency plans to make one recommendation to voluntary standards or code organizations for a new standard or revision in fiscal year 2012. The Operating Plan also includes plans for two new data analysis or technical review activities on carbon monoxide alarms and enhanced smoke alarms. Additionally, 10 activities related to nanotechnology in consumer products are planned for fiscal year 2012. These activities will identify the potential release of nanoparticles from selected consumer products and determine the potential health effects from such exposure, which may lead to CPSC participation in voluntary standards development, according to CPSC officials.
CPSC officials said that the level of support provided by CPSC to standards development and monitoring is dependent on available resources. One CPSC staff member is assigned to each standard as a project manager responsible for monitoring committee activity and draft revisions. According to CPSC officials, the 68 standards to be monitored in fiscal year 2012 represent the limit the agency can handle given current resource and staff levels. For example, about 25 staff are responsible for monitoring the activities related to these standards. Sixty-eight standards is a small fraction of the standards developed for consumer products. For instance, ASTM International has developed more than 12,000 standards, while UL maintained more than 1,400 as of 2011. These standards cover many types of products, not exclusively consumer products. CPSC’s relationship with SDOs is outlined in CPSC regulations. The policy sets criteria for deciding on CPSC’s involvement in voluntary standards activities. The criteria include the likelihood that the voluntary standard will eliminate or adequately reduce the risk of injury addressed, the likelihood that there will be substantial and timely compliance with the voluntary standard, the likelihood that the voluntary standard will be developed within a reasonable period of time, openness to all interested parties, establishment of procedures to provide for meaningful participation in the development of standards by representatives of a variety of interested parties, and due process procedures. CPSC’s regulation guides the extent and form of CPSC staff involvement in voluntary standards organizations. Staff may attend standards development meetings, take an active part in the discussions, and provide data and explanatory material, but CPSC’s regulation prohibits staff from voting on the standards or from holding leadership positions in standards development committees. Except in extraordinary circumstances and with the approval of the Executive Director, staff cannot become involved in standards development meetings that are not open to the public (including members of the media) for attendance and observation. This may include technical subcommittees largely composed of industry representatives. The regulation also states that active involvement in standards development activity must not be done in a manner that might present an appearance of preferential treatment for one organization or group or put CPSC’s impartiality at risk. CPSC has authority to revise its regulations pertaining to voluntary standards activities. The first regulation concerning involvement in standards development was issued in 1978 and revised in 1989 and again in 2006. According to CPSC, its regulation is similar to the Office of Management and Budget’s (OMB) Circular No. A-119 (Revised), which provides guidance for agencies participating in voluntary consensus standards bodies. However, in our review of CPSC’s regulation, we found that the agency interprets its permissible level of participation more strictly than OMB guidance does for such activities as voting on standards and taking leadership positions. CPSC’s rationale for limiting involvement in standards development activity, as described in its regulation, is to maintain its independence—such as not appearing to endorse a specific standard.
OMB guidance states that agency representatives should participate actively and on an equal basis with other members, including full involvement in discussions and technical debates, registering of opinions, and, if selected, serving in leadership positions. According to OMB guidance, agency representatives may vote at each stage of the standards development process unless prohibited by law or their agencies. A January 2012 White House memorandum further outlines principles for federal government engagement in standards activities, especially where statute, regulation, or administration policy identifies a national priority. Specifically, it states that the federal government may need to be actively engaged or play a convening role to accelerate standards development and implementation, including supporting leadership positions for federal agency staff in SDO committees. CPSC, consumer groups, and industry officials with whom we spoke generally viewed CPSC’s participation in voluntary standards development activities favorably. Consumer groups and other consumer product safety experts told us that CPSIA has strengthened CPSC’s authority, effectiveness, and level of influence at SDOs. They also told us that industry now knows that if it does not develop an adequate voluntary standard, CPSC will issue a mandatory standard for those products specified by CPSIA. According to consumer representatives who have participated in the process, the dynamic has changed: prior to CPSIA, CPSC’s input was ignored or voted down. With its new authority, CPSC is more active and its input is incorporated a great deal more, resulting in stronger and more protective outcomes, especially for durable goods for infants. Consumer group representatives also told us that CPSC’s involvement in standards development has been effective for helping ensure consumer participation, especially since the passage of CPSIA. In one instance, a consumer group had concerns about the standards development process for window blind cords because of what it thought was a lack of transparency, limited access to information, and lack of consideration of its views after it was excluded from participating in a technical subcommittee. CPSC appealed directly to industry groups to open the process, and consumer groups eventually were allowed to participate in the window blinds standard development. CPSC officials told us that staff’s effectiveness in standards development partially depends on their own persuasiveness and the direction given by top management. Management recommends and approves staff to participate in standards development activity based on their ability to listen, negotiation skills, analytical proficiency, and level of technical and scientific expertise. Staff also receive training from the Voluntary Standards Coordinator to prepare for SDO meetings. According to CPSC officials, staff selected to participate in standards development activities may seek further advice and training from the Voluntary Standards Coordinator and other colleagues as needed. While consumer product safety experts we spoke to said that CPSC has good working relationships with the SDOs, some added that the agency could take a more active role in standards development activities.
Voluntary standard committee participants told us that they value CPSC’s contributions during standards development; one group especially valued its incident data and analysis, and another appreciated the agency’s ability to help ensure an inclusive process. One industry official told us that the industry works collaboratively with CPSC; for example, it receives data from CPSC in the process of developing voluntary standards for particular products. In one case, CPSC had identified, through its incident data, a laceration hazard resulting from a certain design of high chair with two hooks on the back. CPSC communicated this information to industry representatives, and it was incorporated into the voluntary standard process for the product. Another industry stakeholder told us that CPSC is viewed as a valuable partner in stronger standard development. By simply being present at voluntary standards development meetings, CPSC shows the industry that it is monitoring its activities. Other consumer product safety experts said that CPSC’s participation in committees could be more active and its position on the draft standards better articulated. Because of limitations stemming from CPSC’s regulation governing staff participation in standards development activity, the resulting standard may not fully reflect CPSC staff input, and the standards development process can be delayed. According to some consumer product safety experts, CPSC staff are restrained and act largely as observers at standard development committee meetings. Others said that, at times, CPSC staff do not challenge the adequacy of the standards. For example, although CPSC converted the voluntary standard for all-terrain vehicles to a mandatory standard in 2009, as required by CPSIA, in the view of some experts, all-terrain vehicles remain covered by a weak standard. In public statements regarding the all-terrain vehicle standard, one CPSC commissioner said that the recent update to the standard, while not diminishing the safety of the product, remains a low threshold for federal safety standards. Our analysis of CPSC public recall notices showed that there have been 36 recalls of all-terrain vehicles involving 15 companies for fiscal years 2007 through 2011. Manufacturers have recalled all-terrain vehicles for reasons such as a risk of a crash caused by pieces of the main suspension breaking off and a risk of loss of vehicle control due to faulty speed controls. Recall notices do not indicate whether the hazards posed by the product are covered by voluntary or mandatory standards. Consumer product safety experts said that if CPSC challenged the adequacy of standards more frequently, it would send a signal to industry that the agency was committed to obtaining a high level of safety in voluntary standards. Some industry representatives emphasized that they wanted CPSC’s more active and earlier participation in standards development. They said that they would benefit from more information about CPSC’s views on specific provisions of a standard, such as certain performance requirements, level of risk tolerance, or aspects of a product CPSC wanted changed. Some industry representatives said that if the agency’s position on a standard were more apparent from the outset, the process would be faster and more efficient, which could result in stronger standards.
One industry representative also noted that more active and earlier participation would allow CPSC to consider unforeseen business consequences of its proposed revisions to standards earlier in the process. For instance, according to this industry representative, a revised standard for child bed rails was delayed because CPSC proposed costly revisions after the standard had already been approved by SDO participants. Manufacturers can face consequences ranging from civil monetary penalties to the reputational and financial losses associated with corrective action if their products fail to comply with voluntary standards and if they present a substantial product hazard. Corrective actions include recalls, which encompass refunds, replacements, or repairs. CPSC may also sue to prevent distribution or sale of a product pending completion of a recall proceeding. Although voluntary standards do not have the force of law, manufacturers are legally required to report substantial product hazards to CPSC. Every manufacturer of a consumer product must inform the Commission if it obtains information that reasonably supports the conclusion that the product contains a defect that could create a substantial product hazard. Such a report may include information the manufacturer obtained about a product outside the United States if it is relevant to products sold or distributed in the United States. Manufacturers that knowingly fail to report potential substantial product hazards could be subject to civil or criminal penalties. In 2011, CPSC negotiated out-of-court settlements in which five companies agreed to pay $3.26 million in civil penalties related to their failure to report substantial product hazards to the agency. Although failure to meet a voluntary standard alone is not sufficient for CPSC to take action against a company—because voluntary standards are not enforceable by law—CPSC’s analysis of the evidence of noncompliance and determination that the product could pose a substantial product hazard can lead to corrective action. According to CPSC’s interpretive regulations, compliance or noncompliance with applicable voluntary standards may be a factor in determining whether a substantial product hazard exists. To determine if corrective action is needed, CPSC staff review incident reports on a daily basis and forward them to appropriate integrated teams for extensive analysis. CPSC integrated teams comprise subject matter experts such as engineers, human factors experts, health scientists, statisticians from the Office of Hazard Identification and Reduction, and compliance officers from the Office of Compliance. The teams then assess the reports for hazard type, whether the incident affected vulnerable populations, and the severity of injury. CPSC also collects data on injuries and deaths for products under its jurisdiction, and staff conduct investigations on specific injury cases to gain better knowledge of how the product was involved. Based on analysis of these data, the integrated teams decide whether further action is warranted, such as additional monitoring of the situation, an in-depth investigation, or a product safety assessment. CPSC officials told us that the agency decides on further actions based on other agency priorities, resources, and the level of risk that a product poses. Once CPSC has identified a hazardous product, the agency will take action to remove the product from the market.
If a recall is necessary, CPSC staff negotiate with the responsible company to seek a voluntary recall, if appropriate. Manufacturers that report product defects propose a remedy that must be deemed acceptable to CPSC staff. This often involves the product’s recall, which consists of a refund of the purchase price, repair, or replacement of the product. CPSC considers whether the plan adequately addresses the risk of injury presented by the product. For example, if the manufacturer’s proposed solution was to repair its product, CPSC engineers would test the repair to determine whether it addressed the hazard adequately. Similarly, if the proposed solution was a refund, CPSC officials would evaluate the refund process to determine whether it would cause undue burden to the consumer. CPSC takes steps to ensure that recalled products are not reintroduced into the market through secondhand stores by monitoring the Internet and through market surveillance programs. Table 2 contains information about CPSC’s recall activities for fiscal years 2007 through 2011 for products covered by mandatory standards and those that are unregulated. Our review of CPSC documents showed that the agency focused much of its surveillance and compliance work on imported products. According to CPSC, approximately 80 percent of recalls from 2008 through 2011 have been of imported products. The agency’s Office of Import Surveillance and Inspection has primary responsibility for product surveillance at ports of entry in cooperation with other appropriate federal agencies. U.S. Customs and Border Protection notifies CPSC and other regulatory agencies with import safety responsibilities of the arrival of imported products and provides information about those products. CPSC identifies potentially unsafe products and requests that U.S. Customs and Border Protection set them aside for CPSC examination. Once samples are delivered to or taken by CPSC for examination, CPSC may detain the shipment pending further examination and testing, conditionally release the shipment to the importer’s premises pending examination and testing, or release the shipment to the importer outright. Compliance investigators examine the sample to determine whether it (1) complies with the relevant mandatory standard or standards; (2) is accompanied by a certification of compliance with relevant product safety standards that is supported by testing, in some cases by a third party; (3) is or has been determined to be an imminently hazardous product; (4) has a product defect that presents a substantial product hazard; or (5) is produced by a manufacturer who failed to comply with CPSC inspection and recordkeeping requirements. According to a CPSC notice, from October 1, 2011, to December 1, 2011, officials identified about 240 noncompliant products at ports of entry, including defective hair dryers, lamps, and holiday lights. Table 3 illustrates standards activities and recall actions for selected products for fiscal years 2007 through 2011. Some products are covered by both mandatory and voluntary standards, which may address different aspects of the product features. For example, all-terrain vehicles, cribs, infant bath seats, infant walkers, and cigarette lighters are subject to both mandatory and voluntary standards. CPSC has no tracking mechanism specific to voluntary standards in its compliance database, but the agency can identify patterns of noncompliance and address safety hazards.
CPSC tracks reports of noncompliance with mandatory standards and identifies potential product hazards. CPSC has two internal databases for tracking noncompliance: one for regulated products (products subject to mandatory standards) and the other for products that could pose a substantial product hazard (either unregulated products or products subject to voluntary standards). CPSC officials told us that there is no field in the databases to indicate whether a product is covered by one of the thousands of existing voluntary standards. However, they noted that they have internal policies for tracking compliance with voluntary standards. According to agency officials, CPSC’s policy states that when staff have determined that noncompliance with voluntary standards amounting to a substantial product hazard has occurred, staff should create a file with a case number to track this issue. The case number is an internal tracking number that does not correspond to the unique identifier assigned by SDOs and cannot be linked to a voluntary standard. However, agency officials said that if CPSC finds that the product poses a substantial product hazard and staff determine that the voluntary standard is inadequate or that no standard exists, they refer the case to the voluntary standards coordinator to address through standard setting activities. CPSC’s compliance databases for both the regulated products and substantial hazard (section 15) products are case management systems. According to CPSC officials, more than 50,000 distinct firm names are in the databases. CPSC can classify incidents by manufacturer, retailer, distributor, and country of origin. In some cases a foreign company may have a U.S. agent or representative, making it difficult for CPSC’s database to discern whether the reporting company is foreign or domestic. In addition, CPSC assigns more than 800 different product and product category codes to help track case files. CPSC’s case files track information about the firm, the product, the type of noncompliance, and other relevant information. The agency also tracks correspondence with the manufacturer, distributor, retailers, and public about the case, as well as the corrective action implemented to address the noncompliance. Agency officials said that the databases have the capacity to track 26 hazards in 8 hazard categories, including fire hazards for fabrics, materials, and electric appliances; mechanical hazards in children’s, household, and sports and recreation products (involving choking, strangulation, and other injury hazards); electrocution; and chemical hazards. Staff use data from the compliance databases to identify types of product defects such as those associated with design, construction, and packaging of a product, or absence of warning labels or instructions. They also track the number of defective products in the market and assess the severity of risk of defects and likelihood of injury. In addition to tracking trends from compliance data, agency staff, including the Hazard Analysis Division, perform a range of statistical analyses across reported incident data to identify patterns of noncompliance. CPSC staff said they analyze compliance in terms of the product rather than the manufacturer.
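To illustrate the kind of case record these case management systems appear to maintain, the following is a minimal sketch written in Python; the field names, category values, and example data are illustrative assumptions drawn from the description above, not CPSC's actual database schema.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ComplianceCase:
    # Hypothetical case record; the fields mirror the description above, not CPSC's schema.
    case_number: str                  # internal tracking number, not linked to an SDO standard identifier
    firm_name: str                    # one of the 50,000+ distinct firm names in the databases
    firm_role: str                    # e.g., "manufacturer", "retailer", "distributor"
    country_of_origin: Optional[str]  # may be unclear when a foreign firm uses a U.S. agent
    product_code: str                 # one of the 800+ product and product category codes
    hazard_category: str              # e.g., "fire", "mechanical", "electrocution", "chemical"
    defect_type: str                  # e.g., "design", "construction", "packaging", "missing warnings"
    regulated: bool                   # True if subject to a mandatory standard; False for section 15 cases
    units_in_market: int = 0          # estimated number of defective units distributed in commerce
    corrective_action: Optional[str] = None                  # e.g., "recall - refund", "repair", "replacement"
    correspondence: List[str] = field(default_factory=list)  # letters to and from the firm, retailers, and public

# Hypothetical example of a section 15 (substantial product hazard) case.
example = ComplianceCase(
    case_number="15-0001", firm_name="Example Imports LLC", firm_role="manufacturer",
    country_of_origin=None, product_code="0555", hazard_category="mechanical",
    defect_type="design", regulated=False, units_in_market=25000,
)

A record of this kind supports the classifications described above (by firm role, country of origin, product code, and hazard category) without identifying which voluntary standard, if any, applies to the product.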
Incident data come from various sources, including retailers; manufacturers; public safety professionals; health care professionals; death certificates; news reports; state and local governments; and incident reports submitted by consumers through CPSC’s website, SaferProducts.gov. CPSC staff identify potential emerging patterns, produce estimates of injuries and quantify the frequency of fatalities based on emergency room data, test for injury trends over time, and characterize hazard patterns. Analysts evaluate these data on a daily basis and report increased frequency of reports for a given product or manufacturer to appropriate teams in the agency. Officials said that on a weekly basis, analysts apply algorithms across reports to characterize the frequency by product code. They generate statistics comparing the number of reports received in the week for particular products to the number received for the same product over a 20-week period. CPSC officials then use the data to determine which incidents should be investigated and report on their findings to internal teams. For example, CPSC has identified instances of appliance tipovers and issued press releases with information to consumers to raise awareness of tipover hazards. In response, the agency plans to participate in standard revision activity to address kitchen range tipovers. In a previous report we addressed CPSC work resulting from identification of certain hazard patterns. We reported that during the 1980s, the data CPSC collected on injuries and fatalities related to all-terrain vehicles, especially among children, led it to file a lawsuit alleging that the vehicles were an imminently hazardous product. CPSC and manufacturers eventually settled the lawsuit through a consent decree in which manufacturers and distributors agreed to implement certain safety measures and stop selling certain vehicles considered dangerous for young children. In its fiscal year 2012 Operating Plan and Performance Budget, CPSC also noted that it plans to update safety publications addressing children’s hazards, fire and electrical hazards, mechanical hazards, sports and recreational hazards, and chemical and combustion hazards. Voluntary standards establish safety guidelines for many of the thousands of consumer products in CPSC’s jurisdiction. CPSC is required by law to rely on these standards, developed through consensus by industry, consumer, and government participants, when the standards are adequate to address the risk of harm and substantial compliance with them is likely. Because of the substantial prevalence of voluntary standards for consumer products, CPSC’s early and active participation in standards development activity is critical to establishing adequacy of the standard. If CPSC finds that a manufacturer does not comply with a voluntary standard and the noncompliance creates a substantial product hazard, the agency can seek a corrective action, such as a recall; however, CPSC does not have the authority to compel compliance with voluntary standards as such. For fiscal years 2008 through 2011, 80 percent of recalls have been of imported products that may be subject to voluntary standards, highlighting challenges CPSC faces in helping to ensure the safety of consumer products. CPSC has taken steps to ensure compliance by (1) performing industry surveillance through analysis of incident and other data, (2) participating in standards development activities, and (3) monitoring selected voluntary standards.
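The weekly incident-frequency screening described earlier in this section can be sketched in Python as follows; the data layout and the flagging threshold are illustrative assumptions, not CPSC's actual algorithm or parameters.

def flag_unusual_products(history, current_week, baseline_weeks=20, ratio_threshold=3.0):
    """Flag product codes whose report count this week is high relative to a 20-week baseline.
    history maps product_code -> list of weekly report counts, oldest first;
    current_week maps product_code -> number of reports received this week.
    The threshold of 3 times the baseline mean is an assumption for illustration."""
    flagged = []
    for code, count in current_week.items():
        baseline = history.get(code, [])[-baseline_weeks:]
        baseline_mean = sum(baseline) / len(baseline) if baseline else 0.0
        if (baseline_mean == 0.0 and count > 0) or (baseline_mean > 0.0 and count / baseline_mean >= ratio_threshold):
            flagged.append((code, count, baseline_mean))
    return flagged

# Hypothetical example: reports for product code "0555" spike this week relative to its baseline.
history = {"0555": [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3, 2, 1, 2, 2, 3, 2],
           "0910": [5, 6, 4, 5, 6, 5, 4, 6, 5, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 5]}
this_week = {"0555": 9, "0910": 5}
print(flag_unusual_products(history, this_week))  # flags "0555" for follow-up review

A screen of this kind only surfaces candidates; as described above, analysts report the results to integrated teams, which decide whether an increase warrants additional monitoring, an in-depth investigation, or a product safety assessment.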
Although CPSC regularly participates in standard development activity to the extent possible, consumer product safety experts we spoke to generally agreed that earlier and more active CPSC participation could increase its efficiency and effectiveness in developing standards. Our review also found that CPSC regulations concerning meetings policies and allowable conduct for CPSC staff participating in standards development activity are generally more restrictive than the existing general government policy on such participation. While OMB guidance gives agencies discretion to determine their level of participation in standard setting activities, CPSC has chosen to limit participation to maintain impartiality and avoid the appearance of endorsing a specific voluntary standard. Further, a recent White House memorandum on national standards policy states that where statute, regulation, or administration policy identifies a national priority, the federal government may need to be actively engaged or play a convening role to accelerate standards development and implementation. Changing regulations to enable staff to participate more actively, especially when working with technical committees for which CPSC has expertise, and permitting CPSC staff to vote on standards could result in stronger voluntary standards without compromising CPSC’s independence. Without more active participation from CPSC, standards emerging from standards development organizations risk being less stringent and may be inadequate to protect the public from hazards. To strengthen the adequacy of voluntary standards, we recommend that the Chairman of CPSC direct agency staff to review the policy for participating in voluntary standards development activities and determine the feasibility of assuming a more active, engaged role in developing voluntary standards. We provided a draft of this report to CPSC for comment. In its written comments, reproduced in appendix II, CPSC supported our recommendation and wrote that staff would review agency policies and determine the feasibility of changes to the policies. CPSC staff also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to interested congressional committees and the Chairman and commissioners of CPSC. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To evaluate the extent to which manufacturers comply with voluntary standards for consumer products, we interviewed officials from the Consumer Product Safety Commission (CPSC) and national consumer, industry, standard-setting, and legal organizations that have expertise in working on voluntary standards development for consumer products. We reviewed internal CPSC operating procedures and learned about the agency’s outreach programs to educate the public about safety standards. We reviewed statutory authorities and procedures for establishing voluntary standards.
We interviewed the three standards development organizations that coordinate the development of more than 90 percent of voluntary standards developed with CPSC staff technical support to learn about how standards and certification programs are developed. To evaluate CPSC’s authority and ability to encourage compliance with voluntary standards, we reviewed CPSC’s statutory and regulatory authority related to voluntary standards. We also reviewed CPSC standard operating procedures, performance and accountability reports, and budget documents to obtain information about CPSC’s work plans with respect to voluntary standards. We met with cognizant CPSC officials, including all of CPSC’s current commissioners and the Chairman, to discuss their authorities and ability to enforce them. We reviewed relevant laws, regulations, and our prior reports on CPSC’s authorities. We interviewed legal experts in the consumer product safety field regarding CPSC’s authorities. We conducted a literature search for information regarding CPSC’s effectiveness in getting manufacturers to comply with voluntary standards. We attended a conference on the adequacy of voluntary standards sponsored by the Consumer Federation of America and a conference by the International Consumer Product Safety and Health Organization on trends in international consumer product safety. To evaluate the consequences for manufacturers that fail to comply with voluntary standards, we reviewed documents from CPSC officials and obtained and reviewed publicly available data on recalls and other corrective actions. We obtained and analyzed data collected by CPSC through SaferProducts.gov regarding product safety incident reports and corrective actions assigned to manufacturers whose products did not comply with voluntary standards. We assessed the reliability of these data by (1) reviewing existing information about the data and the system that produced them and (2) interviewing agency officials knowledgeable about the data and related management controls. We found the data to be reliable for the purposes of determining the number and trends of product safety incident reports and corrective actions. We interviewed CPSC officials, legal experts, and consumer and industry participants to learn of possible corrective actions that could be imposed on firms that fail to comply with voluntary standards. Further, we conducted a legal literature search for information about CPSC’s authorities to determine consequences for manufacturers who fail to comply with voluntary standards. To evaluate CPSC’s efforts to identify patterns of noncompliance with voluntary standards, we interviewed CPSC officials about their data collection methods and internal processes for analyzing incident data and tracking patterns. We obtained and reviewed data from CPSC’s compliance databases to identify (1) the number of reported instances of noncompliance; (2) the number of times these instances led to a corrective action; (3) the numbers of corrective actions that resulted; (4) the number of product units recalled; and (5) the type of standard, if any, that covered the product. We assessed the reliability of these data by (1) reviewing existing information about the data and the system that produced them and (2) interviewing agency officials knowledgeable about the data and related management controls. Based on this assessment, we determined the data to be sufficiently reliable for the purposes of this report. 
We conducted this performance audit from January 2012 to May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Debra Johnson, Assistant Director; Nina E. Horowitz; DuEwa Kamara; Angela Messenger; Barbara Roesmann; Jessica Sandler; Andrew Stavisky; and Henry Wray made major contributions to this report.
Growing numbers of recalls in 2007 and 2008, particularly of children’s products, focused increased attention on CPSC. Consumer products can be subject to mandatory or voluntary standards, or both. Questions have been raised about the level of compliance with voluntary standards and CPSC’s ability to encourage compliance. The Consolidated Appropriations Act of 2012 directed GAO to analyze manufacturers’ compliance with voluntary industry standards. This report evaluates (1) what is known about the extent to which manufacturers comply with voluntary standards for consumer products, (2) CPSC’s authority and ability to require compliance with voluntary standards, and (3) the consequences for manufacturers that fail to comply with voluntary standards. To do this, GAO reviewed CPSC’s statutory and regulatory authorities to encourage compliance with voluntary standards; reviewed agency documents and literature on consumer product safety; analyzed data on CPSC corrective actions; and met with representatives from national consumer, industry, legal, and standard-setting organizations who have expertise in developing consumer product safety standards. Although the Consumer Product Safety Commission (CPSC) enforces compliance with mandatory federal safety standards, it is also required by law to rely on voluntary safety standards when it determines that the standard adequately addresses the product hazard and is likely to have substantial compliance. Voluntary standards—developed by industry, consumer, and government participants through a consensus process—cover many of the thousands of types of products in CPSC’s jurisdiction. Compliance with voluntary standards is not routinely tracked, but it is generally considered to be high by industry participants. Compliance with these standards also depends on industry and legal factors, such as retailer requirements to demonstrate proof of compliance with voluntary safety standards and risk of liability in product liability lawsuits. Because voluntary standards do not have the force of law, CPSC cannot compel compliance with them. However, noncompliance with a voluntary standard can inform a determination of a substantial product hazard by the CPSC that in turn can lead to CPSC enforcement actions. CPSC has exercised its expanded authority to place a product on the substantial product hazards list. Specifically, it designated drawstrings from children’s upper outerwear and hair dryers without a ground fault circuit interrupter as hazardous products, and Customs has seized violative items at ports. CPSC also participates in standard development activities with industry and consumer representatives and monitors select voluntary standards. CPSC attends standard development meetings, supplies hazard and injury data and analysis, and provides input on draft standards. However, CPSC’s regulation prohibits staff from voting on the final standards or from participating in any meeting that excludes other groups, such as media or consumers. CPSC’s rationale for limiting involvement in standards development activity is to maintain its independence—such as not appearing to endorse a specific standard. Office of Management and Budget guidance gives agencies discretion to determine their level of participation in standard setting activities, including full involvement in discussions, serving in leadership positions, and voting on standards. 
A January 2012 White House memorandum states that the federal government may need to be actively engaged in standards development and implementation, including playing an active role in standard setting and assuming leadership positions in Standard Development Organization committees. Committee participants GAO spoke to value CPSC’s input but generally agreed that CPSC should participate earlier and take a more active role in standards development. These actions could enhance CPSC’s oversight, and may strengthen voluntary standards. Manufacturers that fail to comply with voluntary standards can face consequences when CPSC has determined that noncompliance poses a significant risk of injury or death to consumers. CPSC can take corrective action against the manufacturer, including recalls, or take longer term action to ban the hazardous product. CPSC has focused much of its surveillance and compliance work on imported products. For fiscal years 2008 through 2011, 80 percent of CPSC recalls have been of imported products that may be subject to voluntary standards, highlighting challenges CPSC faces in helping to ensure the safety of consumer products. To strengthen the adequacy of voluntary standards, CPSC should review the policy for participating in voluntary standards development activities and determine the feasibility of assuming a more active, engaged role in developing voluntary standards. CPSC supported the recommendation.
In almost every year an influenza virus causes acute respiratory disease in epidemic proportions somewhere in the world. Influenza—also called “the flu”—is more severe than some of the other viral respiratory infections, such as the common cold. Most people who get the flu recover completely in 1 to 2 weeks, but some develop serious and potentially life-threatening medical complications, such as pneumonia. People who are age 65 or over or who have severe chronic conditions are much more likely to develop serious complications than are younger, healthier people. In an average flu season (winter months), influenza contributes to as many as 20,000 deaths and 114,000 hospitalizations in the United States. Occasionally, worldwide influenza epidemics—called pandemics—occur that can have successive “waves” of disease and last for up to 3 years. Documented accounts of such pandemics cover the past 300 years, with three occurring in the 20th century. Notable among these was the pandemic of 1918—called the “Spanish flu”—which killed at least 20 million people worldwide, including 500,000 in the United States. For reasons still not completely understood, many of the fatalities during the 1918 pandemic were young adults, and many people reportedly died within hours after the first symptoms appeared. The pandemics of 1957 (“Asian flu”) and 1968 (“Hong Kong flu”) caused dramatically fewer fatalities—70,000 and 34,000, respectively, in the United States—primarily because of antibiotic treatment of secondary infections and more aggressive supportive care. Nevertheless, both caused substantial mortality and social disruption resulting from high absenteeism among providers of health care and other essential community services, such as police and firefighters. The characteristics of influenza viruses make the disease difficult to control, and its eradication is not a realistic expectation. Influenza viruses undergo minor but continuous genetic changes from year to year. Periodically, but unpredictably, an influenza virus changes so significantly that any immunity conferred by previous vaccinations or infections is not effective, creating the potential for a pandemic. The dramatic genetic changes that produce variants responsible for widespread illness and death, such as those that caused the 1957 and 1968 pandemics, probably involve the mixing of two strains in a single host. For example, strains of the influenza virus that are found in birds can mix with strains found in other host animals, such as pigs, to produce a new, and possibly virulent, strain that infects people. In 1997 a second—never before seen—method for dramatic change was revealed when an avian influenza virus not previously known to infect people directly infected humans without an intermediate host. The virus killed 6 of the 18 people in Hong Kong who became ill. Although the disease did not readily spread among humans, had it acquired the ability to do so, it might have become very difficult to control. Because new influenza viruses will continue to emerge, many experts believe another pandemic is inevitable. Public health experts and state and federal officials view influenza vaccine as the cornerstone of efforts to prevent and control annual epidemic influenza as well as pandemic influenza. Deciding which viral strains to include in the annual influenza vaccine depends on data collected from domestic and international surveillance systems that identify prevalent strains and characterize their effect on human health.
In the United States, CDC monitors data on the disease and the virus from surveillance that occurs in all 50 states and the District of Columbia year-round but with intensified efforts during the October through May flu season. Domestic surveillance consists of test data from 138 laboratories that receive specimens year-round, mortality data from 122 cities that account for about one-third of all deaths, and weekly reports from about 400 physicians and state epidemiologists regarding the extent and intensity of influenza illness. In addition, CDC participates in international disease and laboratory surveillance sponsored by the World Health Organization (WHO), which operates in 83 countries. Officials at HHS, WHO, and state public health agencies have begun to develop strategies to reduce influenza-related illness, death, economic loss, and social disruption, such as the closure of schools and hospitals and decreased access to utilities and other essential services. In many cases, state and federal officials are integrating these strategies with response plans for such public health emergencies as natural disasters and bioterrorist events. However, unlike many natural disasters, which often have fairly localized effects, an influenza pandemic is likely to affect many locations simultaneously. This widespread nature may preclude the ability to shift human and material resources from unaffected areas to locations in great need, a possibility that heightens the importance of planning during the prepandemic period. Vaccines are considered the first line of defense against influenza to prevent infection and control the spread of the disease. The ability to successfully use vaccines to prevent influenza-related illness and death during the first wave of a pandemic, however, relies on certain conditions that have not been realized in the past, and may not occur in the future. Problems experienced in past influenza pandemics include the inability to produce a sufficient quantity of vaccine before outbreaks occur in the United States and variations in the extent to which the manufactured vaccine is effective in preventing illness among various sectors of the vaccinated population. Annual influenza vaccine production is a complex process involving vaccine manufacturers, health care experts, and federal agencies, primarily the FDA. The process, which involves growing the virus for vaccine in fertilized chicken eggs, requires several steps, generally taking at least 6 to 8 months between about January and August each year, as shown in table 1. Administering the vaccine to the population is estimated to take an additional 1 to 2 months, or even longer if a second dose of vaccine is required. After inoculation, it takes about 2 weeks for adults and up to 6 weeks for children to achieve optimal protection under a one-dose regimen, with an additional 4 weeks if a booster shot is needed a month later. Production capacity for vaccine is about 80 million doses per year, which FDA officials and vaccine manufacturers agree can be expanded to produce vaccine for the entire U.S. population under certain conditions. However, these conditions were not realized during the pandemics of 1957 and 1968, when immunization efforts failed to have any perceptible effect because too little vaccine was administered too late. HHS officials and vaccine manufacturers agree that because of the complexity of the vaccine production cycle, problems are also likely to occur in a future pandemic.
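A rough back-of-the-envelope calculation using only the figures cited above (6 to 8 months to produce vaccine, 1 to 2 months to administer it, and about 2 to 6 weeks after inoculation to reach optimal protection under a one-dose regimen) illustrates the total elapsed time involved; the Python arithmetic below is illustrative only and is not an HHS or CDC estimate.

WEEKS_PER_MONTH = 4.33            # approximate conversion, used only for this illustration

production_months = (6, 8)        # time to produce vaccine
administration_months = (1, 2)    # time to administer vaccine to the population
protection_weeks = (2, 6)         # time after inoculation to reach optimal protection (one dose)

low = production_months[0] + administration_months[0] + protection_weeks[0] / WEEKS_PER_MONTH
high = production_months[1] + administration_months[1] + protection_weeks[1] / WEEKS_PER_MONTH

print(f"Roughly {low:.1f} to {high:.1f} months from the start of production to optimal protection")
# Prints roughly 7.5 to 11.4 months, not counting any additional time if a second dose or booster is required.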
Several factors can hinder timely vaccine production, including (1) the speed of production compared to the speed at which the virus infects a population, and (2) how well the virus can be replicated for mass production. While the global influenza surveillance system provides valuable information for deciding which viral strains to include in the annual influenza vaccine, limits on the speed with which vaccines can be produced may hinder pandemic response capability. Because people lack immunity to a pandemic strain and such a virus may be more virulent, pandemic strains may spread more quickly. Experts involved in monitoring the identification and spread of influenza viruses estimate that a pandemic strain originating in a foreign country could arrive in the United States sooner than vaccine could be produced. FDA officials and vaccine manufacturers told us that production of influenza vaccine cannot be shortened to less than the current 6 to 8 months given the existing technology and safety standards. However, as table 2 shows, past pandemics and new strains that might have heralded a pandemic have generally spread to the United States in less time. NIH is developing a library of reagents of all strains known to circulate among animals that has the potential to shorten the time required to identify a new virus. More rapid identification could help reduce the time needed to produce an effective vaccine should these strains appear in humans. However, more rapid production would not ensure that sufficient vaccine would be available before the first wave of influenza outbreak occurs, especially if the pandemic originates in the United States. Even assuming that the next pandemic originates outside of the United States, experts estimate the warning time prior to reaching U.S. soil may range from about 1 to 6 months. NIH and others are sponsoring research to develop new types of vaccines, but an all-purpose vaccine effective against a broad spectrum of influenza strains that could be produced in advance of a pandemic has not materialized. The inflexibility of the vaccine production cycle also could contribute to delays in the availability of an influenza vaccine. To help ensure that vaccines are ready to be distributed in time for the flu season each fall, annual influenza vaccine production in the United States routinely occurs earlier in the year, from January through August. Because no market exists for vaccine after this period, manufacturers switch their capacity to other uses between about mid-August and December. This annual vaccine production cycle may not coincide with the timing needed to respond to an outbreak of a new influenza strain. For example, in July 1997, public health officials at CDC determined on the basis of surveillance data from Australia that a new influenza strain was circulating and would be likely to cause widespread illness in the United States during the upcoming flu season. But by July, vaccine production was almost complete, and the new strain could not be added. As a result, the vaccine for the 1997-98 flu season in the United States was, according to CDC reports, less effective in preventing influenza illness than in previous years. As table 3 shows, other pandemic and newly detected virus strains have also been identified after the annual vaccine production cycle had begun.
Manufacturers say they are willing to maintain year-round production capacity should the government wish to fund the necessary costs of maintaining unused capacity during nonpandemic periods. To date, HHS has not developed contingency plans for expanded capacity or analyzed whether government funding to maintain ongoing manufacturer capacity is feasible or desirable. Such an analysis would need to consider other potential production problems that may further preclude vaccine availability. One potential production problem is that influenza strains differ in how well they can be mass-produced for vaccine, which may negatively affect the quantity of vaccine that can be produced in a given year. To create a vaccine, manufacturers first receive the reference strain of virus from FDA. This reference, or “seed,” virus is generally made up of bits of the selected influenza virus that have been combined with another influenza virus that grows more quickly. Manufacturers then mass-produce this “high-growth-reassorted” virus in fertilized chicken eggs and harvest it to make the vaccine. Problems have occurred when a particular virus strain either cannot be grown in eggs or grows too slowly. For example, the strain identified in Hong Kong in 1997 was an avian strain that killed chick embryos, a factor that complicated U.S. production of a vaccine. More recently, difficulties replicating and processing one strain included in the vaccine for the 2000-01 influenza season have contributed to lower-than-anticipated production yields and delays in distributing vaccine supplies. To address this problem, manufacturers and others are studying the feasibility of switching from an egg-based to a tissue-based production method, but the latter method has not been licensed by FDA and the overall benefits are not clear. For example, while some avian strains of influenza may grow more readily in tissue than in chicken eggs, others may not. Alternative attempts to grow the 1997 Hong Kong virus in cell substrates other than eggs were, in some cases, more successful than egg-based methods, but difficulties still hindered mass vaccine production. Some manufacturers told us that the cost of switching production methods may not be worth the investment because tissue-based production may result in lower yields of vaccine. For example, one manufacturer said that growing the virus in tissue takes approximately 5 days, while growing the virus in eggs takes 11 days, saving less than 1 week in the total production cycle. New technology based on a DNA vaccine may resolve these production problems, while reducing production time. However, researchers estimate it will be at least 5 to 10 years before this technology is available for vaccine production. Vaccinating the entire U.S. population does not guarantee everyone will be protected from influenza-related illness and death. Information regarding the extent to which vaccines have been effective in preventing influenza is limited, but available studies indicate vaccine effectiveness may vary significantly from year to year based on both vaccine-related factors and the demographics of the population receiving the vaccine. For example, vaccine preparation, dosage, and the degree to which the vaccine matches the virus circulating in the community all affect vaccine effectiveness. Demographic factors that influence how well each person’s immune system responds to the vaccine generally include the person’s age and extent of underlying chronic illness or disease.
Although up to about 80 million doses of vaccine are administered each year, no regular program exists to determine how effectively the vaccine performs. While HHS officials told us they see some effect from vaccination coverage, other experts point to national data trends that have not shown a clear correlation between changes in influenza-related illness and death and changes in the proportion of the population vaccinated. Using data sets from managed care organizations, CDC intends to continue retrospective studies of vaccine effectiveness to better determine how well vaccine prevents influenza or mitigates its severity in various populations. In the meantime, information on vaccine effectiveness is generally limited to small studies of primarily vaccinated populations. These studies have shown that when the vaccine generates a good antibody response to the circulating virus, influenza vaccine may prevent illness in approximately 70 to 90 percent of healthy persons under 65 years of age. However, vaccine effectiveness drops sharply for the elderly and people with chronic illness, who are considered most vulnerable to influenza-related illness and death. For example, studies have shown influenza vaccine may be about 30 to 70 percent effective in reducing hospitalization among the noninstitutionalized elderly population. Overall effectiveness in preventing influenza among the elderly has been even lower, often ranging from 30 to 40 percent. Approaches to improve the effectiveness of influenza vaccines include conducting research to develop alternative methods of administering existing vaccines and new vaccines such as weakened live virus vaccine or DNA vaccines that, in theory, may produce broader and longer-lasting protective immune responses. Antiviral drugs and vaccine against pneumonia are two additional measures that can help prevent or mitigate influenza-related illness and death until an influenza vaccine becomes available. However, both are expected to be in short supply during a pandemic, and increasing production capacity for antiviral drugs and vaccine in response to increased demand could take at least 6 to 9 months. Creating a stockpile of antiviral drugs is an option to mitigate shortages during a pandemic. However, HHS officials told us that additional analysis is needed to determine the feasibility and desirability of such an effort. One option to minimize shortages of pneumococcal vaccine during a pandemic is to immunize the population now against possible future infection. However, immunization rates for elderly and high-risk groups remain below established targets, and immunization recommendations have not been expanded to include healthy children and young adults because they are at low risk for pneumococcal pneumonia during nonpandemic periods. Antiviral drugs can be used against all strains of pandemic influenza and have immediate availability both as a prophylactic to prevent illness and as a treatment if administered within 48 hours of the onset of symptoms. Studies of these drugs have shown them to be as effective as vaccines in preventing influenza infection in healthy young adults if taken under the prescribed regimen, and, when used for treatment, to shorten the duration and severity of infection. Twelve manufacturers produce antiviral drugs approved by FDA for use against influenza in the United States. These drugs vary in both their costs and their benefits, as shown in table 4.
For example, the older and less expensive drugs amantadine and rimantadine have been approved for prophylaxis of all age groups against the influenza virus strains most likely to cause a pandemic. However, their side effects, particularly those of amantadine, include central nervous system disturbances, such as delirium or behavioral changes, that may preclude their use in certain populations. The newer and more expensive drugs, zanamivir and oseltamivir, have a lower incidence of side effects and are effective against a broader range of virus strains. However, as of August 2000 they had FDA approval only for treatment, not prevention. In addition, they have not been approved for use in younger age groups, and zanamivir is not recommended for certain other segments of the population. None of the antiviral drugs have been studied extensively for long-term use or in large populations. CDC historically has supported use of antiviral drugs during nonpandemic periods as an adjunct to vaccine to prevent influenza among high-risk populations in certain circumstances. Antiviral drugs may be used (1) when influenza vaccine is unavailable, (2) during the 2 to 6 weeks after inoculation until the vaccine becomes effective, and (3) for people who cannot tolerate the vaccine because of allergies or other factors. However, CDC cautions against the use of antiviral drugs in the face of the vaccine shortages expected for the 2000-01 influenza season. CDC states that even if a vaccine shortage develops, it does not support routine and widespread use of antiviral drugs to prevent influenza, because this is an untested and expensive strategy that could result in large numbers of persons experiencing adverse effects. While shortages of antiviral drugs have not been a problem in the past, HHS officials expect the amount produced will be well below demand during a pandemic. This assumption, supported by drug manufacturers, is based on the fact that current production levels of antiviral drugs are set in response to current demand, whereas demand in a pandemic is expected to increase significantly if vaccines are unavailable as a means to prevent the disease. Manufacturers told us that expanding supply to meet increased demand is possible to some extent but that the lead time required to produce at least one type of antiviral drug can be at least 6 to 9 months. Manufacturers say that knowing how much drug CDC expects them to produce for a pandemic would assist them in determining whether their existing surge capacity is sufficient, and the extent to which they would need to develop contingency plans to expand capacity even further. Both FDA and CDC started collecting data on the production capacity of antiviral drug manufacturers in May and June 2000, but data collection efforts remain incomplete. HHS has not developed contingency plans with manufacturers to expand production capacity or analyzed whether government funding to maintain ongoing manufacturer capacity is feasible or desirable. In the absence of federal decisions about drug availability and use, state officials are uncertain whether or to what extent they should include strategies that rely on antiviral drugs to prevent or treat infection until vaccine becomes available. HHS officials plan to convene an expert panel to determine how antiviral drugs should be used in the event of a pandemic or in the face of vaccine shortages. Creating a stockpile is another option to ensure availability of antiviral drugs for a pandemic.
HHS has not formally evaluated whether creating a stockpile to preclude shortages is warranted and feasible. CDC officials have noted several factors that must be addressed in deciding to create a stockpile. For example, officials need to determine whether to build or rent storage facilities and where to locate them, develop a distribution system, assess the feasibility of rotating stock given the shelf-life of the drug and current market capacity, and determine how to finance the stockpile. The recent creation of the National Pharmaceutical Stockpile to help prepare for a bioterrorist attack has provided experience in these areas. This program, administered by CDC and financed by a federal appropriation of $51 million in fiscal year 1999 and $52 million in fiscal year 2000, maintains a medical stockpile considered to be adequate to respond to a bioterrorist attack but lacks all the pharmaceuticals, supplies, and equipment that may be necessary to respond to an influenza pandemic. Under this program, the Department of Veterans Affairs, as CDC’s agent, purchases drugs, supplies, and equipment, which are stored as active inventory in vendor warehouses. In developing the National Pharmaceutical Stockpile, CDC relied in part on our recent review of two other federally maintained stockpiles to assess management oversight of items in the stockpile. Inoculation with pneumococcal vaccine, which helps protect against pneumococcal pneumonia, a type of pneumonia that frequently follows influenza infection, may help prevent a substantial number of influenza-related deaths. Depending on the severity with which the disease attacks different population groups, available vaccine supplies might be needed to help protect groups other than those typically considered at risk, such as young adults. Although national mortality statistics have directly attributed about 1,000 deaths per year to influenza during the last decade, CDC attributes at least 20,000 more deaths per year to secondary infections of influenza, such as pneumonia. As shown in table 5, the numbers of deaths over and above these annual estimates of influenza-related deaths—called excess deaths—have generally been even higher during pandemics, especially during the pandemic of 1918, when antibiotics and advanced medical care to treat secondary infections were unavailable. CDC officials generally attribute about one-third of the excess deaths each year to influenza-related pneumonia, and most of these deaths are attributed to a type of bacterial pneumonia that may be prevented with the pneumococcal vaccine. The exact number of deaths caused by pneumococcal pneumonia is unknown, but HHS reports that at least in some epidemics, the disease has been responsible for up to half of influenza-related deaths. Because pneumococcal vaccine provides immunity for at least 5 to 10 years, it can provide benefit during nonpandemic as well as pandemic years. CDC reports that during nonpandemic periods, the populations most at risk for hospitalization and death due to pneumococcal disease include approximately 35 million persons aged 65 or older and approximately 33 to 39 million persons of all ages with chronic illness. Therefore, CDC recommends that pneumococcal vaccine be administered to persons in these groups.
CDC officials expect shortages of pneumococcal vaccine during a pandemic because only about 7 to 9 million doses are currently produced each year, the vaccine production process takes about 8 to 9 months, and current overall immunization rates remain below target. CDC officials say that manufacturers produce vaccine according to the current demand for the product. Therefore, increasing the extent to which the population is currently immunized would help preclude shortages of vaccine during a pandemic, not only by increasing production capacity, but also by reducing the number of people who remain to be immunized. In 1995 we reviewed the efforts of HHS to improve pneumococcal vaccination rates for adults aged 65 and older. As part of its response, CDC and other HHS agencies developed the Adult Immunization Action Plan, which focused efforts on raising awareness of the importance of the vaccine among clinicians, public health professionals, and the public. Specific steps include encouraging (1) health care provider organizations to revise current immunization policies and include directives clinicians can use in their practices to increase immunization, particularly among high-risk groups, and (2) accrediting organizations to urge hospitals and other care facilities to adopt directives aimed at immunizing high-risk individuals. In addition, CDC has developed and disseminated brochures and other educational material for the public and health care providers that stress the health benefits of vaccination. Since CDC initiated these actions, immunization rates have increased, particularly for adults aged 65 and older. As of 1997, 43 percent of people aged 65 and older and 11 percent of younger at-risk populations have been immunized with pneumococcal vaccine. Preliminary data from 1999 indicate that the rate for those aged 65 and older has increased further to 54 percent. Despite this progress, rates remain below the HHS year 2000 goal of 60 percent for each of these noninstitutionalized populations. Moreover, the year 2010 goal for people aged 65 and older increases to 90 percent, well above the current goal. CDC officials cite the continued lack of awareness about the availability and importance of pneumococcal vaccine as the primary barrier to increasing immunization rates. Officials from all but one of the 11 states we contacted planned to expand existing programs to increase nonpandemic use of pneumococcal vaccines by such means as raising awareness among physicians and public health officials and educating the public. Shortages of pneumococcal vaccine could also be exacerbated by the fact that there may be a high need for it among people under age 65 as well as for the older population. For example, in the 1918 pandemic the influenza-related death rate for young adults was more than 3 times that for people over age 65, just the opposite of the situation in nonpandemic years, when the influenza-related death rates for those over age 65 were 8 to 15 times greater than those for younger people. Of the estimated 550,000 excess deaths for all age groups in the years 1918 and 1919, over 280,000 pneumonia deaths were reported in young adults, aged 20 to 39 years. Since 1980, those under 65 have generally accounted for less than 10 percent of the influenza-related excess deaths. However, CDC officials have estimated that in a future pandemic up to 50 percent of deaths may fall within the age group of 0 to 64 years.
CDC has not estimated the number of deaths that may be prevented with pneumococcal vaccine. According to CDC officials, current recommendations for pneumococcal vaccine are unlikely to be expanded to include healthy young adults because pandemic scenarios are not considered when setting immunization policy. Federal and state pandemic response plans are in various stages of completion and do not completely or consistently address the problems related to the purchase, distribution, and administration of supplies of vaccines and antiviral drugs during a pandemic. HHS has provided interim draft guidance to facilitate state plans, but final federal decisions necessary to mitigate the effects of potential shortages have not been made. Until such decisions are made, the timeliness and adequacy of response efforts may be compromised. The federal government developed the first national pandemic plan in 1978, after the threat of a pandemic swine flu in 1976 clearly demonstrated the need for advance planning to support a mass immunization and response effort within the United States. Lessons learned from that experience, which was the government’s first attempt at immunization of the entire U.S. population, included the need for the federal government to reach agreements with private and public sector entities responsible for the timely purchase, distribution, and administration of vaccines and drugs. More recent experience with vaccine shortages also demonstrated the need for federal guidance in distributing limited quantities of vaccines and drugs to priority groups within the population. In 1993 the federal government convened a panel of experts from the public and private sectors to review and revise the pandemic response plan. As of October 2000, HHS officials directing the planning effort had not set a date to complete and distribute a revised national plan. To foster state and local pandemic planning and preparedness, CDC first issued interim planning guidance in draft form to all states in 1997, outlining general federal and state planning responsibilities. As of September 2000, 28 states were actively preparing a pandemic plan, 10 states characterized their planning efforts as in the conceptual stage, and 1 state did not comment on the stage of planning efforts, according to a recent survey by the Council of State and Territorial Epidemiologists. The remaining 11 responding officials said their states were not engaged in pandemic planning. Beginning in 1999, HHS funded 9 states with up to $13,000 each to develop plans. An additional 19 states were developing plans using other federal and state resources. Officials from 32 states said that influenza plans will be integrated with existing state plans to respond to natural or man-made disasters, such as flood or bioterrorist attack. Although planning efforts for other emergencies can be used to some extent for pandemic response, additional planning is important to address the specific aspects of a pandemic. This includes developing plans to address the wide-scale emergency needs of an entire population, including mass distribution and administration of limited vaccines and drugs with an uncertain amount of available resources. State officials say that CDC’s financial and technical assistance has greatly helped in these planning efforts. In the most recent version of its planning guidance for states, CDC lists several key federal decisions related to vaccines and antiviral drugs that have not been made.
These decisions include determining the amount of vaccines and drugs that will be purchased at the federal level; the division of responsibility between the public and private sectors for the purchase, distribution, and administration of vaccines and drugs; and how population groups will be prioritized and targeted to receive limited supplies of vaccines and drugs. In each of these areas, until federal decisions are made, states will not be able to develop strategies consistent with federal action. HHS has indicated in its interim planning guidance that how vaccines and drugs will be purchased, distributed, and administered by the private and public sectors will change during a pandemic, but some decisions necessary to prepare for these expected changes have not been made. During a typical annual influenza response, influenza and pneumococcal vaccines are purchased through a combination of public and private sector funds. Vaccine and antiviral drug distribution is primarily handled directly by manufacturers through private vendors and pharmacies. About 90 percent of vaccines and antiviral drugs are administered or prescribed to the population on a first-come, first-served basis by private physicians, nurses, and other health care providers, with most states and counties participating to a relatively small extent through publicly funded programs. During a pandemic, however, HHS draft interim guidance indicates that many of these private sector responsibilities may be transferred to the public sector at the federal, state, or local level, and priority groups within the population should be established for receiving limited supplies of vaccines and drugs. For example, the draft interim guidance for state pandemic plans says that resources can be expected to be available from the national level for federal contracts to purchase influenza vaccine and at least some antiviral agents, but some state funding may be required. In addition, federal grants or reimbursement for public sector vaccine distribution and administration may be provided to states, but the draft interim guidance contains no recommendations on how the level and nature of such resources might differ in response to the severity of the pandemic. Professional organizations representing vaccine manufacturers and pharmacists have questioned the necessity of moving responsibility for distribution and administration to the public sector during a pandemic. According to these organizations, existing private systems are in place and can operate more smoothly in response to federal direction than an as-yet-to-be-defined public sector system. At least one professional organization has contacted CDC offering to assist the federal government in expanding capacity for private sector vaccine administration. HHS has not determined the extent to which federal funding will be made available or developed more guidance for states to use in planning how to use public and private sector resources to distribute and administer vaccines and antiviral drugs. In the absence of decisions regarding the extent of federal responsibility and investment in pandemic response, however, state officials are uncertain of how much state funding will be required and what level of state response can be supported. Two of these officials say that without more detail and commitment on federal assistance, they plan to respond to a pandemic using state resources alone.
State officials are particularly concerned that a national plan has not finalized recommendations for how population groups should be prioritized to receive vaccines and antiviral drugs. In its most recent (1999) interim draft guidance sent to states, HHS lists eight different population groups that should be considered in establishing priorities among groups for receiving vaccines and drugs during a pandemic. The list includes such groups as health care workers and public health personnel involved in the pandemic response, persons traditionally considered to be at increased risk of severe influenza illness and mortality, and preschool and school-aged children. The interim guidance states that recommendations on the relative priority of each group are still under study and will be based on a number of factors, including the need to maintain community pandemic response capability. Other factors include limiting mortality among high-risk groups, reducing mortality in the general population, and minimizing social disruption and economic losses. HHS officials say they are still committed to publishing recommendations on the relative priority for each population group. However, the recommendations need to be flexible to recognize the different situations that could emerge. For example, officials point out that the severity with which the pandemic attacks specific population groups would have to be taken into consideration in setting priorities. State officials acknowledge the need for flexibility in planning because many aspects of a pandemic cannot be known in advance. However, these officials say that the absence of more detail regarding how and when federal recommendations will be made leaves them uncertain about how to plan for the use of limited supplies of vaccine and drugs. For example, knowing the federal government’s recommendations under different conditions would allow states to better estimate the extent to which priority groups can be vaccinated, to develop strategies to target those groups, and to determine the number of additional personnel and locations that will be needed for vaccine and drug administration. Another concern, particularly for state officials, is that without federal decisions to establish priorities for which population groups should receive the limited quantities of vaccines and drugs, inconsistencies could arise both among states and between states and the federal government. Several state officials say such policy differences among states and between states and the federal government in the use and distribution of vaccines and antiviral drugs may contribute to public confusion and social disruption, as shown by recent experience. Specifically, in 1998, after 3 of 11 children who developed meningitis died, one state initiated a mass vaccination program for people between the ages of 2 and 22 using a strategy of shared public and private sector responsibility for administering the vaccine. Surrounding states that did not have an increase in reported cases did not initiate similar programs or recommend vaccination for everyone in this age group. The differences in state recommendations caused some residents of bordering states to seek immunization for their children by crossing state lines.
The intense media attention and demand for vaccinations, coupled with a perceived shortage of meningococcal vaccine, created substantial confusion in some communities as fearful parents overwhelmed private providers with phone calls and office visits, according to officials responsible for the vaccination program. While experts consider an influenza pandemic to be inevitable, no one knows when it will occur or how severe it will be. What is known is that traditional response strategies for obtaining, using, and distributing vaccines and drugs during annual influenza epidemics may be insufficient or inappropriate to control or minimize the effect of pandemic disease, particularly in its early stages, on the population and the economy. Although not much can be known about a pandemic viral strain until it appears, planning a response that relies on vaccines and drugs depends, at least in part, on knowing the amounts that can be produced and developing strategies for reaching various populations that might be at risk. Because influenza vaccine must be tailored specifically to the pandemic strain that appears, an effective response plan also depends, in part, on the ability to rapidly identify the strains that are newly infecting people and to produce influenza vaccine using alternative methods in the event existing ones cannot be used. Moreover, acting now to increase the extent to which vulnerable populations, particularly those aged 65 and older, receive pneumococcal vaccine can help protect them from the complications of influenza in the event of a pandemic. Despite recent gains in the use of vaccines, the rate of pneumococcal immunization among high-risk groups remains below established goals, indicating the need for HHS to maintain its efforts to raise awareness about the importance of this vaccine. Stronger federal leadership is needed to analyze alternative strategies to increase the availability and relative effect of vaccines and drugs among various populations. Because new strategies may replace familiar response patterns to address the unique aspects of a pandemic, advance planning is particularly important to obtain agreement on how the traditional roles and responsibilities of the public and private sector response effort are likely to change. Federal leadership, including development of a national plan that integrates strategies for the use of vaccines and antiviral drugs, is needed to address national issues as well as help harmonize the various public and private sector plans. To improve the nation’s ability to respond to the emergence of a pandemic influenza virus and help ensure an adequate and appropriate level of public protection, we recommend that the Secretary of Health and Human Services take the following actions. First, we recommend that the Secretary take steps to fill the knowledge gaps in the capability of the private and public sectors to produce, distribute, and administer vaccines and antiviral drugs to various population groups to control the spread and effect of a pandemic. 
Specifically, we recommend that HHS (1) explore and evaluate alternative methods to produce and distribute influenza vaccine and strategies to help quickly identify newly detected strains of the influenza virus; (2) identify the capability of all manufacturers to produce antiviral drugs and pneumococcal vaccines and their existing “surge capacity” to expand production as needed during a pandemic; and (3) if existing surge capacity is insufficient, work with manufacturers to determine the investment and time required to expand production capacity, or the feasibility of creating a stockpile against projected shortages. Second, we recommend that the Secretary establish a deadline for completing and publishing a federal response plan that will address (1) how priorities for receiving limited influenza and pneumococcal vaccines and antiviral drugs during a pandemic will be established among population groups and (2) how private and public sector responsibilities might change during a pandemic for the purchase, use, and distribution of influenza and pneumococcal vaccines and antiviral drugs. In commenting on our draft report, HHS agreed that the issues surrounding the production, purchase, and distribution of vaccines and antiviral drugs merit continued high priority. It discussed several initiatives under way or planned. HHS generally concurred with our recommendations. It also discussed several concerns. HHS concurred with our recommendation to improve estimates of manufacturers’ vaccine and antiviral production capacity and to develop strategies to ensure adequate production levels in the event of a pandemic. However, HHS commented that it believed the draft report inappropriately emphasized the development and use of antiviral drugs and pneumococcal vaccine over the use of pandemic influenza vaccine. HHS also stated that the wording of our recommendation in the draft report to fill knowledge gaps about vaccines and drugs placed undue and potentially misleading emphasis on the role of antiviral drugs and pneumococcal vaccines in pandemic influenza preparedness. We agree with HHS that influenza vaccine is the first line of defense against an influenza virus, but to the extent that it is in short supply, antiviral drugs and, to a lesser extent, pneumococcal vaccine become important interventions. Our recommendation was intended to include steps to enhance all three interventions, including the availability of influenza vaccine. We have expanded the recommendation to include resolving knowledge gaps surrounding influenza vaccine production and distribution. In a related comment, HHS stated that the draft did not convey the appropriate use of pneumococcal vaccine. HHS said that the availability of the vaccine will not be a major factor in the federal response plan for pandemic influenza. Rather, it stated that efforts should be directed toward increasing pneumococcal vaccination rates among high-risk groups before the health care delivery system is overwhelmed by a pandemic crisis. We agree that HHS’ strategy has merit and gave it greater prominence in the final report. In its general comments, HHS stated that the draft report did not address the full range of activities it considers essential to ensure prepandemic preparedness and an adequate pandemic response capability.
HHS cited as examples three important aspects of pandemic preparedness that were not addressed in the report: (1) a robust disease surveillance system, (2) the presence of community emergency preparedness protocols, and (3) good public health practices to minimize and control the spread of disease. We recognize that these factors are important aspects of pandemic preparedness and response capability. However, our work focused on the production and distribution of vaccines and drugs, which are also widely regarded as direct and critical interventions needed to help protect the population from an influenza pandemic. HHS concurred with our recommendation to establish a deadline to complete and publish a federal response plan for pandemic influenza and stated that it will keep the Congress informed of the proposed timetable and progress toward the milestones established. HHS also agreed that the plan needs to include key decisions such as those related to the private and public sector responsibilities for vaccine purchase and delivery. HHS said that it is working to create a flexible plan that will accommodate a wide variety of contingencies. HHS’ comments are reprinted in appendix I. It also provided technical comments, which we incorporated in the report as appropriate. As agreed with your offices, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Jeffrey Koplan, Director, Centers for Disease Control and Prevention; and other interested parties. We also will make copies available to others on request. This report was prepared by Frank Pasquier, Lacinda Baumgartner, Evan Stoll, and Cheryl Williams. If you or your staffs have any questions, please contact me at (202) 512-7119.
Public health experts have raised concerns about the ability of the nation's public health system to detect and respond to emerging infectious disease threats, such as pandemic influenza. Although vaccines are considered the first line of defense to prevent or reduce influenza-related illness and death, GAO found that they may be unavailable, in short supply, or ineffective for some portions of the population during the first wave of a pandemic. Federal and state influenza pandemic plans are in various stages of completion and do not completely or consistently address key issues surrounding the purchase, distribution, and administration of vaccines and antiviral drugs. Inconsistencies in state and federal policies could contribute to public confusion and weaken the effectiveness of the public health response.
The HUBZone program was established by the HUBZone Act of 1997 to stimulate economic development through increased employment and capital investment by providing federal contracting preferences to small businesses in economically distressed communities or HUBZone areas. The types of areas in which HUBZones may be located are defined by law and consist of the following: Qualified census tracts. A qualified census tract has the meaning given the term by Congress for the low-income-housing tax credit program. The list of qualified census tracts is maintained and updated by the Department of Housing and Urban Development (HUD). As currently defined, qualified census tracts have either 50 percent or more of their households with incomes below 60 percent of the area median gross income or have a poverty rate of at least 25 percent. The population of all census tracts that satisfy one or both of these criteria cannot exceed 20 percent of the area population. Qualified census tracts may be in metropolitan or nonmetropolitan areas. HUD designates qualified census tracts periodically as new decennial census data become available or as metropolitan area definitions change. Qualified nonmetropolitan counties. Qualified nonmetropolitan counties are those that, based on the most recent decennial census data, are not located in a metropolitan statistical area and in which 1. the median household income is less than 80 percent of the nonmetropolitan state median household income; 2. the unemployment rate is not less than 140 percent of the average unemployment rate for either the nation or the state (whichever is lower); or 3. a difficult development area is located. The definition of a difficult development area is similar to that of a qualified census tract in that it comes from the tax code’s provision for the low-income-housing tax credit program. For the low-income-housing tax credit program, difficult development areas can be located in both metropolitan and nonmetropolitan counties; however, for the HUBZone program, they can only be located in nonmetropolitan counties in Alaska, Hawaii, and the U.S. territories and possessions. Qualified Indian reservations. A HUBZone-qualified Indian reservation has the same meaning as the term Indian Country as defined in another federal statute, with some exceptions. These are all lands within the limits of any Indian reservation, all dependent Indian communities within U.S. borders, and all Indian allotments. In addition, portions of the state of Oklahoma qualify because they meet the Internal Revenue Service’s definition of “former Indian reservations in Oklahoma.” Redesignated areas. Redesignated areas are census tracts or nonmetropolitan counties that no longer meet the economic criteria but remain eligible until after the release of the 2010 decennial census data. Base closure areas. Areas within the external boundaries of former military bases that were closed by the Base Realignment and Closure Act (BRAC) qualify for HUBZone status for a 5-year period from the date of formal closure. In order for a firm to be certified to participate in the HUBZone program, it must meet the following criteria: the company must be small by SBA size standards; the company must be at least 51 percent owned and controlled by U.S. 
citizens; the company’s principal office—the location where the greatest number of employees perform their work—must be located in a HUBZone; and at least 35 percent of the company’s full-time (or full-time equivalent) employees must reside in a HUBZone. As of February 2008, 12,986 certified firms participated in the HUBZone program (see fig. 1). Over 4,200 HUBZone firms obtained approximately $8.1 billion in federal contracts in fiscal year 2007. A certified HUBZone firm is eligible for federal contracting benefits, including “sole source” contracts, set-aside contracts, and a price evaluation preference. A contracting officer can award a sole source contract to a HUBZone firm if, among other things, the officer does not have a reasonable expectation that two or more qualified HUBZone firms will submit offers and the anticipated award price of the proposed contract, including options, will not exceed $5.5 million for manufacturing contracts or $3.5 million for all other contracts. If a contracting officer has a reasonable expectation that at least two qualified HUBZone firms will submit offers and an award can be made at a fair market price, the contract shall be awarded on the basis of competition restricted to qualified HUBZone firms. Contracting officers also can award a contract to a HUBZone firm through “full and open competition.” In these circumstances, HUBZone firms are given a price evaluation preference of up to 10 percent if the apparent successful offering firm is not a small business. That is, the price offered by a qualified HUBZone firm shall be deemed as lower than the price offered by another firm (other than another small business) if the price is not more than 10 percent higher than the price offered by the firm with the lowest offer. As of October 1, 2000, all federal agencies were required to meet the HUBZone program’s contracting goals. Currently, the annual federal contracting goal for HUBZone small businesses is 3 percent of all prime contract awards—contracts awarded directly by an agency. In the HUBZone Act of 1997, Congress increased the overall federal contracting goal for small businesses from 20 percent to 23 percent to address concerns that the HUBZone contracting requirement would reduce federal contracts for non-HUBZone small businesses. Each year, SBA issues a small business goaling report that documents each department’s achievement of small business contracting goals. SBA administers the HUBZone program, and the HUBZone program office at SBA headquarters is responsible for certifying firms, publishing a list of HUBZone-certified firms, monitoring certified firms to ensure continuing eligibility, and decertifying firms that no longer meet eligibility requirements. A HUBZone liaison at each of SBA’s 68 district offices is responsible for conducting program examinations—investigations that verify the accuracy of information supplied by firms during the certification process, as well as current eligibility status. HUBZone liaisons also handle program marketing and outreach to the economic development and small business communities. Federal agencies are responsible for trying to meet the HUBZone contracting goal and for enforcing the contracts awarded to HUBZone firms. Each federal agency has an Office of Small and Disadvantaged Business Utilization (OSDBU), or an equivalent office, that helps the agency employ special contracting programs and monitor the agency’s overall small business and special contracting goals. 
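To make the 10 percent price evaluation preference concrete, the following sketch shows one way the rule described above could be applied when ranking offers. It is an illustrative example only, not SBA’s or any contracting system’s actual implementation; the Offer record and the select_winner helper are assumptions introduced here for illustration.

from dataclasses import dataclass

@dataclass
class Offer:
    firm: str
    price: float
    is_hubzone: bool = False
    is_small_business: bool = False

def select_winner(offers, preference=0.10):
    """Apply the HUBZone price evaluation preference when ranking offers.

    If the apparent low offer comes from other than a small business, a
    qualified HUBZone firm whose price does not exceed that low offer by
    more than the preference (up to 10 percent) is deemed to have the
    lower price.
    """
    low = min(offers, key=lambda o: o.price)
    if not low.is_small_business:
        eligible = [o for o in offers
                    if o.is_hubzone and o.price <= low.price * (1 + preference)]
        if eligible:
            return min(eligible, key=lambda o: o.price)
    return low

# A HUBZone firm priced 8 percent above a large business's low offer is
# deemed the low offeror under the preference.
offers = [Offer("LargeCo", 100000.0),
          Offer("HUBZoneCo", 108000.0, is_hubzone=True, is_small_business=True)]
print(select_winner(offers).firm)  # HUBZoneCo

In this sketch, the HUBZone firm's $108,000 offer is treated as lower than the $100,000 offer because it is within 10 percent of it; a HUBZone offer above $110,000 would not receive the preference.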
In addition to the HUBZone program, SBA has other contracting assistance programs. The 8(a) program is a business development program for firms owned by citizens who are socially and economically disadvantaged. SBA provides technical assistance, such as business counseling, to these firms. While the 8(a) program offers a broad range of assistance to socially and economically disadvantaged firms, the Small Disadvantaged Business (SDB) program is intended only to convey benefits in federal procurement to disadvantaged businesses. All 8(a) firms automatically qualify for SDB certification, and federal agencies are subject to an annual SDB contracting goal of 5 percent of all federal contracting dollars. Small businesses also can be certified as service-disabled veteran-owned, and the contracting goal for these firms is 3 percent of all federal contracting dollars. SBA relies on federal law to identify qualified HUBZone areas, but its HUBZone map is inaccurate and the economic characteristics of HUBZone areas vary widely. The map that SBA uses to publicize HUBZone areas contains ineligible areas and has not been updated to include eligible areas. As a result, ineligible small businesses have participated in the program, and eligible businesses have not been able to participate. A series of statutory changes has resulted in an increase in the number and types of HUBZone areas. HUBZone program officials noted that such an expansion could diffuse (or limit) the economic benefits of the program. We found that different types of HUBZone areas varied in the degree to which they could be characterized as economically distressed (as measured by indicators such as poverty and unemployment rates). In recent years, amendments to the HUBZone Act and other statutes have increased the number and type of HUBZone areas. The original HUBZone Act of 1997 defined a HUBZone as any area within a qualified census tract, a qualified nonmetropolitan county, or lands within the boundaries of a federally recognized Indian reservation. Qualified census tracts were defined as having the meaning given the term in the tax code at the time—areas in which 50 percent or more of the households had incomes below 60 percent of the area median gross income. Qualified nonmetropolitan areas were counties with low median household income or high levels of unemployment. However, subsequent legislation revised the definitions of the original categories and expanded the HUBZone definition to include new types of qualified areas (see fig. 2). A 2000 statute (1) defined Indian reservation to include lands covered by the Bureau of Indian Affairs’ phrase Indian Country and (2) allowed all lands within the jurisdictional areas of an Oklahoma Indian tribe to be eligible for the program. The 2000 statute also amended the HUBZone area definition to allow census tracts or nonmetropolitan counties that ceased to be qualified to remain qualified for a further 3-year period as “redesignated areas.” Also in 2000, Congress changed the definition of a qualified census tract in the tax code by adding a poverty rate criterion; that is, a qualified census tract could be either an area of low income or high poverty. A 2004 statute revised the definition of redesignated areas to permit them to remain qualified until the release date of the 2010 census data.
In that same statute, Congress determined that areas within the external boundaries of former military bases closed by BRAC would qualify for HUBZone status for a 5-year period from the date of formal closure. In addition, Congress revised the definition of qualified nonmetropolitan counties to permit eligibility based on a county’s unemployment rate relative to either the state or the national unemployment rate, whichever was lower. Finally, in 2005, Congress expanded the definition of qualified nonmetropolitan county to include “difficult development areas” in Alaska, Hawaii, and the U.S. territories. These areas have high construction, land, and utility costs relative to area median income. Subsequent to the statutory changes, the number of HUBZone areas grew from 7,895 in calendar year 1999 to 14,364 in 2006. As shown in figure 2, the December 15, 2000, change to the definition of a qualified census tract—a provision of the low-income-housing tax credit program—resulted in the biggest increase in the number of qualified HUBZone areas. SBA’s data show that, as of 2006, there were 12,218 qualified census tracts, 1,301 nonmetropolitan counties, 651 Indian Country areas, 82 BRAC areas, and 112 difficult development areas (see fig. 3). SBA program staff have no discretion in identifying HUBZone areas because the areas are defined by federal statute, but SBA has not always designated these areas correctly on the SBA Web map. To identify and map HUBZone areas, SBA relies on a mapping contractor and data from other executive agencies (see fig. 4). When a HUBZone designation changes or more current data become available, SBA alerts the contractor. The contractor retrieves the data from the designated federal agencies, such as HUD, the Bureau of Labor Statistics (BLS), and the Census Bureau. Most HUBZone area designation data are publicly available (and widely used by researchers and the general public), with the exception of the Indian Country designation. Once the changes to the HUBZone areas are mapped, the contractor sends the maps back to SBA. SBA performs a series of checks to ensure that the HUBZone areas are mapped correctly and then the contractor places the maps and associated HUBZone area information on SBA’s Web site. Essentially, the map is SBA’s primary interface with small businesses to determine whether they are located in a HUBZone and can apply for HUBZone certification. SBA officials stated that they primarily rely on firms to identify HUBZone areas that have been misidentified or incorrectly mapped. Based on client input, SBA estimated that from 1 percent to 2 percent of firms searching the map as part of the application process report miscodings. SBA’s mapping contractor researches these claims each month. During the course of our review, we identified two problems with SBA’s HUBZone map. First, the map includes some areas that do not meet the statutory definition of a HUBZone area. As noted previously, counties containing difficult development areas are only eligible in their entirety for the HUBZone program if they are not located in a metropolitan statistical area. However, we found that SBA’s HUBZone map includes 50 metropolitan counties as difficult development areas that do not meet this or any other criterion for inclusion as a HUBZone area. Nearly all of these incorrectly designated HUBZone areas are in Puerto Rico.
When we raised this issue with SBA officials, they told us that in December 2005 they had provided the agency’s mapping contractor with a definition of difficult development areas that was consistent with the statutory language. However, according to SBA, the mapping contractor failed to properly follow SBA’s guidance when adding difficult development areas to the map in 2006. According to SBA officials, the agency is in the process of acquiring additional mapping services and will immediately re-evaluate all difficult development areas once that occurs. As a result of these errors, ineligible firms have obtained HUBZone certification and received federal contracts. As of December 2007, there were 344 certified HUBZone firms located in ineligible areas in these 50 counties. Further, from October 2006 through March 2008, federal agencies obligated about $5 million through HUBZone set-aside contracts to 12 firms located in these ineligible areas. Second, while SBA’s policy is to have its contractor update the HUBZone map as needed, the map has not been updated since August 2006. Since that time, additional data such as unemployment rates from BLS have become available. According to SBA officials, the update was delayed because SBA awarded the contract for management of the HUBZone system to a new prime contractor, which is still in the process of establishing a relationship with the current mapping subcontractor. Although SBA officials told us they are working to have the contractor update the mapping system, no subcontract was in place as of May 2008. While an analysis of the 2008 list of qualified census tracts showed that the number of tracts had not changed since the map was last updated, our analysis of 2007 BLS unemployment data indicated that 27 additional nonmetropolitan counties should have been identified on the map. Because firms are not likely to receive information on the HUBZone status of areas from other sources, firms in those 27 counties would have believed from the map that they were ineligible to participate in the program and could not benefit from the contracting incentives that certification provides. Having an out-of-date map led SBA, in one instance, to mistakenly identify a HUBZone area. When asked by a congressman to research whether Jackson County, Michigan, qualified in its entirety as a HUBZone area, an SBA official used a manual process to determine the county’s eligibility because the map was out of date. The official mistakenly concluded that the county was eligible. After that determination, the congressman publicized Jackson County’s status, but SBA, after further review, had to rescind its HUBZone status 1 week later. Had the information been processed under the standard mapping procedures, the mapping system software would have identified the area as a metropolitan county and noted that it did not meet the criteria to be a HUBZone, as only nonmetropolitan counties qualify in their entirety. In this case, the lack of regular updates led program officials to use a manual process that resulted in an incorrect determination. Qualified HUBZone areas experience a range of economic conditions. HUBZone program officials told us that the growth in the number of HUBZone areas is a concern for two reasons. First, they stated that expansion can diffuse the impact or potential impact of the program on existing HUBZone areas.
Specifically, they noted that as the program becomes less targeted and contracting dollars more dispersed, the program could have less of an impact on individual HUBZone areas. We recognize that establishing new HUBZone areas can potentially provide economic benefits for these areas by helping them attract firms that make investments and employ HUBZone residents. However, diffusion—less targeting to areas of greatest economic distress—could occur with such an expansion. Based on 2000 census data, about 69 million people (out of 280 million nationwide) lived in the more than 14,000 HUBZones. Considering that HUBZone firms are encouraged to locate in HUBZone areas and compete for federal contracts (thus facilitating employment and investment growth), the broad extent of eligible areas can lessen the very competitive advantage that businesses may rely on to thrive in economically distressed communities. Second, while HUBZone program officials thought that the original designations resulted in HUBZone areas that were economically distressed, they questioned whether some of the later categories—such as redesignated and difficult development areas—met the definition of economic distress. To determine the economic characteristics of HUBZones, we compared different types of HUBZone areas and analyzed various indicators associated with economic distress. We found a marked difference in the economic characteristics of two types of HUBZone areas: (1) census tracts and nonmetropolitan counties that continue to meet the eligibility criteria and (2) the redesignated areas that do not meet the eligibility criteria but remain statutorily eligible until the release of the 2010 census data. For example, approximately 60 percent of metropolitan census tracts (excluding redesignated tracts) had a poverty rate of 30 percent or more, while approximately 4 percent of redesignated metropolitan census tracts had a poverty rate of 30 percent or more (see fig. 5). In addition, about 75 percent of metropolitan census tracts (excluding redesignated tracts) had a median household income that was less than 60 percent of the metropolitan area median household income; in contrast, about 10 percent of redesignated metropolitan census tracts met these criteria. (For information on the economic characteristics of nonmetropolitan census tracts, see app. III.) Similarly, we found that about 46 percent of nonmetropolitan counties (excluding redesignated counties) had a poverty rate of 20 percent or more, while 21 percent of redesignated nonmetropolitan counties had a poverty rate of 20 percent or more (see fig. 6). Also, about 54 percent of nonmetropolitan counties (excluding redesignated counties) had a median housing value that was less than 80 percent of the state nonmetropolitan median housing value; in contrast, about 32 percent of redesignated counties met these criteria. Overall, difficult development areas appear to be less economically distressed than metropolitan census tracts and nonmetropolitan counties (see fig. 7). For example, 6 of 28 difficult development areas (about 21 percent) had poverty rates of 20 percent or more. In contrast, about 93 percent of metropolitan census tracts (excluding redesignated areas) and about 46 percent of nonmetropolitan counties (excluding redesignated areas) met this criterion. See appendix III for additional details on the economic characteristics of Indian Country areas and additional analyses illustrating the economic diversity among qualified HUBZone areas.
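The kind of indicator comparison described above can be illustrated with a short sketch that computes, for each category of HUBZone area, the share of areas meeting a poverty-rate threshold. The records, field names, and threshold below are hypothetical placeholders rather than the report’s data; the actual analysis drew on 2000 census data.

from collections import defaultdict

def share_meeting_threshold(areas, threshold=0.30):
    """For each area category, return the share with a poverty rate at or above the threshold."""
    totals = defaultdict(int)
    meeting = defaultdict(int)
    for area in areas:
        totals[area["category"]] += 1
        if area["poverty_rate"] >= threshold:
            meeting[area["category"]] += 1
    return {category: meeting[category] / totals[category] for category in totals}

# Hypothetical records, for illustration only.
sample = [
    {"category": "qualified_census_tract", "poverty_rate": 0.34},
    {"category": "qualified_census_tract", "poverty_rate": 0.27},
    {"category": "redesignated_tract", "poverty_rate": 0.12},
]
print(share_meeting_threshold(sample))  # {'qualified_census_tract': 0.5, 'redesignated_tract': 0.0}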
In expanding the types of HUBZone areas, the definition of economic distress has been broadened to include measures that were not in place in the initial statute. For example, one new type of HUBZone area—difficult development areas—consists of areas with high construction, land, and utility costs relative to area income, and such areas could include neighborhoods not normally considered economically distressed. As a result, the expanded HUBZone criteria now allow for HUBZone areas that are less economically distressed than the areas that were initially designated. Such an expansion could diffuse the benefits to be derived from steering businesses to economically distressed areas. The policies and procedures upon which SBA relies to certify and monitor firms provide limited assurance that only eligible firms participate in the HUBZone program. Internal control standards for federal agencies state that agencies should document and verify information that they collect on their programs. However, SBA obtains supporting documentation from firms in limited instances and rarely conducts site visits to verify the information that firms provide in their initial application and during periodic recertifications—a process through which SBA can monitor firms’ continued eligibility. In addition, SBA does not follow its own policy of recertifying all firms every 3 years—which can lengthen the time a firm goes unmonitored and its eligibility is unreviewed—and has a backlog of more than 4,600 firms to recertify. Furthermore, SBA largely has not met its informal goal of 60 days for removing firms deemed ineligible from its list of certified firms. We found that of the more than 3,600 firms that were proposed for decertification in fiscal years 2006 and 2007, more than 1,400 were not processed within 60 days. As a result, there is an increased risk that ineligible firms may participate in the program and have opportunities to receive federal contracts based on HUBZone certification. To certify and recertify HUBZone firms, SBA relies on data that firms enter in its online application system; however, the agency largely does not verify the self-reported information. The certification and recertification processes are similar. Firms apply for HUBZone certification using an online application system, which employs automated logic steps to screen out ineligible firms based on the information entered on the application. For example, firms enter information such as their total number of employees and number of employees that reside in a HUBZone. Based on this information, the system then calculates whether the number of employees residing in a HUBZone equals 35 percent or more of total employees, the required level for HUBZone eligibility. HUBZone program staff review the applications to determine if more information is required. While SBA’s policy states that supporting documentation normally is not required, it notes that agency staff may request and consider such documentation, as necessary. No specific guidance or criteria are provided to program staff for this purpose; rather, the policy allows staff to determine what circumstances warrant a request for supporting documentation. In determining whether additional information is required, HUBZone program officials stated that they generally consult sources such as firms’ or state governments’ Web sites that contain information on firms incorporated in the state. 
HUBZone program officials also stated that they can check information such as a firm's address using the Central Contractor Registration (CCR) database. According to HUBZone program officials, they are in the process of obtaining Dun and Bradstreet's company information (such as principal address, number of employees, and revenue) to cross-check some application data. While these data sources are used as a cross-check, the data they contain are also self-reported. The number of applications submitted by firms grew by more than 40 percent from fiscal year 2000 to fiscal year 2007, and the application approval rate varied. For example, as shown in table 1, 1,527 applications were submitted in fiscal year 2000, and SBA approved 1,510 applications (about 99 percent). In fiscal year 2007, 2,204 applications were submitted, and SBA approved 1,721 (about 78 percent). Of the 2,204 applications submitted in fiscal year 2007, 383 (about 17 percent) were withdrawn. Either the firms themselves or SBA staff can withdraw an application if it is believed the firm will not meet program requirements. HUBZone program staff noted that they withdraw applications for firms that could, if they made some minor modifications, be eligible. Otherwise, firms would have to wait 1 year before they could reapply. The remaining 100 applications (about 5 percent) submitted in fiscal year 2007 were declined because the firms did not meet the HUBZone eligibility requirements. See appendix IV for details on the characteristics of current HUBZone firms. To ensure the continued eligibility of certified HUBZone firms, SBA requires firms to resubmit an application. That is, to be recertified, firms re-enter information in the online application system, and HUBZone program officials review it. In 2004, SBA changed the recertification period from annual to every 3 years. According to HUBZone program officials, they generally limit their reviews to comparing resubmitted information to the original application. The officials added that significant changes from the initial application can trigger a request for additional information or documentation. If concerns about eligibility are raised during the recertification process, SBA will propose decertification or removal from the list of eligible HUBZone firms. Firms that are proposed for decertification can challenge that proposed outcome through a due-process mechanism. SBA ultimately decertifies firms that do not challenge the proposed decertification and those that cannot provide additional evidence that they continue to meet the eligibility requirements. For example, as shown in table 2, SBA began 3,278 recertifications in fiscal year 2006 and had completed decertification of 1,699 firms as of January 22, 2008. Although SBA does not systematically track the reasons why firms are decertified, HUBZone program officials noted that many firms do not respond to SBA's request for updated information. We discuss this issue and others related to the timeliness of the recertification and decertification processes later in this report. We found that SBA verifies the information it receives from firms in limited instances. In accordance with SBA's policy, HUBZone program staff request documentation from firms and conduct site visits when they believe it is warranted.
The HUBZone Certification Tracking System does not readily provide information on the extent to which SBA requests documentation from firms or conducts site visits; therefore, we conducted reviews of applications and recertifications. Specifically, we reviewed the 125 applications and 15 recertifications submitted or begun in September 2007. For the applications submitted in September 2007, HUBZone program staff requested additional information but not supporting documentation for 10 (8 percent) of the applications; requested supporting documentation for 45 (36 percent) of the applications; and conducted one site visit. After reviewing supporting documentation for the 45 applications, SBA ultimately approved 19 (about 42 percent). Of the remaining 26 applications, 21 (about 47 percent of the 45 applications) were withdrawn by either SBA or the firm, and 5 (about 11 percent of the 45 applications) were denied by SBA. For the 15 firms that SBA began recertifying in September 2007, HUBZone program staff requested information and documentation from 2 firms and did not conduct any site visits. In the instances when SBA approved an application without choosing to request additional information or documentation (about 50 percent of our application sample), HUBZone program staff generally recorded in the HUBZone system that their determination was based on the information in the application and that SBA was relying on the firm’s certification that all information was true and correct. In requesting additional information, HUBZone staff asked such questions as the approximate number of employees and type of work performed at each of the firm’s locations. When requesting supporting documentation, HUBZone staff requested items such as copies of driver’s licenses or voter’s registration cards for the employees that were HUBZone residents and a rental/lease agreement or deed of trust for the principal office. Internal control standards for federal agencies and programs require that agencies collect and maintain documentation and verify information to support their programs. The documentation also should provide evidence of accurate and appropriate controls for approvals, authorizations, and verifications. For example, in addition to automated edits and checks, conducting site visits to physically verify information provided by firms can help control the accuracy and completeness of transactions or other events. According to HUBZone program officials, they did not more routinely verify the information because they generally relied on their automated processes and status protest process. For instance, they said they did not request documentation to support each firm’s application because the application system employs automated logic steps to screen out ineligible firms. For example, as previously noted, the application system calculates the percentage of a firm’s employees that reside in a HUBZone and screens out firms that do not meet the 35 percent requirement. But the automated application system would not necessarily screen out applicants that submit false information to obtain a HUBZone certification. HUBZone program officials also stated that it is not necessary to conduct site visits of HUBZone firms because firms self-police the program through the HUBZone status protest process. However, relatively few protests have occurred in recent years. 
In addition, officials from SBA's HUBZone office did not identify a reliable mechanism through which HUBZone firms could obtain the information needed to support a status protest. For example, it is unclear how a firm in one state would know enough about a firm in another state, such as its principal office location or employment of HUBZone residents, to question its qualified HUBZone status. Rather than obtaining supporting documentation more regularly during certification and recertification, SBA consistently requests such documentation only when conducting program examinations of a small percentage of firms. The 1997 statute that created the HUBZone program authorized SBA to conduct program examinations of HUBZone firms. Since fiscal year 2004, SBA's policy has been to conduct program examinations on 5 percent of firms each year. Over the years, SBA has developed a standard process for conducting these examinations. SBA uses three selection factors to determine which firms will be examined each year. After firms have been selected for a program examination, SBA field staff request documentation from them to support their continued eligibility for the program. For instance, they request documents such as payroll records to evaluate compliance with the requirement that 35 percent or more of employees reside in a HUBZone and documents such as organization charts and lease agreements to verify that the firm's principal office is located in a HUBZone. After reviewing this documentation, the field staff recommend to SBA headquarters whether the firm should remain in the program. As shown in table 3, in fiscal years 2004 through 2006 nearly two-thirds of firms SBA examined were decertified, and in fiscal year 2007, 430 of 715 firms (about 60 percent) were decertified or proposed for decertification. The number of firms decertified includes firms that the agency determined to be ineligible and decertified, as well as firms that requested to be decertified. Because SBA limits its program examinations to 5 percent of firms each year, firms can be in the program for years without being examined. For example, we found that 2,637 of the 3,348 firms (approximately 79 percent) that had been in the program for 6 years or more had not been examined. In addition to performing program examinations on a limited number of firms, HUBZone program officials rarely conduct site visits during program examinations to verify a firm's information. When reviewing the 11 program examinations that began in September 2007, we found that SBA did not conduct any site visits to verify the documentation provided. As a result of SBA's limited application of internal controls when certifying and monitoring HUBZone firms, the agency has limited assurance that only eligible firms participate in the program. By not obtaining documentation and conducting site visits on a more routine basis during the certification process, SBA cannot be sure that only eligible firms are part of the program. While SBA's examination process involves a more extensive review of documentation, it cannot be relied upon to ensure that only eligible firms participate in the program because it involves only 5 percent of firms in any given year. As previously noted, since 2004, SBA's policies have required the agency to recertify all HUBZone firms every 3 years. Recertification presents another opportunity for SBA to review information from firms and thus help monitor program activity.
However, SBA has failed to recertify 4,655 of the 11,370 firms (more than 40 percent) that have been in the program for more than 3 years. Of the 4,655 firms that should have been recertified, 689 have been in the program for more than 6 years. SBA officials stated that the agency lacked sufficient staff to comply with its recertification policy. According to SBA officials, staffing levels have been relatively low in recent years. In fiscal year 2002, the HUBZone program office, which is located in SBA headquarters in Washington, D.C., had 12 full-time equivalent staff. By fiscal year 2006, the number had dropped to 8 and remained at that level as of March 2008. Of the 8, 3 conduct recertifications on a part-time basis. SBA hired a contractor in December 2007 to help conduct recertifications, using the same process that SBA staff currently use. According to the contract, SBA estimates that the contractor will conduct 3,000 recertifications in fiscal year 2008; in subsequent years, SBA has the option to direct the contractor to conduct, on average, 2,450 recertifications annually for the next 4 years. Although SBA has contracted for these additional resources, the agency lacks specific time frames for eliminating the backlog. As a result of the backlog, the periods during which some firms go unmonitored and are not reviewed for eligibility are longer than SBA policy allows, increasing the risk that ineligible firms may be participating in the program. While SBA policies for the HUBZone program include procedures for certifications, recertifications, and program examinations, they do not specify a time frame for processing decertifications—the determinations, made after recertification reviews or program examinations, that firms are no longer eligible to participate in the HUBZone program. If SBA suspects that a firm no longer meets standards or fails to respond to notification of a recertification or program examination, SBA makes a determination and, if the firm is found ineligible, removes it from the agency's list of certified HUBZone firms. Although SBA does not have written guidance for the decertification time frame, the HUBZone program office negotiated an informal (unwritten) goal of 60 days with the SBA Inspector General (IG) in 2006. In recent years, SBA ultimately decertified the vast majority of firms proposed for decertification but, as shown in table 4, has not consistently met its 60-day goal. From fiscal years 2004 through 2007, SBA failed to resolve proposed decertifications within its goal of 60 days for more than 3,200 firms. However, SBA's timeliness has improved. For example, in 2006, SBA did not resolve proposed decertifications in a timely manner for more than 1,000 firms (about 44 percent). In 2007, over 400 (or about 33 percent) were not resolved in a timely manner. SBA staff acknowledged that lags in processing decertifications were problematic and attributed them to limited staffing. SBA plans to use its contract staff to address this problem after the backlog of recertifications is eliminated. In addition, we and the SBA Inspector General found that SBA does not routinely track the reasons why firms are decertified. According to SBA officials, a planned upgrade to the HUBZone data system will allow SBA to track this information.
While SBA does not currently track the specific reasons why firms are decertified, our analysis of HUBZone system data shows that firms were primarily decertified because they either did not submit the recertification form or did not respond to SBA's notification. According to HUBZone officials, firms may fail to respond because they are no longer in business or are no longer interested in participating in the program. But firms also may not be responding because they no longer meet the eligibility requirements. Tracking the various reasons why firms are decertified could help SBA take appropriate action against firms that misrepresent their HUBZone eligibility status. While we were unable to determine how many firms were awarded HUBZone contracts after they were proposed for decertification, our analysis showed that 90 of the firms proposed for decertification in fiscal years 2004 through 2007 received HUBZone set-aside dollars after being decertified. However, some of these firms may have been awarded the contracts before they were decertified. Because SBA generally has not met its 60-day goal, lags in the processing of decertifications have increased the risk that ineligible firms participate in the program. SBA has taken limited steps to assess the effectiveness of the HUBZone program. While SBA has a few performance measures in place that provide some data on program outputs, such as the number of certifications and examinations, the measures do not directly link to the program's mission. SBA has plans for assessing the program's effectiveness but has not devoted resources to implement such plans. Although Congress's goal is for agencies to award 3 percent of their annual contracting dollars to qualifying firms located in HUBZones, most federal agencies did not meet the goal for fiscal year 2006—the total for federal agencies reached approximately 2 percent. Factors such as conflicting guidance on how to consider the various small business programs when awarding contracts and a lack of HUBZone firms with the necessary expertise may have affected the ability of federal agencies to meet their HUBZone goals. While SBA has some measures in place to assess the performance of the HUBZone program, the agency has not implemented its plans to conduct an evaluation of the program's benefits. Under the Government Performance and Results Act (GPRA) of 1993, federal agencies are required to identify results-oriented goals and measure performance toward the achievement of those goals. We have previously reported on the attributes of effective performance measures. We noted that for performance measures to be useful in assessing program performance, they should be linked or aligned with program goals and cover the activities that an organization is expected to perform to support the intent of the program. We reviewed SBA's performance measures for the HUBZone program and found that although the measures related to the core activity of the program (providing federal contracting assistance), they were not directly linked to the program's mission of stimulating economic development and creating jobs in economically distressed communities. According to SBA's fiscal year 2007 Annual Performance Report, the three performance measures were the number of small businesses assisted (which SBA defines as the number of applications approved and the number of recertifications processed), the annual value of federal contracts awarded to HUBZone firms, and the number of program examinations completed.
The three measures provide some data on program activity, such as the number of certifications and program examinations and contract dollars awarded to HUBZone firms. However, they do not directly measure the program's effect on firms (such as growth in employment or changes in capital investment) or directly measure the program's effect on the communities in which the firms are located (for instance, changes in median household income or poverty levels). While SBA's performance measures for the HUBZone program do not link directly to the program's mission, the agency has made attempts to assess the effect of the program on firms. In fiscal years 2005 and 2006, SBA conducted surveys of HUBZone firms. According to SBA data on the surveys, HUBZone firms responding to the 2005 survey reported they had hired a total of 11,461 employees as a result of their HUBZone certification, and HUBZone firms responding to the 2006 survey reported they had hired a total of 12,826 employees (see table 5). Based on the responses to the 2005 survey, the total capital investment increase in HUBZone firms as a result of firm certification was approximately $523.8 million as of August 31, 2005. As of September 12, 2006, the total capital investment increase based on firms responding to the 2006 survey was approximately $372.6 million. SBA did not conduct this survey in fiscal year 2007, but officials stated that they planned to conduct a similar survey during fiscal year 2008. However, the survey results have several limitations. For instance, the 2005 and 2006 surveys appear to have had approximate response rates of 33 percent and 27 percent, respectively, which may increase the risk that survey results are not representative of all HUBZone firms. It also is unclear whether the survey results were reliable because SBA did not provide detailed guidance on how to define terms such as capital investment, which may have led to inconsistent responses. Finally, while the surveys measured increased employment and capital investment by firms—which provided limited assessment of, and could be linked to, the program's effect on individual firms—they did not provide data that showed the effect of the program on the communities in which the firms were located. Since the purpose of the HUBZone program is to stimulate economic development in economically distressed communities, useful performance measures should be linked to this purpose. Similarly, the Office of Management and Budget (OMB) noted in its 2005 Program Assessment Rating Tool (PART) that SBA needed to develop baseline measures for some of its HUBZone performance measures and encouraged SBA to focus on more outcome-oriented measures that more effectively evaluate the results of the program. Although OMB gave the HUBZone program an assessment rating of "moderately effective," it stated that SBA had limited data on, and had conducted limited assessments of, the program's effect. The assessment also emphasized the importance of systematic evaluation of the program as a basis for programmatic improvement. The PART assessment also documented plans that SBA had to conduct an analysis of the economic impact of the HUBZone program on a community-by-community basis using data from the 2000 and 2010 decennial census. SBA stated its intent to assess the program's effect in individual communities by comparing changes in socioeconomic data over time.
Variables that the program office planned to consider included median household income, average educational levels, and residential/commercial real estate values. Additionally, in a mandated 2002 report to Congress, SBA identified potential measures to more effectively assess the HUBZone program. These measures included assessing full-time jobs created in HUBZone areas and the larger areas of which they were a part, the amount of investment-related expenditures in HUBZone areas and the larger areas of which they were a part, and changes in construction permits and home loans in HUBZone areas. While SBA has recognized the need to assess the results of the HUBZone program, SBA officials indicated that the agency has not devoted resources to implement either of these strategies for assessing the results of the program. Yet by not evaluating the HUBZone program's benefits, SBA lacks key information that could help it better manage the program and inform Congress of its results. We also conducted site visits to four HUBZone areas (Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California) to better understand to what extent stakeholders perceived that the HUBZone program generated benefits. For all four HUBZone areas, the perceived benefits of the program varied, with some firms indicating they had been able to win contracts and expand their firms and others indicating they had not realized any benefits from the program. Officials representing economic development entities varied in their knowledge of the program, with some stating they lacked information on the program's effect that could help them inform small businesses of its potential benefits. (See appendix V for more information on our site visits.) Although contracting dollars awarded to HUBZone firms have increased since fiscal year 2003—when the statutory goal of awarding 3 percent of federally funded contract dollars to HUBZone firms went into effect—federal agencies collectively still have not met that goal. According to data from SBA's goaling reports, for fiscal years 2003 through 2006, the percentage of prime contracting dollars awarded to HUBZone firms increased but was still about one-third short of the statutory goal for fiscal year 2006 (see table 6). In fiscal year 2006, 8 of 24 federal agencies met their HUBZone goals. Of the 8 agencies, 4 had goals higher than the 3 percent requirement and were able to meet the higher goals. Of the 16 agencies not meeting their HUBZone goal, 10 awarded less than 2 percent of their small-business-eligible contracting dollars to HUBZone firms. According to SBA's most recent guidance on the goaling process, agencies are required to submit a report explaining why goals were not met, along with a plan for corrective action. Federal agencies may not have met their HUBZone goals for various reasons, which include uncertainty about how to properly apply federal contracting preferences. For instance, federal contracting officials reported facing conflicting guidance about the order in which the various small business programs—the HUBZone program, the 8(a) program, and the service-disabled veteran-owned small business program—should be considered when awarding contracts. The 2007 Report of the Acquisition Advisory Panel concluded that contracting officers need definitive guidance on the priority for applying the various small business contracting preferences to specific acquisitions.
The report stated that each program has its own statutory and regulatory requirements. It also noted that both SBA and the Federal Acquisition Regulatory Council (FAR Council) have attempted to interpret these provisions but that their respective regulations conflict with each other. According to the report, in general, SBA’s regulations provide for parity among most of the programs and give discretion to the contracting officer by stating that the contracting officer should consider setting aside the requirement for 8(a), HUBZone, or service-disabled veteran-owned firms’ participation before considering setting aside the requirement as a small business set-aside. However, according to the report, the FAR currently conflicts with SBA’s regulations by providing that, before deciding to set aside an acquisition for small businesses, HUBZone firms, or service-disabled veteran-owned small firms, the contracting officer should review the acquisition for offering under the 8(a) program. Officials at three of the four agencies we interviewed (Commerce, DHS, and SSA) regarding the awarding of contracts to small businesses stated that contracting officers occasionally faced uncertainty when applying the guidelines on awarding contracts under these programs. In March 2008, a proposal to amend the FAR was published with the purpose of ensuring that the FAR clearly reflects SBA’s interpretation of the Small Business Act and SBA’s interpretation of its regulations about the order of precedence that applies when deciding whether to satisfy a requirement through award under these various types of small business programs. Among other things, the proposed rule is intended to make clear that there is no order of precedence among the 8(a), HUBZone, or service-disabled veteran-owned small business programs. The proposed rule stated that SBA believes that, among other factors, progress in fulfilling the various small business goals should be considered in making a decision as to which program is to be used for an acquisition. Federal contracting officials from the four agencies also explained that it was sometimes difficult to identify HUBZone firms with the required expertise to fulfill contracts. For example, DHS acquisition officials stated that market research that their contracting officers conducted sometimes indicated there were no qualified HUBZone firms in industries in which DHS awarded contracts. Specifically, a contracting officer in the U.S. Coast Guard’s Maintenance and Logistics Command explained that for contracts requiring specialized types of ship-repair work, the Coast Guard sometimes could not find sufficient numbers of HUBZone firms with the capacity and expertise to perform the work in the time frame required. SSA officials also stated that the agency awards most of its contracts to firms in the information technology industry and that contracting officers at times have had difficulty finding qualified HUBZone firms operating in this industry due to the amount of infrastructure and technical expertise required. Officials representing the Defense Threat Reduction Agency (an agency within DOD) also stated they often have difficulty finding qualified HUBZone firms that can fulfill their specialized technology needs. 
Lastly, Commerce officials explained that a review of the top 25 North American Industry Classification System (NAICS) codes under which the agency awarded contracts in fiscal year 2007 showed that fewer than 100 HUBZone firms operated in 13 of these 25 industries, including 5 industries that had fewer than 5 firms operating. They noted that these small numbers increased the difficulty of locating qualified HUBZone firms capable of meeting Commerce's requirements. We did not validate the statements made by these federal contracting officials related to the difficulty they face in awarding contracts to HUBZone firms. Finally, according to contracting officers we interviewed, the availability of sole-source contracting under SBA's 8(a) program could make the 8(a) program more appealing than the HUBZone program. Through sole-source contracting, contracting officers have more flexibility in awarding contracts directly to an 8(a) firm without competition. According to U.S. Coast Guard contracting officers we interviewed, this can save 1 to 2 months when trying to award a contract. Sole-source contracts are available to HUBZone program participants but only when the contracting officer does not have a reasonable expectation that two or more qualified HUBZone firms will submit offers. Contracting officers we interviewed regarding HUBZone sole-source contracts stated that this is rarely the case. In fiscal year 2006, $5.8 billion (about 44 percent) of all dollars obligated to small business 8(a) firms were awarded through 8(a) sole-source contracts. In contrast, about 1 percent of the contracts awarded to HUBZone firms were HUBZone sole-source contracts. Because agencies can count contracting dollars awarded to small businesses under more than one socioeconomic subcategory, it can be difficult to identify how many contract dollars firms received based on a particular designation. Small businesses can qualify for contracts under multiple socioeconomic programs. For example, if a HUBZone-certified firm was owned by a service-disabled veteran, it could qualify for contracts set aside for HUBZone firms, as well as for contracts set aside for service-disabled veteran-owned businesses. The contracting dollars awarded to this firm would count toward both of these programs' contracting goals. We reviewed FPDS-NG data on contracts awarded to HUBZone firms in fiscal year 2006. We found that approximately 45 percent of contracts awarded to HUBZone firms were not set aside for any particular socioeconomic program (see fig. 8). The next largest percentage, about 23 percent, were 8(a) sole-source contracts awarded to HUBZone firms that also participated in SBA's 8(a) business development program. These firms did not have any competitors for the contracts awarded. HUBZone set-aside contracts, or contracts for which only HUBZone firms can compete, accounted for about 11 percent of the dollars awarded to HUBZone firms. This ability to count contracts toward multiple socioeconomic goals makes it difficult to determine how HUBZone certification may have played a role in winning a contract, especially when considering the limited amount of contract dollars awarded to HUBZone firms relative to the HUBZone goal. It can also make it more difficult to isolate the effect of HUBZone program status on economic conditions in a community. The map contained on the HUBZone Web site is the primary means of disseminating HUBZone information.
The map offers small businesses an easy and readily accessible way of determining whether they can apply for HUBZone certification. However, those positive attributes have been undermined because the map reflects inaccurate and out-of-date information. In particular, as of May 2008, SBA's HUBZone map included 50 ineligible areas and excluded 27 eligible areas. As a result, ineligible small businesses have been able to participate in the program, while eligible businesses have not been able to participate. By working with its contractors to eliminate inaccuracies and updating the map more frequently, SBA will help ensure that only eligible firms have opportunities to participate in the program. Although SBA relies on federal law to identify HUBZone areas, statutory changes over time have resulted in more areas being eligible for the program. Specifically, revisions to the statutory definition of HUBZone areas since 1999 have nearly doubled the number of areas and created areas that can be characterized as less economically distressed than areas designated under the original statutory criteria. While establishing new HUBZone areas could provide economic benefits to these new areas, as the program becomes less targeted and contracting dollars more dispersed, the program could have less of an effect on individual HUBZone areas. Such an expansion could diffuse the benefits to be derived from steering businesses to economically distressed areas. Given the potential for erosion of the intended economic benefits of the program, further assessment of the criteria used to determine eligible HUBZone areas, in relation to overall program outcomes, may be warranted. The mechanisms that SBA uses to certify and monitor firms provide limited assurance that only eligible firms participate in the program. SBA does not currently have guidance on precisely when HUBZone program staff should request documentation from firms to support the information reported on their application, and it verifies information reported by firms at application or during recertification in limited instances. Also, SBA does not follow its policy of recertifying all firms every 3 years. Further, SBA lacks a formal policy on how quickly it needs to make a final determination on decertifying firms that may no longer be eligible for the program. From fiscal years 2004 through 2007, SBA failed to resolve proposed decertifications within its informal goal of 60 days for more than 3,200 firms. More routinely obtaining supporting documentation upon application and conducting more frequent site visits would represent a more efficient and consistent use of SBA's limited resources. It could help ensure that firms applying for certification are truly eligible, thereby reducing the need to spend a substantial amount of resources during any decertification process. In addition, an SBA effort to consistently follow its current policy of recertifying firms every 3 years, and to formalize and adhere to a specific time frame for decertifying firms, would help prevent ineligible firms from obtaining HUBZone contracts. By not evaluating the HUBZone program's benefits, SBA lacks key information that could help it better manage the program and inform Congress of its results. SBA has some measures to assess program performance, but they are not linked to the program's mission and thus do not measure the program's effect on the communities in which HUBZone firms are located.
While SBA identified several strategies for assessing the program's effect and conducted limited surveys, it has not devoted resources to conduct a comprehensive evaluation of the program's effect on communities. We recognize the challenges associated with evaluating the economic effect of the program, such as isolating the role that HUBZone certification plays in obtaining federal contracts and generating benefits for communities. Because contract dollars awarded to firms in one small business program also could represent part of the dollars awarded in other programs, contract dollars awarded to HUBZone firms at best represent a broad indicator of program influence on a community's economic activity. In addition, the varying levels of economic distress among HUBZone areas can further complicate such an evaluation. Despite these challenges, completing an evaluation would offer several benefits to the agency and the HUBZone program, including determining how well it is working across various communities, especially those that suffer most from economic distress. Such an evaluation is particularly critical in light of the expansion in the number of HUBZone areas, the potential for erosion of the intended economic benefits of the program from such expansion, and the wide variation in the economic characteristics of these areas. To improve SBA's administration and oversight of the HUBZone program, we recommend that the Administrator of SBA take the following actions:

Take immediate steps to correct and update the map that is used to identify HUBZone areas and implement procedures to ensure that the map is updated with the most recently available data on a more frequent basis.

Develop and implement guidance to more routinely and consistently obtain supporting documentation upon application and conduct more frequent site visits, as appropriate, to ensure that firms applying for certification are eligible.

Establish a specific time frame for eliminating the backlog of recertifications and ensure that this goal is met, using either SBA or contract staff, and take the necessary steps to ensure that recertifications are completed in a more timely fashion in the future.

Formalize and adhere to a specific time frame for processing firms proposed for decertification in the future.

Further develop measures and implement plans to assess the effectiveness of the HUBZone program that take into account factors such as (1) the economic characteristics of the HUBZone area and (2) contracts being counted under multiple socioeconomic subcategories.

We requested SBA's comments on a draft of this report, and the Associate Administrator for Government Contracting and Business Development provided written comments that are presented in appendix II. SBA agreed with our recommendations and outlined steps that it plans to take to address each recommendation. First, SBA stated that it recognizes the valid concerns we raised concerning the HUBZone map and noted that efforts are under way to improve the data and procedures used to produce this important tool. Specifically, SBA plans to issue a new contract to administer the HUBZone map and anticipates that the maps will be updated and available no later than August 29, 2008. Further, SBA stated that, during the process of issuing the new contract, the HUBZone program would issue new internal procedures to ensure that the map is continually updated.
Second, SBA stated that it appreciates our concern about the need to obtain supporting documents in a more consistent manner. In line with its efforts to formalize HUBZone processes, the agency noted that it was formulating procedures that would provide sharper guidance as to when supporting documentation and site visits would be required. Specifically, SBA plans to identify potential areas of concern during certification that would mandate additional documentation and site visits. Third, SBA noted that the HUBZone program had obtained additional staff to work through the backlog of pending recertifications and stated that this effort would be completed by September 30, 2008. Further, to ensure that recertifications will be handled in a more timely manner, SBA stated that the HUBZone program has made dedicated staffing changes and will issue explicit changes to procedures. Fourth, SBA stated that it is aware of the need to improve the effectiveness and consistency of the decertification process. SBA noted that it would issue new procedures to clarify and formalize the decertification process and its timelines. Among other things, SBA stated that the new decertification procedure would establish a 60-day deadline to complete any proposed decertification. Finally, SBA acknowledged that using HUBZone performance measures in a more systematized way to evaluate the program’s effectiveness would be beneficial and would provide important new information to improve and focus the HUBZone program. Therefore, SBA stated that it would develop an assessment tool to measure the economic benefits that accrue to areas in the HUBZone program and that the HUBZone program would then issue periodic reports accompanied by the underlying data. We also provided copies of the draft report to Commerce, DOD, DHS, and SSA. All four agencies responded that they had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Ranking Member, House Committee on Small Business, other interested congressional committees, and the Administrator of the Small Business Administration. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To review the Small Business Administration’s (SBA) administration and oversight of the HUBZone program, we examined (1) the criteria and process that SBA uses to identify and map HUBZone areas and the economic characteristics of such areas; (2) the mechanisms that SBA uses to ensure that only eligible small businesses participate in the HUBZone program; and (3) the actions SBA has taken to assess the results of the program and the extent to which federal agencies have met their HUBZone contracting goals. To identify the criteria that SBA uses to identify HUBZone areas, we reviewed applicable statutes, regulations, and agency documents. 
Because the HUBZone program also uses statutory definitions from the Department of Housing and Urban Development's (HUD) low-income-housing tax credit program, we reviewed the statutes and regulations underlying the definitions of a qualified census tract and difficult development area. To determine the process that SBA uses to identify HUBZone areas, we interviewed SBA officials and the contractor that developed and maintains the HUBZone map on SBA's Web site. We also reviewed the policies and procedures the contractor follows when mapping HUBZone areas. Using historical data provided by SBA's mapping contractor, we determined how the number of HUBZone areas has changed over time. We also used these historical data to determine if SBA had complied with its policy of asking the contractor to update the map every time the HUBZone area definition changed or new data used to designate HUBZone areas (for example, HUD's lists of difficult development areas and unemployment data from the Bureau of Labor Statistics, or BLS) became available. To assess the accuracy of the current HUBZone map, we compared the difficult development areas on the map with the statutory definition of a difficult development area. We also compared HUD's 2008 list of qualified census tracts to the areas designated on the map and analyzed 2007 unemployment data from BLS (the most recent available) to determine if all of the nonmetropolitan counties that met the HUBZone eligibility criteria were on the map. Once we identified the current HUBZone areas, we used 2000 census data (the most complete data set available) to examine the economic characteristics of these areas. The 2000 census data are sample estimates and are, therefore, subject to sampling error. To test the impact of these errors on the classification of HUBZone areas, we simulated the potential results by allowing the estimated value to change within the sampling error distribution of the estimate and then reclassified the results. As a result of these simulations, we determined that the sampling error of the estimates had no material impact on our findings. For metropolitan and nonmetropolitan-qualified census tracts, nonmetropolitan counties, and difficult development areas in the 50 states and District of Columbia, we looked at common indicators of economic distress—poverty rate, unemployment rate, median household income, and median housing value. In measuring median household income and median housing value, we compared each HUBZone with the metropolitan area (for metropolitan-qualified census tracts) in which it was located or with the state nonmetropolitan area (for nonmetropolitan-qualified census tracts, nonmetropolitan counties, and difficult development areas) to put the values into perspective. We limited our analysis of Indian Country to poverty and unemployment rates because Indian lands vary in nature; therefore, no one unit of comparison worked for all areas when reporting median household income and median housing value. We could not examine the economic characteristics of base closure areas because they do not coincide with areas for which census data are collected. To further examine the economic characteristics of qualified HUBZone areas, we analyzed the effect of hypothetical changes to the economic criteria used to designate qualified census tracts and nonmetropolitan counties. (We report the results of this analysis in app. III.)
First, we adjusted the economic criteria used to designate qualified census tracts: (1) a poverty rate of at least 25 percent or (2) 50 percent or more of the households with incomes below 60 percent of each area's median gross income. Second, we adjusted the criteria used to designate nonmetropolitan counties: (1) a median household income of less than 80 percent of the median household income for the state nonmetropolitan area or (2) an unemployment rate not less than 140 percent of the state or national unemployment rate (whichever is lower). In both cases, we made the criteria more stringent as well as less stringent. We assessed the reliability of the census and BLS data we used to determine the economic characteristics of HUBZone areas by reviewing information about the data and performing electronic data testing to detect errors in completeness and reasonableness. We determined that the data were sufficiently reliable for the purposes of this report. To determine how SBA ensures that only eligible small businesses participate in the HUBZone program, we reviewed policies and procedures established by SBA for certifying and monitoring HUBZone firms and internal control standards for federal agencies. We also interviewed SBA headquarters and field officials regarding the steps they take to certify and monitor HUBZone firms. We then assessed the actions that SBA takes to help ensure that only eligible firms participate against its policies and procedures and selected internal controls. In examining such compliance, we analyzed data downloaded from the HUBZone Certification Tracking System (the information system used to manage the HUBZone program) as of January 22, 2008, to determine the extent of SBA monitoring. Specifically, we analyzed the data to determine (1) the number of applications submitted in fiscal years 2000 through 2007 and their resolution; (2) the number of recertifications that SBA performed in fiscal years 2005 through 2007 and their results; (3) the number of recertifications conducted of HUBZone firms based on the number of years firms had been in the program; (4) the number of program examinations that SBA performed in fiscal years 2004 through 2007 and their results; (5) the number of program examinations conducted of HUBZone firms based on the number of years firms had been in the program; and (6) the number of firms proposed for decertification in fiscal years 2004 through 2007. We also analyzed Federal Procurement Data System-Next Generation (FPDS-NG) data to determine the extent to which firms that had been proposed for decertification or had actually been decertified had obtained federal contracts. Because the HUBZone Certification Tracking System does not readily provide information on the extent to which SBA requests documentation from firms or conducts site visits during certification and monitoring, we conducted reviews of all 125 applications, 15 recertifications, and 11 program examinations begun in September 2007 and completed by January 22, 2008 (the date of the data set). For applications, we selected those that were logged into the system in September 2007. For recertifications and program examinations, we selected those cases where the firm had acknowledged receipt of the notice that it had been selected for review in September 2007; we chose September 2007 because most of the cases had been processed by January 22, 2008.
Further, we analyzed (1) FPDS-NG data for fiscal year 2006 (the most recent year available at the time of our analysis) and (2) Dynamic Small Business Source System (DSBSS) data as of December 12, 2007, to identify select characteristics of businesses that participated in the program. DSBSS contains information on firms that have registered in the Central Contractor Registration system (a database that contains information on all potential federal contractors) as small businesses. We assessed the reliability of the HUBZone Certification Tracking System, FPDS-NG, and DSBSS data we used by reviewing information about the data and performing electronic data testing to detect errors in completeness and reasonableness. We determined that the data were sufficiently reliable for the purposes of this report. To determine the measures that SBA has in place to assess the results of the HUBZone program, we reviewed SBA's performance reports and other agency documents. We then compared SBA's performance measures for the HUBZone program to our guidance on the attributes of effective performance measures. To determine the extent to which federal agencies have met their contracting goals, we (1) analyzed data from FPDS-NG and (2) reviewed SBA reports on agency contracting goals and accomplishments, such as federal contracting dollars awarded by agency for the various small business programs, for fiscal years 2003 through 2006. We also reviewed Federal Acquisition Regulation and SBA guidance and other relevant documentation. In addition, we interviewed small business and contracting officials at a nongeneralizable sample of agencies (the Departments of Commerce, Defense, Homeland Security, and the Social Security Administration) to determine what factors affect federal agencies' ability to meet HUBZone contracting goals. We selected agencies that received a range of scores as reported in SBA's fiscal year 2006 Small Business Procurement Scorecard and awarded varying amounts of contracts to HUBZone firms. To explore benefits that the program may have generated for selected firms and communities, we visited a nongeneralizable sample of four HUBZone areas: Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California. In selecting these areas, we considered geographic dispersion, the type of HUBZone area, and the dollar amount of contracts awarded to HUBZone firms. During each site visit, we interviewed officials from the SBA district office, the Chamber of Commerce, a small business development center, and certified HUBZone firms, with the exception of the city of Long Beach, where we did not meet with the Chamber of Commerce. We conducted this performance audit from August 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In this appendix, we provide information on the economic characteristics of three types of HUBZone areas: (1) qualified census tracts, which have 50 percent or more of their households with incomes below 60 percent of the area median gross income or have a poverty rate of at least 25 percent and cannot contain more than 20 percent of the area population; (2) qualified Indian reservations, which include lands covered by a federal statutory definition of "Indian Country"; and (3) qualified nonmetropolitan counties, or those having a median household income of less than 80 percent of the median household income for the state nonmetropolitan area or an unemployment rate that is not less than 140 percent of the state average unemployment rate or the national average unemployment rate (whichever is lower). Other types of HUBZone areas are base closure areas and difficult development areas. First, we report economic data for those HUBZone areas that are nonmetropolitan-qualified census tracts and Indian Country areas. Second, to further illustrate the economic diversity among qualified HUBZone areas, we provide data on the effect of hypothetical changes to the economic criteria used to designate metropolitan-qualified census tracts and nonmetropolitan counties. Based on poverty rates, nonmetropolitan-qualified census tracts appear to be as economically distressed as metropolitan-qualified census tracts. About 99 percent of nonmetropolitan census tracts (excluding redesignated areas, which no longer meet the economic criteria but by statute remain eligible until after the release of the 2010 decennial census data) had a poverty rate of 20 percent or more (see fig. 9). Similarly, about 93 percent of metropolitan census tracts (excluding redesignated areas) met this criterion. However, there are some differences between the economic characteristics of nonmetropolitan- and metropolitan-qualified census tracts. For example, 402 of the 1,272 nonmetropolitan census tracts (about 32 percent) had housing values that were less than 60 percent of the area median housing value, while 57 percent of metropolitan census tracts had housing values that met this criterion. Overall, we found that qualified Indian Country areas tend to be economically distressed (see fig. 10). For example, 310 of the 651 Indian Country areas (about 48 percent) had poverty rates of 20 percent or more. In addition, Indian Country areas had much higher rates of unemployment than any other type of HUBZone area. For example, 160 Indian Country areas (about 25 percent) had unemployment rates of 20 percent or more. In contrast, about 18 percent of metropolitan census tracts and just less than 2 percent of nonmetropolitan counties (excluding redesignated areas) had unemployment rates that met this same criterion. As discussed above, qualified HUBZone areas are economically diverse; therefore, adjustments to the qualifying criteria could affect the number and type of eligible areas. Qualified census tracts must meet at least one of two economic criteria: (1) have a poverty rate of at least 25 percent or (2) be an area in which 50 percent or more of the households have incomes below 60 percent of the area's median gross income. By using a poverty rate of 10 percent or more for metropolitan census tracts, however, 14,258 additional metropolitan census tracts could be eligible for the program (an increase of about 143 percent), depending on whether they met the other eligibility requirements (see table 7).
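Each of these hypothetical adjustments—including the more stringent poverty threshold and the county income thresholds discussed next—amounts to re-running the same two-part eligibility test with a different parameter. The following minimal sketch shows the form of that test for census tracts; the function and field names are illustrative assumptions, not the layout of the census files we used, and the sketch omits the statutory cap on the share of an area's population that qualified tracts may contain:

def tract_qualifies(poverty_rate, share_households_below_60pct_ami,
                    poverty_threshold=0.25):
    # A census tract qualifies if it meets either the poverty criterion or the
    # income criterion. The default threshold (25 percent) matches current law;
    # passing 0.10 or 0.40 reproduces the less and more stringent scenarios
    # discussed in the text.
    meets_poverty_test = poverty_rate >= poverty_threshold
    meets_income_test = share_households_below_60pct_ami >= 0.50
    return meets_poverty_test or meets_income_test

# Counting eligible tracts under baseline and adjusted criteria, given a list
# of (poverty_rate, income_share) pairs for metropolitan tracts:
# baseline = sum(tract_qualifies(p, s) for p, s in tracts)
# broader  = sum(tract_qualifies(p, s, poverty_threshold=0.10) for p, s in tracts)
# stricter = sum(tract_qualifies(p, s, poverty_threshold=0.40) for p, s in tracts)

The county analysis in table 8 follows the same pattern, with the income share (70, 80, or 90 percent of the state nonmetropolitan median) and the unemployment multiple serving as the adjustable parameters.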
In contrast, by using a poverty rate of 40 percent or more for metropolitan census tracts, the number of metropolitan census tracts (those tracts that currently meet eligibility criteria and those that are redesignated) could decrease from 9,959 to 2,270 (a decrease of about 77 percent). Qualified nonmetropolitan counties are also determined by two economic criteria: (1) a median household income of less than 80 percent of the median household income for the state nonmetropolitan area or (2) an unemployment rate not less than 140 percent of the state or national unemployment rate (whichever is lower). By using a county median household income of less than 90 percent of the median household income for the state nonmetropolitan area, 29 additional nonmetropolitan counties could be eligible for the program (see table 8). By using a county median household income of less than 70 percent of the median household income for the state nonmetropolitan area, the number of eligible HUBZone-qualified nonmetropolitan counties could decrease from 1,162 to 43 (a decrease of about 96 percent). To examine the characteristics of HUBZone firms, we analyzed data from SBA's Dynamic Small Business Source System (DSBSS) as of December 12, 2007. DSBSS contains information on firms that have registered as small businesses in the Central Contractor Registration system (a database that contains information on all potential federal contractors). With the exception of information on the firms' HUBZone, 8(a), and Small Disadvantaged Business certifications, the data in the system are self-reported. We found that HUBZone firms vary in size, ownership, types of services and products provided, and additional small business designations leveraged. Specifically, our analysis showed the following:

The size of HUBZone firms varies. We chose two measures to describe the size of HUBZone firms—number of employees and average gross revenue. The average number of staff at HUBZone firms was 24. However, half of HUBZone firms had 6 or fewer employees. The average gross revenue for HUBZone firms was almost $3.5 million per year. However, half of HUBZone firms earned $600,000 or less annually.

Ownership status is diverse. Approximately 30 percent of HUBZone firm owners were women, while 37 percent were minorities. Table 9 breaks out the owners of HUBZone firms based on race and ethnicity.

HUBZone firms operate in a variety of industries as defined by North American Industry Classification System (NAICS) codes, and many operate in multiple industries. Table 10 lists the top 10 industries in which HUBZone firms operated and the number of HUBZone firms that provided a service or product related to that industry.

HUBZone firms often have other small business designations. Although the majority of HUBZone firms had only the HUBZone designation, 32 percent had one additional designation, which was most often the service-disabled veteran-owned designation. Table 11 shows the extent to which HUBZone firms had other small business designations.

We conducted site visits to four HUBZone areas—Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California—to better understand to what extent benefits have been generated by the HUBZone program. These four areas represent various types of HUBZone areas (see table 12), and we found that the perceived benefits of the HUBZone program varied across these locations. The majority of the individuals we interviewed indicated that their firms had received some benefit from HUBZone certification.
In most cases, they cited as a benefit the ability to compete for and win contracts, which in some cases had allowed firms to expand or become more competitive. However, representatives of a few firms indicated they had not been able to win any contracts through the program, which made it difficult to realize any benefits. We also asked local economic development and Chamber of Commerce officials if they were familiar with the HUBZone program. We found varying levels of familiarity with the program, and some officials representing economic development entities stated they lacked information on the program’s effect that could help them inform small businesses of its potential benefits. Various representatives of HUBZone firms with whom we spoke stated that the HUBZone program provided advantages. The majority of representatives of HUBZone firms we interviewed stated that HUBZone certification had provided them with an additional opportunity to bid on federally funded contracts. Additionally, some of the business owners we interviewed who had received contracts stated that winning contracts through the HUBZone program had allowed their firm to grow (for example, to hire employees or expand operations). Representatives from two HUBZone firms located in Lawton, Oklahoma, that had received contracts through their HUBZone certification stated that the primary benefits associated with their HUBZone certification had been winning contracts that allowed them to hire additional employees and continue to build a reputation for their firms, which in turn had placed them in a better position to compete for additional contracts. Representatives of a HUBZone firm located in Valdosta, Georgia, stated that they had utilized the HUBZone program to obtain more contracts for their construction firm. They added that the program had allowed their firm to enter the federal government contracting arena, which provided additional opportunities aside from private-sector construction contracts. Representatives from three HUBZone firms in Los Angeles stated that they had won contracts through the program and had been able to build a stronger reputation for their firms by completing those contracts. Representatives of two of these firms also stated that the contracts they won through the program had helped their firms to grow and hire additional employees. For example, representatives from one HUBZone firm we interviewed stated that the firm had hired 10 to 15 full-time employees partly as a result of obtaining HUBZone contracts. However, representatives of some HUBZone firms stated that the program has not generated any particular benefits for their firm. For example, representatives of two HUBZone firms in Lawton, Oklahoma, and one HUBZone firm in Valdosta, Georgia, stated that their HUBZone certification had resulted in no contracts or not enough contracts to provide opportunities to “grow” their firm. They noted that the HUBZone certification alone was not sufficient when competing for federally funded contracts, particularly because—based on their experience—few contracts were set aside for HUBZone firms. Our interviewees indicated that they planned to stay in the program but were unlikely to see any benefits unless additional contracts were set aside for HUBZone firms. 
A representative from one HUBZone firm located in Long Beach, California, stated that her HUBZone firm had not been awarded any contracts directly through the program, but because of the firm’s HUBZone status, it had been able to perform work as a subcontractor on contracts that had HUBZone subcontracting goals. However, her firm had not grown or expanded employment through the program. We also found that, while some local economic development and Chamber of Commerce officials with whom we spoke were familiar with the HUBZone program, others were not. For example, in Lawton, Oklahoma, local economic development and Chamber of Commerce officials were familiar with the program and its requirements, largely because the city of Lawton has been designated a HUBZone area. In Valdosta, Georgia, Chamber of Commerce officials and officials from various economic development authorities were not familiar with the program and its requirements, but the small business development center official we interviewed was familiar with the program. In Long Beach and Los Angeles, California, most of the small business development center and economic development officials with whom we met also were relatively unfamiliar with the program, its goals, and how small businesses could use the program. Finally, officials representing economic development entities in Lowndes County, Georgia, and Los Angeles, California, stated that they lacked information on the program’s impact that could help them inform small businesses of its potential benefits. In addition to the contact named above, Paige Smith (Assistant Director), Triana Bash, Tania Calhoun, Bruce Causseaux, Alison Gerry, Cindy Gilbert, Julia Kennon, Terence Lam, Tarek Mahmassani, John Mingus, Marc Molino, Barbara Roesmann, and Bill Woods made key contributions to this report.
The Small Business Administration's (SBA) Historically Underutilized Business Zone (HUBZone) program provides federal contracting assistance to small firms located in economically distressed areas, with the intent of stimulating economic development. Questions have been raised about whether the program is targeting the locations and businesses that Congress intended to assist. GAO was asked to examine (1) the criteria and process that SBA uses to identify and map HUBZone areas and the economic characteristics of such areas, (2) the mechanisms SBA uses to ensure that only eligible small businesses participate in the program, and (3) the actions SBA has taken to assess the results of the program and the extent to which federal agencies have met their HUBZone contracting goals. To address these objectives, GAO analyzed statutory provisions, as well as SBA, census, and contracting data, and interviewed SBA and other federal and local officials. SBA relies on federal law to identify qualified HUBZone areas based on provisions such as median income in census tracts, but the map it uses to publicize HUBZone areas is inaccurate, and the economic characteristics of designated areas vary widely. To help firms determine if they are located in a HUBZone area, SBA publishes a map on its Web site. However, the map contains areas that are not eligible for the program and excludes some eligible areas. As a result, ineligible small businesses have been able to participate in the program, and eligible businesses have not been able to participate. Revisions to the statutory definition of HUBZone areas (such as allowing continued inclusion of areas that ceased to be qualified) have nearly doubled the number of areas and created areas that are less economically distressed than areas designated under the original criteria. Such an expansion could diffuse the benefits to be derived from steering businesses to economically distressed areas. The mechanisms that SBA uses to certify and monitor firms provide limited assurance that only eligible firms participate in the program. Although internal control standards state that agencies should verify information they collect, SBA verifies the information reported by firms on their application or during recertification--its process for monitoring firms--in limited instances and does not follow its own policy of recertifying all firms every 3 years. GAO found that more than 4,600 firms that had been in the program for at least 3 years went unmonitored. Further, SBA lacks a formal policy on how quickly it needs to make a final determination on decertifying firms that may no longer be eligible for the program. Of the more than 3,600 firms proposed for decertification in fiscal years 2006 and 2007, more than 1,400 were not processed within 60 days--SBA's unwritten target. As a result of these weaknesses, there is an increased risk that ineligible firms have participated in the program and had opportunities to receive federal contracts based on their HUBZone certification. SBA has taken limited steps to assess the effectiveness of the HUBZone program, and from 2003 to 2006 federal agencies did not meet the government-wide contracting goal for the HUBZone program. While SBA has some measures to assess the results of the HUBZone program, they are not directly linked to the program's mission, and the agency has not implemented its plans to conduct an evaluation of the program based on variables tied to the program's goals. 
Consequently, SBA lacks key information to manage the program and assess performance. Contracting dollars awarded to HUBZone firms increased from fiscal year 2003 to 2006, but consistently fell short of the government-wide goal of awarding 3 percent of annual contracting dollars to HUBZone firms. According to contracting officials GAO interviewed, factors such as conflicting guidance on how to consider the various small business programs when awarding contracts and a lack of HUBZone firms in certain industries may have affected the ability of federal agencies to meet their HUBZone goals.
GAO has been assessing strategic sourcing and the potential value of applying these techniques to federal acquisitions for more than a decade. In 2002, GAO reported that leading companies of that time committed to a strategic approach to acquiring services—a process that moves a company away from numerous individual procurements to a broader aggregate approach—including developing knowledge of how much they were spending on services and taking an enterprise-wide approach to services acquisition. As a result, companies made structural changes with top leadership support, such as establishing commodity managers—responsible for purchasing services within a category—and were better able to leverage their buying power to achieve substantial savings. Strategic sourcing can encompass a range of tactics for acquiring products and services more effectively and efficiently. In addition to leveraged buying, tactics include managing demand by changing behavior, achieving efficiencies through standardization of the acquisition process, evaluating total cost of ownership, and better managing supplier relationships. We have particularly emphasized the importance of comprehensive spend analysis for efficient procurement since 2002. Spend analysis provides knowledge about how much is being spent for goods and services, who the buyers are, who the suppliers are, and where the opportunities are to save money and improve performance. Private sector companies are using spend analysis as a foundation for employing a strategic approach to procurement. We have previously reported that because procurement at federal departments and agencies is generally decentralized, the federal government is not fully leveraging its aggregate buying power to obtain the most advantageous terms and conditions for its procurements. Agencies act more like many unrelated, medium-sized businesses and often rely on hundreds of separate contracts for many commonly used items, with prices that vary widely. Recognizing the benefits of strategic sourcing, the Office of Management and Budget (OMB) issued a memorandum in 2005 that implemented strategic sourcing practices. Agencies were directed to develop and implement strategic sourcing efforts based on the results of spend analyses. In addition to individual agency efforts, a government-wide strategic sourcing program—known as the Federal Strategic Sourcing Initiative (FSSI)—was established in 2005. FSSI was created to address government-wide opportunities to strategically source commonly purchased products and services and eliminate duplication of efforts across agencies. The FSSI mission is to encourage agencies to aggregate requirements, streamline processes, and coordinate purchases of like products and services to leverage spending to the maximum extent possible. At the time of our 2012 report, four FSSI efforts were ongoing—focused on office supplies, domestic delivery of packages, telecommunications, and print management—and three were planned related to SmartBUY, Wireless plans and devices, and publication licenses. In our September 2012 report, we found that most of the agencies we reviewed leveraged a fraction of their buying power through strategic sourcing.
More specifically, in fiscal year 2011, the Department of Defense (DOD), Department of Homeland Security (DHS), Department of Energy, and Department of Veterans Affairs (VA) accounted for 80 percent of the $537 billion in federal procurement spending, but reported managing about 5 percent of that spending, or $25.8 billion, through strategic sourcing efforts. Similarly, we found that the FSSI program had only managed a small amount of spending through its four government-wide strategic sourcing initiatives in fiscal year 2011, although it reported achieving significant savings on those efforts. Further, we found that most selected agencies’ efforts did not address their highest spending areas, such as services, which provide opportunities for significant savings. We found that when strategically sourced contracts were used, agencies generally reported achieving savings. For example, selected agencies generally reported savings ranging from 5 percent to over 20 percent of spending through strategically sourced contracts. In fiscal year 2011, DHS reported managing 20 percent of its spending and achieving savings of $324 million. At the government-wide level, the FSSI program reported managing $339 million through several government-wide initiatives in fiscal year 2011 and achieving $60 million in savings, or almost 18 percent of the procurement spending it managed through these initiatives. After strategic sourcing contracts are awarded, realizing cost savings and other benefits depends on utilization of these contracts. We found that only 15 percent of government-wide spending for the products and services covered by the FSSI program went through FSSI contracts in fiscal year 2011. Agencies cited a variety of reasons for not participating, such as wanting to maintain control over their contracting activities or having unique requirements. FSSI use is not mandatory and agencies face no consequences for not using FSSI contract vehicles. There are a variety of impediments to strategic sourcing in the federal setting, but several stood out prominently in our 2012 review. Agencies faced challenges in obtaining and analyzing reliable and detailed data on spending, securing expertise and leadership support, and developing metrics. Data: Our reports have consistently found that the starting point for strategic sourcing efforts is having good data on current spending, and yet this is the biggest stumbling block for agencies. A spending analysis reveals how much is spent each year, what was bought, from whom it was bought, and who was purchasing it. The analysis also identifies where numerous suppliers are providing similar goods and services—often at varying prices—and where purchasing costs can be reduced and performance improved by better leveraging buying power and reducing the number of suppliers to meet needs. The FSSI program and selected agencies generally cited the Federal Procurement Data System-Next Generation (FPDS-NG)—the federal government’s current system for tracking information on contracting actions—as their primary source of data, and noted numerous deficiencies with these data for the purposes of conducting strategic sourcing research. Agencies reported that when additional data sources are added, incompatible data and separate systems often presented problems. We have previously reported extensively on issues agencies faced in gathering data to form the basis for their spend analysis.
However, some agencies have been able to make progress on conducting enterprise-wide opportunity analyses despite flaws in the available data. For example, both the FSSI Program Management Office and DHS told us that current data, although imperfect, provide sufficient information for them to begin to identify high spend opportunities. DHS has in fact evaluated the majority of its 10 highest-spend commodities and developed sourcing strategies for seven of those based on its analysis of primarily FPDS-NG data. Further, we have previously reported that the General Services Administration (GSA) estimated federal agencies spent about $1.6 billion during fiscal year 2009 purchasing office supplies from more than 239,000 vendors. GSA used available data on spending to support development of the Office Supplies Second Generation FSSI, which focuses office supply spending on 15 strategically sourced contracts. Expertise: Officials at several agencies also noted that the lack of trained acquisition personnel made it difficult to conduct an opportunity analysis and develop an informed sourcing strategy. For example, Army officials cited a need for expertise in strategic sourcing and spend analysis data, and OMB officials echoed that a key challenge is the dearth of strategic sourcing expertise in government. VA and Energy also reported this challenge. A few agencies have responded to this challenge by developing training on strategic sourcing for acquisition personnel. For example, the Air Force noted that it instituted training related to strategic sourcing because it is necessary to have people who are very strong analytically to do the front-end work for strategic sourcing, and these are the hardest to find. The training course helps acquisition personnel develop the strong analytical skills needed to perform steps like market evaluation. VA has also begun to develop training to address this challenge. Leadership commitment: We also found in 2012 that most of the agencies we reviewed were challenged by a lack of leadership commitment to strategic sourcing, although improvements were under way. We have reported that in the private sector, the support and commitment of senior management is viewed as essential to facilitating companies’ efforts to re-engineer their approaches to acquisition as well as to ensuring follow-through with the strategic sourcing approach. However, we found in 2012 that leaders at some agencies were not dedicating the resources and providing the incentives that were necessary to build a strong foundation for strategic sourcing. Metrics: A lack of clear guidance on metrics for measuring success has also impacted the management of ongoing FSSI efforts as well as most selected agencies’ efforts. We found that agencies were challenged to produce utilization rates and other metrics—such as spending through strategic sourcing contracts and savings achieved—that could be used to monitor progress. Several agencies also mentioned a need for sustained leadership support and additional resources in order to more effectively monitor their ongoing initiatives. Agency officials also mentioned several disincentives that can discourage procurement and program officials from proactively participating in strategic sourcing, and at many agencies, these disincentives have not been fully addressed by leadership.
Key disincentives identified by agency officials include the following: a perception that reporting savings due to strategic sourcing could lead to program budgets being cut in subsequent years; difficulty identifying existing strategic sourcing contracts that are available for use, as there is no centralized source for this information; a perception that strategically sourced contract vehicles may limit the ability to customize requirements; a desire on the part of agency officials to maintain control of their contracting activities; program officials’ and contracting officers’ relationships with existing suppliers; and the opportunity to get lower prices by going outside of strategically sourced contracts. Leaders at some agencies have proactively introduced practices that address these disincentives to strategic sourcing. For example, DHS and VA reported increasing personal incentives for key managers by adding strategic sourcing performance measures to certain executives’ performance evaluations. In addition, several agencies, including DOD, DHS, and VA, have instituted policies making use of some strategic sourcing contracts mandatory or mandatory “with exception,” although the extent to which these policies have increased use of strategic sourcing vehicles is not yet clear. Some agencies have made use of automated systems to direct spending through strategic sourcing contracts. For example, FSSI issued a blanket purchase agreement through its office supplies initiative that included provisions requiring FSSI prices to be automatically applied to purchases made with government purchase cards. VA reported that its utilization rate for the office supplies FSSI contracts increased from 12 percent to 90 percent after these measures took effect. In fiscal year 2012, the federal government obligated $307 billion to acquire services ranging from the management and operations of government facilities, to information technology services, to research and development. This represents over half of all government procurements. Making services procurement more efficient is particularly relevant given the current fiscal environment, as any savings from this area can help agencies mitigate the adverse effects of potential budget reductions on their mission. Moreover, our reports have shown that agencies have difficulty managing services acquisition and have purchased services inefficiently, which places them at risk of paying more than necessary. These inefficiencies can be attributed to several factors. First, agencies have had difficulty defining requirements for services, such as developing clear statements of work, which can reduce the government’s risk of paying for more services than needed. Second, agencies have not always leveraged knowledge of contractor costs when selecting contract types. Third, agencies have missed opportunities to increase competition for services due to overly restrictive and complex requirements; a lack of access to proprietary technical data; and supplier preferences. We found that strategic sourcing efforts addressed products significantly more often than services and that agencies were particularly reluctant to apply strategic sourcing to the purchase of services. For example, of the top spending categories that DOD components reported targeting through implemented strategic sourcing initiatives, only two are services.
Officials reported that they have been reluctant to strategically source services for a variety of reasons, such as difficulty in standardizing requirements or a decision to focus on less complex commodities that can demonstrate success. Yet, like the commercial sector, federal agencies can be strategic about buying services. For example, DHS has implemented a strategic sourcing initiative for engineering and technical services, which is also in the top 10 spending categories for the Army, Air Force, and Navy. The reluctance of federal agencies to apply strategic sourcing to services stands in sharp contrast to leading companies. As described below, leading companies perceive services as prime candidates for strategic sourcing, though they tailor how they acquire these services based on complexity and availability. Given the trend of increased federal government spending on services and today’s constrained fiscal environment, this Committee asked that we identify practices used by large commercial organizations in purchasing services. We reported on the results of this review in April 2013. Like the federal government, leading companies have experienced growth in spending on services, and over the last 5 to 7 years, have been examining ways to better manage them. Officials from seven leading companies GAO spoke with reported saving 4 to 15 percent over prior year spending through strategically sourcing the full range of services they buy, including services very similar to what the federal government buys: facilities management, engineering, and information technology, for example. Leading company practices suggest that it is critical to analyze all procurement spending with equal rigor and with no categories that are off limits. Achieving savings can require a departure from the status quo. Companies’ keen analysis of spending, coupled with central management and knowledge sharing about the services they buy, is key to their savings. Their analysis of spending patterns can be described as comprising two essential variables: the complexity of the service and the number of suppliers for that service. Knowing these variables for any given service, companies tailor their tactics to fit the situation; they do not treat all services the same. In our 2013 report, we highlighted quotes from company officials that illuminate what their approach to increasing procurement efficiency means to them (see table 1). Leading companies generally agreed that the following foundational principles are all important to achieving successful services acquisition outcomes: maintaining spend visibility, centralizing procurement, developing category strategies, focusing on total cost of ownership, and regularly reviewing strategies and tactics. Taken together, these principles enable companies to better identify and share information on spending and increase market knowledge about suppliers to gain situational awareness of their procurement environment. This awareness positions companies to make more informed contracting decisions. For example, in addition to leveraging knowledge about spending, leading companies centralize procurement decisions by aligning, prioritizing, and integrating procurement functions within the organization. The companies we spoke with overcame the challenge of having a decentralized approach to purchasing services, which had made it difficult to share knowledge internally or use consistent procurement tactics.
Without a centralized procurement process, officials told us, companies ran the risk that different parts of the organization could be unwittingly buying the same item or service, thereby missing an opportunity to share knowledge of procurement tactics proven to reduce costs. Company officials noted that centralizing procurement does not necessarily refer to centralizing procurement activity, but to centralizing procurement knowledge. This is important because there is a perception in the federal community that strategic sourcing requires the creation of a large, monolithic buying organization. Companies also develop category-specific procurement strategies with stakeholder buy-in in order to use the most effective sourcing strategies for each category. Category-specific procurement strategies describe the most cost-effective sourcing vehicles and supplier selection criteria to be used for each category of service, depending on factors such as current and projected requirements, volume, cyclicality of demand, risk, the services that the market is able to provide, supplier base competition trends, the company’s relative buying power, and market price trends. Company officials told us that category strategies help them conduct their sourcing according to a proactive strategic plan and not just on a reactive, contract-by-contract basis. One company’s Chief Procurement Officer referred to the latter as a “three bids and a buy” mentality that can be very narrowly focused and result in missed opportunities such as not leveraging purchases across the enterprise or making decisions based only on short term requirements. Similarly, Boeing says it sometimes chooses to execute a short-term contract to buy time if market research shows a more competitive deal can be obtained later. In addition, companies focus on total cost of ownership—making a holistic purchase decision by considering factors other than price. This is also contrary to a perception that strategic sourcing can lose a focus on best value. For example, while Walmart may often award a contract to the lowest bidder, it takes other considerations into account—such as average invoice price, time spent on location, average time to complete a task, supplier diversity, and sustainability—when awarding contracts. Humana is developing internal rate cards for consulting services that would help the company evaluate contractors’ labor rates based on their skill level. Pfizer’s procurement organization monitors compliance with company processes and billing guidelines. The company considers its procurement professionals as essentially risk managers rather than contract managers because they need to consider what is best for the company and how to minimize total cost of ownership while maintaining flexibility. By following the foundational principles to improve knowledge about their procurement environment, companies are well positioned to choose procurement tactics tailored to each service. While companies emphasize the importance of observing the principles, including category strategies, they do not take a one-size-fits-all approach to individual service purchase decisions. Two factors—the degree of complexity of the service and the number of available suppliers—determine the choice of one of four general categories of procurement tactics appropriate for that service: leveraging scale, standardizing requirements, prequalifying suppliers, and understanding cost drivers. 
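A minimal sketch can make the quadrant logic concrete. The lookup below simply restates, in Python, the mapping of the two factors to the four general tactic categories described in the discussion that follows; the labels and the idea of a fixed lookup table are illustrative assumptions, since the companies we reviewed judge complexity and supplier availability qualitatively rather than through any particular tool.

```python
# Illustrative sketch only: encodes the two-factor framework described in the text,
# mapping service complexity and supplier availability to a general tactic category.
TACTICS = {
    ("commodity", "many suppliers"): "leverage scale and competition",
    ("commodity", "few suppliers"): "standardize requirements",
    ("knowledge-based", "many suppliers"): "prequalify and prioritize suppliers",
    ("knowledge-based", "few suppliers"): "understand and negotiate cost drivers",
}

def tactic_for(service_complexity: str, supplier_base: str) -> str:
    """Return the general tactic category for a service, per the quadrant logic."""
    return TACTICS[(service_complexity, supplier_base)]

# Examples using services named in the text.
print(tactic_for("commodity", "many suppliers"))       # e.g., facilities maintenance
print(tactic_for("knowledge-based", "few suppliers"))  # e.g., research and development
```

A real category strategy would, of course, also weigh the other factors named above, such as volume, cyclicality of demand, risk, and market price trends, rather than rely on a simple two-way lookup.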
Figure 1 below shows how the two factors help companies categorize different services and select appropriate tactics. For commodity services with many suppliers, such as administrative support, facilities maintenance, and housekeeping, companies generally focus on leveraging scale and competition to lower cost. Typical tactics applicable to this quadrant of services include consolidating purchases across the organization; using fixed price contracts; developing procurement catalogs with pre-negotiated prices for some services; and varying bidding parameters such as volume and scale in order to find new ways to reduce costs. For commodity services with few suppliers, such as specialized logistics and utilities, companies focus on standardizing requirements. Typical tactics applicable to this quadrant of services include paring back requirements in order to bring them more in line with standard industry offerings, and developing new suppliers to maintain a competitive industrial base. For example, Walmart holds pre-bid conferences with suppliers such as those supplying store security for “Black Friday”—the major shopping event on the day after Thanksgiving—to discuss requirements and what suppliers can provide. Delphi makes an effort to maintain a competitive industrial base by dual-sourcing certain services in order to minimize future risk—a cost trade-off. For knowledge-based services with many suppliers, such as information technology, legal, and financial services, companies prequalify and prioritize suppliers to highlight the most competent and reasonable suppliers. Typical tactics applicable to this quadrant of services include prequalifying suppliers by skill level and labor hour rates; and tracking supplier performance over time in order to inform companies’ prioritization of suppliers based on efficiency. For example, Pfizer Legal Alliance was created to channel the majority of legal services to pre-selected firms. Delphi only awards contracts to companies on their Category Approved Supplier List. The list is approved by Delphi leadership and is reviewed annually. For knowledge-based services with few suppliers, such as engineering and management support and research and development services, companies aim to maximize value by better understanding and negotiating individual components that drive cost. Typical tactics applicable to this quadrant of services include negotiating better rates on the cost drivers for a given service; closely monitoring supplier performance against pre-defined standards; benchmarking supplier rates against industry averages in order to identify excess costs; and improving collaboration with suppliers (see table 2). Companies we reviewed are not content to remain limited by their environment; over the long term, they generally seek to reduce the complexity of requirements and bring additional suppliers into the mix in order to commoditize services and leverage competition. This dynamic, strategic approach has helped companies demonstrate annual, sustained savings. Companies generally aim to commoditize services over the long term as much as possible because, according to them, the level of complexity directly correlates with cost. Companies also aim to increase competition, whether by developing new suppliers or reducing requirements complexity, which could allow more suppliers to compete. In doing so, companies can leverage scale and competition to lower costs. OMB and other agencies have recently taken actions to expand the use of strategic sourcing. 
In September 2012, GAO recommended that the Secretary of Defense, the Secretary of Veterans Affairs, and the Director of OMB take a series of detailed steps to improve strategic sourcing efforts. More specifically, we recommended that DOD evaluate the need for additional guidance, resources, and strategies, and focus on DOD’s highest spending categories; VA evaluate strategic sourcing opportunities, set goals, and establish metrics; and OMB issue updated government-wide guidance on calculating savings, establish metrics to measure progress toward goals, and identify spending categories most suitable for strategic sourcing. In commenting on the September 2012 report, DOD, VA, and OMB concurred with the recommendations and stated they would take action to adopt them. We reported in April 2013 that DOD and VA had not fully adopted a strategic sourcing approach but had actions under way. For example, at that time, DOD had developed a more comprehensive list of the department’s strategic sourcing efforts, was creating additional guidance that includes a process for regular review of proposed strategic sourcing initiatives, noted a more focused targeting of top procurement spending categories for supplies, equipment, and services, and was assessing the need for additional resources to support strategic sourcing efforts. VA reported that it had taken steps to better measure spending through strategic sourcing contracts and was in the process of reviewing business cases for new strategic sourcing initiatives and adding resources to increase strategic sourcing efforts. In 2012, OMB released a Cross-Agency Priority Goal Statement, which called for agencies to strategically source at least two new products or services in both 2013 and 2014 that yield at least 10 percent savings. At least one of these new initiatives is to target information technology commodities or services. In December 2012, OMB further directed certain agencies to reinforce senior leadership commitment by designating an official responsible for coordinating the agency’s strategic sourcing activities. In addition, OMB identified agencies that should take a leadership role on strategic sourcing. OMB called upon these agencies to lead government-wide strategic sourcing efforts by taking steps such as recommending management strategies for specific goods and services to ensure that the federal government receives the most favorable offer possible. OMB directed these agencies to promote strategic sourcing practices inside their agencies by taking actions including collecting data on procurement spending. In closing, current fiscal pressures and budgetary constraints have heightened the need for agencies to take full advantage of strategic sourcing. These practices drive efficiencies and yield benefits beyond savings, such as increased business knowledge and better supplier management. Government-wide strategic sourcing efforts have been initiated, and federal agencies have improved and expanded upon their use of strategic sourcing to achieve cost savings and other benefits. However, little progress has been made over the past decade and much more needs to be done to better incorporate strategic sourcing leading practices, increase the amount of spending through strategic sourcing, and direct more efforts at high spend categories, such as services. Companies have shown that it is possible to save money by strategically managing services.
They have done so not just by consolidating purchases of simple, commodity-like services; they have devised strategies and tactics to manage sophisticated services. Companies have also shown that savings come over a wide base and that results can be achieved with leadership, shared data, and a focus on strategic categories that is dynamic rather than static. Strategic sourcing efforts to date have targeted a small fraction of federal procurement spending. As budgets decline, however, it is important that the cost culture in federal agencies change. Adopting leading practices can enable agencies to provide more for the same budget. Chairman Carper, Ranking Member Coburn, and Members of the Committee, this concludes my statement. I would be pleased to answer any questions at this time. For future questions about this statement, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include W. William Russell, Assistant Director; Peter Anderson; Leigh Ann Haydon; John Krump; Roxanna Sun; Molly Traci; Ann Marie Udale; Alyssa Weir; and Rebecca Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has reported that the government is not fully leveraging its aggregate buying power. Strategic sourcing, a process that moves an organization away from numerous individual procurements to a broader aggregate approach, has allowed leading companies to achieve savings of 10 percent or more. A savings rate of 10 percent of total federal procurement spending would represent more than $50 billion annually. While strategic sourcing makes good sense and holds the potential to achieve significant savings, federal agencies have been slow to embrace it, even in a time of great fiscal pressure. This statement highlights GAO's recent findings related to the use of strategic sourcing across government, best practices leading companies are adopting to increase savings when acquiring services, and recent actions that could facilitate greater use of strategic sourcing. GAO's testimony is based largely on GAO's September 2012 report on strategic sourcing and GAO's April 2013 report on leading practices for acquiring services, as well as other GAO reports on contracting and acquisition. Most of the agencies GAO reviewed for its September 2012 report leveraged a fraction of their buying power. More specifically, in fiscal year 2011, the Departments of Defense (DOD), Homeland Security, Energy, and Veterans Affairs (VA) accounted for 80 percent of the $537 billion in federal procurement spending, but reported managing about 5 percent of that spending, or $25.8 billion, through strategic sourcing efforts. Similarly, GAO found that the Federal Strategic Sourcing Initiative had only managed a small amount of spending through its four government-wide strategic sourcing initiatives in fiscal year 2011, although it reported achieving significant savings on those efforts. Further, we found that most selected agencies' efforts did not address their highest spending areas, such as services, which may provide opportunities for significant savings. Companies' keen analysis of spending is key to their savings, coupled with central management and knowledge sharing about the services they buy. Their analysis of spending patterns comprises two essential variables: the complexity of the service and the number of suppliers for that service. Knowing these variables for any given service, companies tailor their tactics to fit the situation, and do not treat all services the same. Leading companies generally agreed that foundational principles--maintaining spend visibility, centralizing procurement, developing category strategies, focusing on total cost of ownership, and regularly reviewing strategies and tactics--are all important to achieving successful services acquisition outcomes. Taken together, these principles enable companies to better identify and share information on spending and increase market knowledge about suppliers to gain situational awareness of their procurement environment and make more informed contracting decisions. Like the federal government, leading companies have experienced growth in spending on services, and over the last 5 to 7 years have been examining ways to better manage spending. Officials from seven leading companies GAO spoke with reported saving 4 to 15 percent over prior year spending through strategically sourcing the full range of services they buy, including those very similar to what the federal government buys--for example, facilities management, engineering, and information technology. Agencies have not fully adopted a strategic sourcing approach but some have actions under way. 
For example, in April 2013, DOD was assessing the need for additional resources to support strategic sourcing efforts, and noted a more focused targeting of top procurement spending categories for supplies, equipment, and services. VA reported that it had taken steps to better measure spending through strategic sourcing contracts and was in the process of reviewing business cases for new strategic sourcing initiatives. In 2012, the Office of Management and Budget (OMB) released a Cross-Agency Priority Goal Statement, which called for agencies to strategically source at least two new products or services in both 2013 and 2014 that yield at least 10 percent savings. In December 2012, OMB further directed agencies to reinforce senior leadership commitment by designating an official responsible for coordinating the agency's strategic sourcing activities. In addition, OMB identified agencies that should take a leadership role on strategic sourcing. OMB directed these agencies to promote strategic sourcing practices inside their agencies by taking actions including collecting data on procurement spending. GAO is not making any new recommendations in this testimony. GAO has made recommendations to OMB, DOD, VA, and other agencies on key aspects of strategic sourcing and acquisition of products and services in the past. These recommendations addressed such matters as setting goals and establishing metrics. OMB and the agencies concurred with the recommendations, and are in the process of implementing them.
For more than 50 years, IRS has collected estimated taxes on income not subject to withholding, which is known as nonwage income. Nonwage income generally includes income from pensions, interest, self-employment, capital gains, dividends, and partnerships. In 1994, these six categories accounted for 91 percent of all nonwage income. Nonwage income has represented a growing proportion of total U.S. income in recent years, increasing from 16.7 percent to 23.3 percent between 1970 and 1994. The percentage of tax returns reporting only nonwage income has likewise increased from 10 percent to 14 percent over the same period. The ES process requires taxpayers to make four payments to IRS at regular intervals during the tax year. At the beginning of the tax year, taxpayers are to estimate their annual income, determine their potential tax liability, and compute their ES payments. For tax years beginning on January 1, ES payments are due on April 15, June 15, September 15, and January 15 of the following year. If taxpayers underpay their ES payments, they may be liable for ES penalties. Taxpayers who underpay can choose either to have IRS calculate any penalty they owe or to self-assess their own penalty using a Form 2210. Taxpayers using the form to self-assess their ES penalties can use either the short or regular method to calculate their ES penalty amounts. The ES penalty equals the interest on the underpaid amount for the number of days it is outstanding. The interest rate used to compute the penalty is based on the federal short-term interest rate, which is subject to change the first day of each quarter, that is, January 1, April 1, July 1, and October 1. IRS updates the Form 2210 annually to account for changes in the federal short-term interest rate. (See app. I for a copy of a Form 2210.) In 1994, about 4 million taxpayers self-assessed their ES penalties. We could not determine how many of them used the short or regular method because IRS does not collect the data necessary to make that determination. However, to provide a rough approximation of how many taxpayers used the regular method, we examined 100 tax returns from the 1994 Statistics of Income database—the latest information available. The 100 returns were selected to represent taxpayers who either paid ES penalties or were otherwise affected by the ES process. Of the 75 taxpayers who self-assessed their ES penalties, we found that over half used the regular method. Our objectives were to identify causes for the complexity in the ES penalty process and to determine whether changes could be made to simplify the process for taxpayers who use the regular method to calculate their ES penalties. To achieve our objectives, we reviewed IRS reports on the estimated tax process; analyzed the instructions for preparing the Form 2210 and discussed the ES process with IRS officials; and reviewed a random sample of 100 taxpayer returns subject to the ES process to obtain a rough approximation of how many taxpayers used the regular method. The sample, which was too small to reliably project to the universe, was extracted from IRS’ tax year 1994 Statistics of Income database. We did our audit work in Washington, D.C., and San Francisco between September 1997 and March 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue. The comments we received are in appendix II and are evaluated at the end of this letter. 
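Because the ES penalty described above is simply interest on the underpaid amount for the number of days it is outstanding, the basic computation can be sketched as follows. This is a minimal illustration, not IRS’ worksheet: the function name and sample figures are hypothetical, and the sketch assumes a single interest rate in effect for the entire period and a 365-day year.

```python
def es_penalty(underpayment, annual_rate, days_outstanding, days_in_year=365):
    """Interest-style ES penalty: underpayment x annual rate x (days outstanding / days in year).
    Assumes one rate applies for the whole period; a rate change requires separate segments."""
    return underpayment * annual_rate * days_outstanding / days_in_year

# Hypothetical example: $10,000 underpaid for 91 days at an assumed 8 percent annual rate.
print(round(es_penalty(10_000, 0.08, 91), 2))  # 199.45
```

When the interest rate changes while an underpayment remains outstanding, the same formula is applied separately to each segment, which is the source of the additional calculations discussed in the sections that follow.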
The Form 2210 includes three requirements that complicate the ES penalty process and result in additional calculations that need to be made by taxpayers who use the regular method. Simplifying the underpayment schedule, changing the effective dates of the ES penalty interest rates, and using a 365-day year for all ES penalty calculations would make the form easier to understand and reduce the number of ES penalty calculations taxpayers have to make. The changes to the underpayment schedule and the ES penalty calculations could be made administratively by IRS, while changing the effective dates of the ES penalty interest rates would require legislative action. These changes would have either little or no effect on ES penalty amounts. The Form 2210 underpayment schedule is used to determine taxpayers’ underpayments in each of the four ES payment periods. The underpayment calculation first requires taxpayers to apportion the required annual payment amount—the amount of taxes that should have been paid during the year—to each of the four ES payment periods. The apportioned amounts are the required ES installments—the taxes that should have been paid in each of the ES payment periods. Conceptually, the basis for the underpayment calculation is relatively straightforward. To determine the underpayment amount, taxpayers essentially compare the ES installment with the ES payments made and any withholding during the payment period. When the installment amount is greater than the combined amounts of ES payments and withholding, taxpayers have an underpayment and are liable for ES penalties. However, in actual practice, taxpayers must follow a more complicated procedure to calculate their underpayments. This procedure entails additional calculations to account for underpayment balances that may remain from previous payment periods. These additional calculations are necessary to comply with the definition of the term “underpayment” in section 6654(b)(1) of the Internal Revenue Code. The definition precludes existing underpayment balances from being used in underpayment calculations for succeeding ES payment periods. To illustrate the complexity of this procedure, a completed Form 2210 for a hypothetical taxpayer is shown in figure III.1 in appendix III. The underpayment schedule would be simplified by changing the form so as to allow taxpayers to carry forward underpayment balances to succeeding ES payment periods. To calculate their underpayments using this simplified approach, taxpayers would subtract the combined amount of their ES payment and withholding from the total of their current ES installment and their underpayment balance from the previous payment period. This change would reduce the number of calculations taxpayers have to make on the underpayment schedule but would, in some cases, make the computed underpayment amounts different. Figure III.2 in appendix III shows our revised underpayment schedule for a hypothetical taxpayer. To avoid a difference in the ES penalty amounts calculated using the different underpayment amounts, a corresponding change to the ES penalty underpayment period would be needed. The change would further simplify the process for taxpayers by eliminating additional ES penalty calculations, while ensuring that ES penalty amounts are not affected. Appendix IV shows a comparison of ES penalty calculations required under the current and simplified approaches. 
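As a rough illustration of the carry-forward approach, the sketch below computes each period’s underpayment balance as the prior balance plus the required installment, less the estimated payments and withholding credited to the period. It is a simplified sketch rather than the revised form itself: the figures echo the hypothetical taxpayer in appendix III, except that the mid-period July 30 payment is ignored to keep the example short, and a negative balance would simply represent an overpayment carried forward.

```python
def carryforward_balances(installments, payments, withholding):
    """Simplified underpayment schedule: carry each period's balance forward.
    A negative balance is treated as an overpayment carried to the next period
    (a simplifying assumption for this sketch)."""
    balance = 0.0
    balances = []
    for installment, paid, withheld in zip(installments, payments, withholding):
        balance += installment - (paid + withheld)
        balances.append(balance)
    return balances

# Required installments of $5,000 per period; payments of $3,000, $1,000,
# $4,500, and $9,500 credited to the four due dates; no withholding.
print(carryforward_balances([5000] * 4, [3000, 1000, 4500, 9500], [0] * 4))
# [2000.0, 6000.0, 6500.0, 2000.0]
```

Under the current form, by contrast, each period’s underpayment is tracked individually, which is what produces the additional calculations compared in appendix IV.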
Although simplifying the Form 2210 underpayment schedule and changing the underpayment period in the ES penalty calculation schedule would reduce the number of calculations taxpayers need to make, it would require that taxpayers and tax preparers adjust to a revised Form 2210, which has not changed since 1986. However, we believe the recurring advantages of simplifying the form outweigh this one-time adjustment. To determine their ES penalties, taxpayers must calculate the interest on the underpayment amount for the number of days it was outstanding, that is, the number of days between when the taxpayer should have made the ES payment and the earlier of (1) when the payment was actually made, or (2) the 15th day of the 4th month following the close of the taxable year (April 15 for a taxpayer using a calendar-year basis). The interest rate used in the calculation is based on the federal short-term interest rate, which is subject to change on the first day of each quarter. Under the current approach, if interest rates change while an underpayment is outstanding, taxpayers are required to make separate calculations for the periods before and after the interest rate change. Typically, in such instances, the separate calculations are necessary to cover only 15-day periods because the applicable ES payment dates occur either 15 days immediately before or after the effective dates of interest rate changes. For example, the July 1 interest rate change occurs 15 days after the June 15 payment date. This use of different dates for ES payment dates and interest rate effective dates has the effect of increasing the number of penalty calculations taxpayers must make. Table 1 illustrates the calculations that would have to be made by a hypothetical taxpayer with underpayments in three ES payment periods. To compute the ES penalty for the June 15 underpayment, the taxpayer would need to make two calculations, numbers 1 and 2, because the April 1 interest rate was in effect during the first period the underpayment was outstanding and the July 1 interest rate was in effect for the second period the underpayment was outstanding. Similarly, three calculations, numbers 3, 4, and 5 would be necessary to compute the taxpayer’s ES penalty for the September 15 underpayment because three different interest rates were in effect during the period the underpayment was outstanding. Unlike the ES penalty calculations previously discussed, penalty calculation number 6 is different. For that penalty, only one calculation at the January 1 interest rate would be required for the entire period the underpayment was outstanding because of a special rule in section 6621(b)(2)(B) of the Internal Revenue Code. The special rule, in essence, changes the effective date of the April 1 interest rate change to April 15, thereby allowing IRS to use the January 1 interest rate for the 15-day period between April 1 and April 15. Expansion of the special rule to cover all ES payment dates and interest rate dates would reduce the number of calculations taxpayers would have to make. Specifically, aligning the July 1, October 1, and January 1 interest rate effective dates with the June 15, September 15, and January 15 ES payment dates, respectively, would eliminate the 15-day ES penalty calculations. Instead, taxpayers would make only one calculation at the interest rate, which becomes effective at the end of the affected 15-day period. To illustrate the effect of the change, we used the hypothetical taxpayer case shown in table 1. 
Under an expanded special rule, the taxpayer would be required to make only three calculations rather than six, as shown in table 2. Expanding the special rule would have little effect on ES penalty amounts, either increasing or decreasing them for the affected 15-day periods, depending on the direction of the interest rate change. For example, if the interest rates increased on July 1, the rate used to calculate ES penalties for June 15 underpayments would be the higher July 1 rate rather than the lower rate effective before July 1. Conversely, if the interest rates decreased on July 1, ES penalties would be based on the lower July 1 rate. Expressed in terms of the ES underpayment, a 1-percent change for the 15-day period increases or decreases an ES penalty by 0.04 percent of the underpayment amount, or $4.11 on a $10,000 underpayment. We analyzed the effect of expanding the special rule on actual ES penalty amounts for 32 taxpayers in our sample of 100 taxpayers subject to the ES process who used the regular method in 1994 and who had submitted Form 2210s with their tax returns. A comparison of the ES penalty amounts computed using the expanded special rule with the actual ES penalties showed that changes were relatively small. In 15 cases, the ES penalties did not change. In the 17 other cases, the changes ranged from an increase of $1 on a $49 penalty to an increase of $163 on a $8,573 penalty. In 24 cases, the use of the expanded rule reduced the number of penalty calculations taxpayers needed to make. In 1 case, the number of penalty calculations declined from 11 to 4. (See app. VI for more details on the comparison and the ES penalty calculation schedules required on the current and revised Form 2210 for a sample case.) The comparison reflects 1994 interest rates, which increased from 7 to 9 percent during the year. If interest rates had decreased, the ES penalty amounts would have similarly decreased. Over the 10-year period 1987 through 1996, changes in the interest rate used to compute ES penalties varied, increasing in 2 years, decreasing in 2 years, both increasing and decreasing in 4 years, and not changing in 2 years. (See app. V for a summary of interest rate changes over the 10-year period.) Under current IRS procedures, taxpayers who have outstanding underpayment balances that extend through the end of a leap year must make separate calculations to account for the change from a 366- to a 365-day year regardless of whether there is an interest rate change on January 1. For example, if a taxpayer has an underpayment on September 15 that extends through January 15, the taxpayer would have to make two calculations to compute the ES penalty. For the first calculation, which would cover the period September 15 through December 31, the taxpayer would need to use a 366-day year in the formula. For the second calculation, which would cover the period January 1 through January 15, the taxpayer would need to use a 365-day year in the formula. In years preceding a leap year, taxpayers would have to use a similar calculation method to account for the change from a 365- to a 366-day year. The current process could be simplified by allowing taxpayers to use a 365-day year for all ES penalty calculations, regardless of the year. If there were no interest rate change on January 1, this change would eliminate an extra calculation for taxpayers with outstanding underpayment balances extending either through the end of a leap year or the end of a year preceding a leap year. 
This change would increase ES penalty amounts by 0.3 percent during the period affected. For example, the ES penalty computed using the current method would be $2,557 on a $40,000 underpayment outstanding for 260 days at 9 percent. Using a 365-day year in the same calculation results in an ES penalty of $2,564. The dollar amount of the increase would vary depending on the underpayment amount and how long it was outstanding. Modifying the ES penalty calculation would require taxpayers and tax preparers to adjust to a revised Form 2210, which has not been changed since 1986. Making the adjustment would not be difficult because it would occur one time and would affect only the calculation of the ES penalty. The rules governing application of the ES penalty process would not be affected by the change. The effect on IRS would also be minimal, since it currently revises the Form 2210 annually to account for interest rate changes. The benefits of modifying the ES penalty calculation process would be recurring for taxpayers who use the regular method. Revising the underpayment schedule would make it easier to understand and would reduce the number of calculations taxpayers have to make on the schedule. Similarly, adjusting the effective dates for interest rates used to compute ES penalties and using a 365-day year to calculate all ES penalties could further reduce the number of calculations taxpayers would have to make. To help ensure compliance with the Internal Revenue Code and IRS administrative requirements, the Form 2210 requires taxpayers to perform numerous calculations to track individual underpayment amounts and to determine precise ES penalty amounts. In three instances, the additional calculations did not seem to be justified because they resulted in either little or no change in penalty amounts. Administrative changes and legislative action would be needed to reduce the number of calculations required on the Form 2210 and make it easier for taxpayers to complete. Although the changes would require taxpayers and tax preparers to adjust to using a new form, we believe the recurring advantages of the change would outweigh that one-time adjustment. To simplify the ES penalty process for taxpayers, we recommend that the Commissioner of Internal Revenue revise the Form 2210 underpayment schedule to allow taxpayers to track the accumulated underpayment amount rather than individual underpayment amounts and revise the Form 2210 ES penalty calculation schedule to allow taxpayers to use a 365-day year in all ES penalty calculations. To further simplify the ES penalty process for taxpayers, we recommend that Congress amend section 6621(b)(2)(B) of the Internal Revenue Code to include the periods June 15 through June 30, September 15 through September 30, and January 1 through January 15. We obtained written comments on a draft of this report from the Commissioner of Internal Revenue (see app. II). The Commissioner said that he generally agreed with all of our recommendations. The Commissioner commented, however, that IRS would not consider revising the Form 2210 underpayment schedule before legislative action is taken to amend section 6621(b)(2)(B) and expand the ES special rule to the new periods. IRS believes that expanding the special rule would provide the greatest relief to taxpayers and that revising the Form 2210 without incorporating that change would not be justified by the lesser benefits derived from revising the underpayment schedule.
We concur with IRS’ position and believe that all changes should be made at the same time to minimize the adjustment required by taxpayers and tax preparers to the revised Form 2210. We are sending copies of this report to the Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties. We will make copies available to others upon request. The major contributors to this report are listed in appendix VII. If you have any questions, please contact me on (202) 512-9110. In figures III.1 and III.2, we compare section A, lines 21-29, of the current Form 2210 Underpayment Schedule with our revised section A for a hypothetical taxpayer. The hypothetical taxpayer was required to pay $5,000 in each ES payment period and made ES payments of $3,000 on April 15, $1,000 on June 15, $2,000 on July 30, $4,500 on September 15, and $9,500 on January 15. In tables IV.1 and IV.2, we compare the ES penalty calculations necessary using the instructions on the current Form 2210 with the calculations that would be necessary if the instructions were revised in accordance with the revisions made to simplify the form’s underpayment schedule. Eight calculations are necessary under the current instructions to determine the total ES penalty. Three calculations for the $2,000 underpayment are needed to account for (1) the $1,000 payment on June 15, (2) the interest rate change on July 1, and (3) the $2,000 payment on July 30. Three calculations for the $5,000 underpayment are needed to account for (1) the interest rate change on July 1, (2) the $1,000 of the July 30 payment applied to this underpayment, and (3) the $4,500 payment on September 15. Two calculations for the $4,500 underpayment are needed to account for (1) the interest rate change on October 1, and (2) the $9,500 payment on January 15. The revised instructions affect underpayments outstanding through more than one ES payment period, and the difference involves the period used to calculate the ES penalty. The difference is illustrated in tables IV.1 and IV.2 by the calculations used to compute the ES penalty for the $2,000 underpayment. In table IV.1, the $2,000 underpayment is outstanding from April 15, the ES payment due date, to July 30, when the underpayment balance was paid. Under the revised instructions, the underpayment is outstanding from April 15 to June 15, the next ES payment due date. On June 15, a $1,000 balance of that underpayment remains outstanding and is not paid until July 30. This $1,000 balance is accounted for by carrying it forward to the next ES payment period, as shown in appendix III. In figure III.2, the underpayment for June 15 is $6,000, while the underpayment for the same period in figure III.1 is $5,000. As a result, the ES penalty for the $1,000 balance is included in the ES penalty calculated for the $6,000 underpayment, shown in table IV.2. The number of calculations necessary for the June 15 and September 15 underpayments is the same under both methods because the underpayments are not outstanding through more than one ES payment period. However, calculation numbers 2 and 5 in table IV.2 could be eliminated by expanding the ES special rule (see pp. 6-9).
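The bookkeeping difference between the two schedules can be expressed as a short routine. The sketch below uses the hypothetical taxpayer’s required $5,000 installments and actual payments from appendix III (a 1994 calendar tax year is assumed) and keeps the single accumulated balance the revised section A would track; it reproduces the $2,000 April 15 and $6,000 June 15 underpayments discussed above, without computing the penalty dollar amounts in appendix IV.

```python
from datetime import date

# Hypothetical taxpayer from appendix III: required ES payments of $5,000 per
# period and the actual payments listed in the report (1994 tax year assumed).
installments = {date(1994, 4, 15): 5_000, date(1994, 6, 15): 5_000,
                date(1994, 9, 15): 5_000, date(1995, 1, 15): 5_000}
payments = {date(1994, 4, 15): 3_000, date(1994, 6, 15): 1_000,
            date(1994, 7, 30): 2_000, date(1994, 9, 15): 4_500,
            date(1995, 1, 15): 9_500}

# Revised underpayment schedule: keep one running (accumulated) balance rather
# than tracking each period's underpayment separately until it is paid.
balance = 0
for day in sorted(set(installments) | set(payments)):
    balance += installments.get(day, 0) - payments.get(day, 0)
    if day in installments:
        # The single amount the revised section A would show for the ES
        # payment period that begins on this date.
        print(f"{day:%b %d}: accumulated underpayment = ${balance:,}")
```

Running the sketch prints $2,000 for April 15, $6,000 for June 15, $4,500 for September 15, and $0 for January 15, matching the accumulated amounts the revised schedule would carry forward.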
Appendix V, Change in Interest Rate and New Interest Rate, summarizes the interest rate changes over the 10-year period 1987 through 1996; each change was an increase or decrease of 1 percentage point, and the resulting rates ranged from 7 to 12 percent. In table VI.1, we compare the ES penalty amounts computed for 32 sampled taxpayers using the current Form 2210 and the penalties computed using the expanded special rule. In tables VI.2 and VI.3, we compare the ES penalty schedule required on the current Form 2210 and on the revised Form 2210 for sample case number 13. Table VI.1: Comparison of ES Penalty Amounts Computed for 32 Sampled Taxpayers Using the Current Form 2210 and the Expanded Special Rule (columns include the percent of the Form 2210 amount). Table VI.2: ES Penalty Calculations Required for Sample Number 13 Using the Current Form 2210 (covering the rate periods April 16 through June 30, 1994, and July 1 through September 30, 1994). On the current Form 2210, 11 calculations were necessary to compute the total ES penalty because the underpayments in columns (b) and (c) were outstanding through January 15 and April 15, respectively, and interest rates changed twice during that period. On the revised Form 2210, only four calculations would be necessary to compute the ES penalty because, rather than tracking each underpayment until it is paid, the ES penalty is calculated for the underpayment at the end of each ES payment period. Any remaining underpayment balance is carried forward, and the ES penalty for the accumulated balance would be calculated at the end of the next ES payment period. Major contributors to this report: Ralph Block, Assistant Director; John Zugar, Senior Evaluator; Susan Mak, Evaluator; and Gerhard Brostrom, Communications Analyst.
Pursuant to a legislative requirement, GAO reviewed the: (1) Internal Revenue Code and the Internal Revenue Service (IRS) administrative requirements that cause some of the complexity associated with estimated tax (ES) penalty calculations; and (2) likely effects of corresponding changes to the requirements that would make it easier for taxpayers to calculate their ES penalties. GAO noted that: (1) to help ensure compliance with the Internal Revenue Code and IRS administrative requirements, form 2210 requires numerous calculations to track individual ES underpayments and to determine precise ES penalty amounts; (2) GAO identified three requirements where the additional calculations did not seem to be justified because they resulted in either little or no effect on ES penalty amounts; (3) the form 2210 underpayment schedule, which currently requires that taxpayers track each underpayment individually, results in a complicated procedure, involving numerous calculations, to comply with the definition of underpayment in the Code; (4) GAO found that if taxpayers were allowed to track the accumulated underpayment amounts rather than individual amounts and if a corresponding change were made to the ES penalty underpayment period, taxpayers could reduce the number of calculations without affecting ES penalty amounts; (5) taxpayers currently have to make additional ES penalty calculations to account for three of the four 15-day periods between ES interest rate effective dates and ES payment dates; (6) if interest rates change, this requirement increases the number of calculations taxpayers must make but only increases or decreases the penalties by small amounts; (7) in 1986, Congress eliminated this requirement for the 15-day period between April 1 and April 15 by aligning the interest rate effective date with the ES payment date; (8) similar alignments for the other three 15-day periods during the year would eliminate the calculations taxpayers must make for the 15-day periods and have little effect on ES penalty amounts; (9) to account for leap years, taxpayers currently have to make additional ES penalty calculations when underpayment balances extend either through the end of the leap year or the end of the year preceding a leap year; and (10) GAO found that, if taxpayers were allowed to use a 365-day year in all ES penalty calculations, they could eliminate the additional calculations and the penalties for the periods affected would increase by a very small amount--only 0.3 percent.
The Coast Guard is the lead federal agency for maritime security within DHS. The Coast Guard is responsible for a variety of missions, including ensuring ports, waterways, and coastline security; conducting search and rescue missions; interdicting illicit drug shipments and illegal aliens; and enforcing fisheries laws. In 1996, in order to continue carrying out its responsibilities and operations, the Coast Guard initiated the Deepwater program to replace or upgrade its aging vessels, aircraft, and other essential equipment. As originally conceived, Deepwater was designed around producing aircraft and vessels that would function in the Coast Guard’s traditional at-sea roles—such as interdicting illicit drug shipments or rescuing mariners from difficulty at sea—and the original 2002 Deepwater program was focused on those traditional missions. After the terrorist attacks on September 11, 2001, the Coast Guard was also assigned homeland security missions related to protection of ports, waterways, and coastal areas. Based on its revised mission responsibilities, the Coast Guard updated its Deepwater Acquisition Program Baseline in November 2005. The new baseline contained changes in the balance between new assets to be acquired and legacy assets to be upgraded and adjusted the delivery schedule and costs for many of these assets. Overall, the Deepwater acquisition schedule was lengthened by 5 years, with the final assets now scheduled for delivery in 2027. Upon its completion, the Deepwater program is to consist of 5 new classes of vessels, 1 new class of fixed-wing aircraft, 1 new class of unmanned aerial vehicles, 2 classes of upgraded helicopters, and 1 class of upgraded fixed-wing aircraft. The 215 new vessels consist of five new asset classes—the National Security Cutter (NSC), Offshore Patrol Cutter (OPC), Fast Response Cutter (FRC), Long-Range Interceptor (LRI), and Short-Range Prosecutor (SRP). The 240 aircraft are composed of two new aircraft classes, the Vertical Unmanned Aerial Vehicle (VUAV) and the Maritime Patrol Aircraft (MPA); and three upgraded asset classes—the Long-Range Surveillance Aircraft (LRS), Medium-Range Recovery Helicopter (MRR), and the Multi-Mission Cutter Helicopter (MCH). Table 1 provides an overview, by asset class, of the Deepwater vessels to be acquired and table 2 provides an overview of the Deepwater aircraft to be acquired or upgraded. As noted in table 1, the 140-foot FRC was designated as a replacement vessel for the 110-foot and 123-foot patrol boats. Since 2001, we have reviewed the Deepwater program and have informed Congress, DHS, and the Coast Guard of the problems, risks, and uncertainties inherent in such a large acquisition that relies on a system integrator to identify the assets needed and then use tiers of subcontractors to design and build the assets. In March 2004, we made recommendations to the Coast Guard to address three broad areas of concern: improving program management, strengthening contractor accountability, and promoting cost control through greater competition among potential subcontractors (see table 3). We have issued a number of follow-on reports describing efforts the Coast Guard has taken to address these recommendations. (See app. I for a list of related GAO products.) Between January 2001 and November 2006, numerous events led up to the failure of the Coast Guard’s bridging strategy to convert the legacy 110-foot patrol boats into 123-foot patrol boats.
In January 2001, an independent study found that the 110-foot patrol boats based in south Florida and Puerto Rico were experiencing severe hull corrosion and that their structural integrity was deteriorating rapidly. To address these issues, the Coast Guard’s original (2002) Deepwater plan included a strategy to convert all 49 of the 110-foot patrol boats into 123-foot patrol boats to strengthen the hulls. Also, the plan was to provide additional capabilities, such as stern launch and recovery capabilities and improved command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR). While the Coast Guard originally planned to convert all 49 of its 110-foot patrol boats to 123-foot patrol boats, it halted the patrol boat conversion program after 8 boats because of continued hull buckling and the inability of these converted patrol boats to meet post-September 11, 2001 mission requirements. These 8 converted boats were removed from service on November 30, 2006, because of operational and safety concerns. The first patrol boat conversion, of the Matagorda, was completed in March 2004. Between March 2004 and late August 2004, the Matagorda underwent additional maintenance that was not included in the contract to convert it to 123 feet, according to Coast Guard officials. On September 10, 2004, while transiting the Gulf of Mexico en route to its home port in Key West, the Matagorda experienced hull and deck buckling. By March 2005, 2 other converted 123-foot patrol boats, the Nunivak and the Padre, also began experiencing problems with hull buckling. That same month, similar hull deformations were discovered in 3 other 123-foot patrol boats—the Metompkin, Vashon, and Monhegan. As a result of the deteriorating hull conditions, the Coast Guard imposed operational restrictions in April 2005 on the 123-foot patrol boats. These restrictions specified that the converted patrol boats could not operate in seas with wave heights exceeding 8 feet (they were originally intended to operate in seas up to roughly 13 feet) and that they had to operate at reduced speeds. Figure 1 provides a timeline of key events that led to the eventual removal from service of the 123-foot patrol boats. The Coast Guard is taking actions to mitigate the operational impacts resulting from the removal of the 123-foot patrol boats from service. Specifically, in recent testimony, the Commandant of the Coast Guard stated that the Coast Guard has taken the following actions: multi-crewing certain 110-foot patrol boats with crews from the 123-foot patrol boats that have been removed from service so that patrol hours for these vessels can be increased; deploying other Coast Guard vessels to assist in missions formerly performed by the 123-foot patrol boats; and securing permission from the U.S. Navy to continue using 179-foot cutters on loan from the Navy for an additional 5 years (these were originally to be returned to the Navy in 2008) to supplement the Coast Guard’s patrol craft. We will continue to review the actions the Coast Guard is taking to mitigate the removal from service of the 123-foot patrol boats as part of our ongoing work. Our review of available data shows that, as of January 2007, of the 10 classes of Deepwater assets to be acquired or upgraded, 4 are ahead of schedule; 3 remain on schedule (and for 1 of these, design problems have arisen); and 3 are behind their scheduled delivery dates and face design, funding, or technology challenges.
Measured against the 2005 Deepwater Acquisition Program Baseline, figure 2 indicates, for each asset class, whether delivery of the first-in-class asset (that is, the first of several to be produced in its class) is ahead of schedule, on schedule, or behind schedule, as of January 2007. Among the Deepwater assets, 3 of the 5 aircraft classes are upgrades to existing legacy systems, and these are all on or ahead of schedule; 1 new aircraft class is ahead of schedule; and the remaining new aircraft class is 6 years behind schedule. With respect to Deepwater vessels, all 5 asset classes are new, and of these, 2 are behind schedule, and a third, while on schedule, faces structural modifications. The remaining 2 new maritime assets are small vessels that are on or ahead of schedule at this time. Table 4 provides an overview of schedule status for the Deepwater aircraft and vessel classes. The status of each asset class, and our preliminary observations on the factors affecting its status, is discussed below. The LRI is a 36-foot small boat that is to be carried and deployed on each NSC and OPC. The Coast Guard has one LRI on contract for delivery in August 2007, to match delivery of the first NSC. According to the Coast Guard, the SRP is on schedule at this time, and 8 have been delivered to date. The Coast Guard is currently planning to pursue construction and delivery of the remaining SRPs outside of the system integrator contract. By doing so, the Coast Guard expects to achieve a cost savings. The MPA is a commercial aircraft produced in Spain that is being acquired to replace the legacy HU-25 aircraft and will permit the Coast Guard to carry out missions, such as search and rescue, marine environmental protection, and maritime security. The first MPA was delivered to the Coast Guard in December 2006, and the second and third are due for delivery by April 2007. Pilots and aircrew participated in training classes in Spain, and the Coast Guard is to take responsibility for the development and implementation of MPA’s maintenance and logistics. The LRS is an upgraded legacy fixed-wing aircraft class that includes 6 C-130Js and 16 C-130Hs. The first aircraft entered the modification process in January 2007, and five additional aircraft are to be modified by July 2008. For fiscal year 2008, funding has been requested to upgrade the C-130H radar and avionics and for the C-130J fleet introduction. The MRR is an upgraded legacy HH-60 helicopter. It began receiving a series of upgrades in fiscal year 2006, which will continue into fiscal year 2012, including the service life extension program and radar upgrades. The MCH is an upgraded legacy HH-65 helicopter. According to Coast Guard officials, the MCH assets will not have a single delivery date, as the process involves three phases of upgrades. Phase I is the purchase and delivery of new engines and engine control systems, Phase II is a service life extension program, and Phase III includes communications upgrades. A Coast Guard official stated that 84 of the 95 HH-65s should be re-engined by June 2007, and all 95 should be finished by October 2007. The fiscal year 2008 congressional justification states that Phase II began in fiscal year 2007 and will end in fiscal year 2014, and that Phase III is to begin in fiscal year 2008 and is to end in fiscal year 2014. According to Coast Guard documentation, the NSC is on schedule for delivery despite required modifications regarding its structural integrity.
In particular, the Coast Guard Commandant recently stated that internal reviews by Coast Guard engineers, as well as by independent analysts, have concluded that the NSC, as designed, will need structural reinforcement to meet its expected 30-year service life. In addition, the DHS Office of Inspector General recently reported that the NSC design will not achieve a 30-year service life based on an operating profile of 230 days underway per year in general Atlantic and North Pacific sea conditions and added that Coast Guard technical experts believe the NSC’s design deficiencies will lead to increased maintenance costs and reduced service life. To address the structural modifications of the NSC, the Coast Guard is taking a two-pronged approach. First, the Coast Guard is working with contractors to enhance the structural integrity of the hulls of the remaining six NSCs that have not yet been constructed. Second, after determining that the NSC’s deficiencies are not related to the safe operation of the vessel in the near term, the Coast Guard has decided to address the structural modifications of the hulls of the first two cutters as part of planned depot-level maintenance about 5 years after they are delivered. The Commandant stated that he decided to delay the repairs to these hulls to prevent further delays in construction and delivery. Coast Guard officials have stated that further work on the development of the OPC is on hold, and the Coast Guard did not request funding for the OPC in fiscal years 2007 or 2008. Delivery of the first OPC has been delayed by 5 years—from 2010 to 2015. Concerns about the viability of the design of the FRC have delayed the delivery of the first FRC by at least 2 years. As we have previously reported, design and delivery of the original FRC was accelerated as a bridging strategy to offset the failed conversion of the 110-foot patrol boats into 123-foot patrol boats. According to the 2005 Deepwater Acquisition Program Baseline, the first FRC was scheduled to be delivered in 2007—11 years earlier than the 2018 date listed in the original (2002) Deepwater plan. However, the Coast Guard suspended design work on the FRC in late February 2006 because of design risks, including excessive weight and horsepower requirements. As a result, the Coast Guard is moving forward with a “dual-path approach” for acquiring new patrol boats to replace its existing 110-foot and 123-foot patrol boats. The first component of this dual-path approach is to have the Deepwater system integrator purchase a commercial (off-the-shelf) patrol boat design that can be adapted for Coast Guard use. According to Coast Guard officials, unlike the original plans, this FRC class is not expected to meet all performance requirements originally specified, but it is intended as a way to field an FRC more quickly than would otherwise occur so that it can serve as an interim replacement for the deteriorating fleet of 110-foot patrol boats. The Coast Guard Commandant recently stated that the Coast Guard expects delivery of the commercial FRCs in the first half of fiscal year 2010, about 2 years behind the estimated delivery date specified in the 2005 Deepwater Acquisition Program Baseline. The second component of the dual-path approach is to eventually acquire another cutter—a redesigned FRC. However, due to continuing questions about the feasibility of its planned composite hull, the Coast Guard has now further delayed a decision about its development or acquisition until it receives results from two studies.
First, the Coast Guard is conducting a business case analysis comparing the use of composite versus steel hulls. Second, the Coast Guard told us that DHS’s Science and Technology Directorate will be conducting tests on composite hull technology, and that it will wait to see the results of these tests before making a decision on the redesigned FRC. Until recently, the Coast Guard anticipated delivery of the redesigned FRC in 2009 or 2010. However, the decision not to request funding for this redesigned FRC in fiscal year 2008, and to await the results of both studies before moving forward, will likely further delay delivery of the redesigned FRC. According to the Coast Guard, evolving technological developments and the corresponding amount of funding provided in fiscal year 2006 have delayed the delivery of the VUAV by 6 years—from 2007 to 2013. As a result, the Coast Guard has adjusted the VUAV development plan. The fiscal year 2008 DHS congressional budget justification indicates that the Coast Guard does not plan to request funding for the VUAV through fiscal year 2012. The Coast Guard originally intended to match the NSC and VUAV delivery dates so that the VUAV could be launched from the NSC to provide surveillance capabilities beyond the cutter’s visual and sensor range. However, with the delay in the VUAV’s development schedule, it no longer aligns with the NSC’s initial deployment schedule. Specifically, Coast Guard officials stated that the VUAV will not be integrated with the NSC before fiscal year 2013, 6 years later than planned. Coast Guard officials stated that they are discussing how to address the operational impacts of having the NSC operate without the VUAV. In addition, Coast Guard officials explained that since the time of the original contract award, the Department of Defense has progressed in developing a different unmanned aerial vehicle—the Fire Scout—that Coast Guard officials say is more closely aligned with Coast Guard needs. The Coast Guard has issued a contract to an independent third party to compare the capabilities of its planned VUAV to the Fire Scout. Since the inception of the Deepwater program, we have expressed concerns about the risks involved with the Coast Guard’s system-of-systems acquisition approach and the Coast Guard’s ability to manage and oversee the program. Our concerns have centered on three main areas: program management, contractor accountability, and cost control through competition. We have made a number of recommendations to improve the program—most of which the Coast Guard has agreed with and is working to address. However, while actions are under way, a project of this magnitude will likely continue to experience other problems as more becomes known. We will continue our work focusing on the Coast Guard’s efforts to address our recommendations and report on our findings later this year. In 2004, we reported that the Coast Guard had not effectively implemented key components needed to manage and oversee the system integrator. Specifically, we reported at that time and subsequently on issues related to integrated product teams (IPT), the Coast Guard’s human capital strategy, and communication with field personnel (individuals responsible for operating and maintaining the assets). Our preliminary observations on the Coast Guard’s progress in improving these program management areas, based on our ongoing work, follow.
In 2004, we found that IPTs, the Coast Guard’s primary tool for managing the Deepwater program and overseeing the contractor, had not been effective due to changing membership, understaffing, insufficient training, lack of authority for decision making, and inadequate communication. We recommended the Coast Guard take actions to address IPT effectiveness. We subsequently reported that IPT decision-making was to a large extent stove-piped, and some teams lacked adequate authority to make decisions within their realm of responsibility. Coast Guard officials believed collaboration among the subcontractors was problematic and that the system integrator wielded little influence to compel decisions among them. For example, proposed design changes to assets under construction were submitted as two separate proposals from both subcontractors rather than one coherent plan. According to Coast Guard performance monitors, this approach complicated the government review of design changes because the two proposals often carried overlapping work items, thereby forcing the Coast Guard to act as the system integrator in those situations. Although some efforts have been made to improve the effectiveness of the IPTs—such as providing them with more timely charters and entry-level training—our preliminary observations are that more improvements are needed. The Coast Guard’s ability to assess IPT performance continues to be problematic. Earlier assessments of IPT effectiveness focused simply on measures such as the frequency of meetings, attendance, and training. As a result, IPTs received positive assessments while the assets under their realm of responsibility—such as the National Security Cutter—were experiencing problems. The new team measurements include outcome-based metrics such as cost and schedule performance of assets (ships, aircraft, and command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR)). However, Deepwater’s overall program management quarterly report shows that IPT performance ratings continue to be misaligned with program results. For example, the first quarterly report to incorporate the new measurements, covering the period October to December 2006, indicates that the IPTs’ performance for all domains is “on-schedule or non-problematic” even while some assets’ cost or schedule performance is rated “behind schedule or problematic.” Further, even though the Deepwater program is addressing fundamental problems surrounding the 123-foot patrol boat and FRC, IPTs no longer exist for these assets. In some cases, Coast Guard officials stated they have established work groups outside of the existing IPT structure to address identified issues and problems related to assets, such as the NSC. We will continue to review the IPTs’ roles and relevance in the management of the Deepwater program. We also reported in 2004 that the Coast Guard had not adequately staffed its program management function for Deepwater. Although its Deepwater human capital plan set a goal of a 95 percent or higher “fill rate” annually for both military and civilian personnel, staffing of funded positions fell below this goal. We recommended that the Coast Guard follow the procedures in its Deepwater human capital plan to ensure that adequate staffing was in place and that turnover of Coast Guard military personnel was proactively addressed.
The Coast Guard subsequently revised its Deepwater human capital plan in February 2005 to emphasize workforce planning, including determining needed knowledge, skills, and abilities and developing ways to leverage institutional knowledge as staff rotate out of the program. We reported in 2005 that the Coast Guard also took some short-term steps to improve Deepwater program staffing, such as hiring contractors to assist with program support functions, shifting some positions from military to civilian to mitigate turnover risk, and identifying hard-to-fill positions and developing recruitment plans specifically for them. However, in February 2007, Coast Guard officials told us that key human capital management objectives outlined in the revised plan have not been accomplished and that the staffing levels needed to accomplish the known workload have not been achieved. In one example, a manager cited the need for five additional staff per asset under his domain to satisfy the current workload in a timely manner: contracting officer’s technical representative, scheduler, cost estimator, analyst, and configuration manager. Further, a February 2007 independent analysis found that the Coast Guard does not possess a sufficient number of acquisition personnel or the right level of experience needed to manage the Deepwater program. The Coast Guard has identified an acquisition structure reorganization that includes human capital as one component of the reform. We will continue to monitor the implementation of the reorganization as part of our ongoing work. In 2004, we found that the Coast Guard had not adequately communicated to field personnel decisions on how the new and old assets were to be integrated during the transition and whether Coast Guard or system integrator personnel—or both—would be responsible for maintenance. We recommended that the Coast Guard provide timely information and training on the transition to Deepwater assets. In 2006, we reported that the Coast Guard had taken some steps to improve communications between Deepwater program and field personnel, including having field personnel as members on some IPTs. However, we continued to express concerns that field personnel were not receiving important information regarding training, maintenance, and integration of new Deepwater assets. During our ongoing work, the field personnel involved in operating and maintaining the assets and Deepwater program staff we interviewed expressed continued concern that maintenance and logistics plans had not been finalized. Another official commented that there continues to be a lack of clarity defining roles and responsibilities between the Coast Guard and system integrator for maintenance and logistics. Coast Guard officials stated in fall 2006 that the system integrator was contractually responsible for developing key documents related to plans for the maintenance and logistics for the NSC and Maritime Patrol Aircraft. However, Deepwater program officials stated that because the Coast Guard was not satisfied with the level of detail provided in early drafts of these plans, it was simultaneously developing “interim” plans that it could rely on while the system integrator continued to develop its own versions. While the Coast Guard’s more active role may help its ability to ensure adequate support for Deepwater assets that are coming on-line in the near term, our ongoing work will continue to focus on this issue.
Our 2004 review revealed that the Coast Guard had not developed quantifiable metrics to hold the system integrator accountable for its ongoing performance. For example, the process by which the Coast Guard assessed performance to make the award fee determination after the first year of the contract lacked rigor. At that time, we also found that the Coast Guard had not yet begun to measure contractor performance against Deepwater program goals—the information it would need by June 2006 to decide whether to extend the system integrator’s contract award term by up to another 5 years. Additionally, we noted that the Coast Guard needed to establish a solid baseline against which to measure progress in lowering total ownership cost—one of the three overarching goals of the Deepwater program. Furthermore, the Coast Guard had not developed criteria for potential adjustments to the baseline. Preliminary observations from our ongoing work on the Coast Guard’s efforts to improve system integrator accountability follow. In 2004 we found the first annual award fee determination was based largely on unsupported calculations. Despite documented problems in schedule, performance, cost control, and contract administration throughout the first year, the program executive officer awarded the contractor an overall rating of 87 percent, which fell in the “very good” range as reported by the Coast Guard award fee determining official. This rating resulted in an award fee of $4 million of the maximum $4.6 million. The Coast Guard continued to report design, cost, schedule, and delivery problems, and evaluation of the system integrator’s performance continued to result in award fees that ranged from 87 percent to 92 percent of the total possible award fee (with 92 percent falling into the “excellent” range), or $3.5 to $4.8 million annually, for a total of over $16 million the first 4 years on the contract. The Coast Guard continues to revise the award fee criteria under which it assesses the system integrator’s performance. The current award fee criteria demonstrate the Coast Guard’s effort to use both objective and subjective measures and to move toward clarity and specificity with the criteria being used. For example, the criteria include 24 specific milestone activities and dates to which the system integrator will be held accountable for schedule management. However, we recently observed two changes to the criteria that could affect the Coast Guard’s ability to hold the contractor accountable. First, the current award fee criteria no longer contain measures that specifically address IPTs, despite a recommendation we made in 2004 that the Coast Guard hold the system integrator accountable for IPT effectiveness. The Coast Guard had agreed with this recommendation and, as we reported in 2005, it had incorporated award fee metrics tied to the system integrator’s management of Deepwater, including administration, management commitment, collaboration, training, and empowerment of the IPTs. Second, a new criterion to assess both schedule and cost management states that the Coast Guard will not take into account milestone or cost impacts determined by the government to be factors beyond the system integrator’s control. 
However, a Coast Guard official stated that there are no formal written guidelines that define what factors are to be considered beyond the system integrator’s control, what process the Coast Guard will use to make this determination, or who is ultimately responsible for making those determinations. The Deepwater program management plan included three overarching goals for the Deepwater program: increased operational effectiveness, lower total ownership cost, and customer satisfaction. These goals are to be used for determining whether to extend the contract period of performance, known as the award term decision. We reported in 2004 that the Coast Guard had not begun to measure the system integrator’s performance in these three areas, even though the information was essential to determining whether to extend the contract after the first 5 years. We also reported that the models the Coast Guard was using to measure operational performance lacked the fidelity to capture whether improvements may be due to Coast Guard or contractor actions, and program officials noted the difficulty of holding the contractor accountable for operational effectiveness before Deepwater assets are delivered. We made a recommendation to the Coast Guard to address these issues. According to a Coast Guard official, the Coast Guard evaluated the contractor subjectively for the first award term period in May 2006, using operational effectiveness, total ownership costs, and customer satisfaction as the criteria. The result was a new award term period of 43 of a possible 60 months. To measure the system’s operational effectiveness, the Coast Guard has developed models to simulate the effect of the Deepwater assets’ capabilities on its ability to meet its missions and to measure the “presence” of those assets. However, in its assessment of the contractor, the Coast Guard assumed full operational capability of assets and communications and did not account for actual asset operating data. Furthermore, the models still lacked the fidelity to capture whether operational improvements are attributable to Coast Guard or contractor actions. As a result, the contractor received credit for factors beyond its control—although no formal process existed for approving such factors. Total ownership cost was difficult to measure; thus, the contractor was given a neutral score, according to Coast Guard officials. Finally, the contractor was rated “marginal” in customer satisfaction. The Coast Guard has modified the award term evaluation criteria to be used to determine whether to grant a further contract extension after the 43-month period ends in January 2011. The new criteria incorporate more objective measures. While the three overall Deepwater program objectives (operational effectiveness, total ownership costs, and customer satisfaction) carried a weight of 100 percent under the first award term decision, they will represent only about a third of the total weight for the second award term decision. The criteria include new operational effectiveness measures that will incorporate an evaluation of asset-level key performance parameters, such as endurance, operating range, and detection range. The new award term criteria have de-emphasized measurement of total ownership cost, concentrating instead on cost control. Program officials noted the difficulty of estimating ownership costs far into the future, while cost control can be measured objectively using actual costs and earned value data.
In 2004, we recommended that the Coast Guard establish a total ownership cost baseline that could be used to periodically measure whether the Deepwater system-of-systems acquisition approach is providing the government with increased efficiencies compared to what it would have cost without this approach. Our recommendation was consistent with the cost baseline criteria set forth in the Deepwater program management plan. The Coast Guard agreed with the recommendation at the time, but subsequently told us it does not plan to implement it. In our current work, we will explore the implication of the revised award term evaluation criteria and the Coast Guard’s ability to measure the overarching goals of the acquisition strategy. Establishing a solid baseline against which to measure progress in lowering total ownership cost is critical to holding the contractor accountable. The Coast Guard’s original plan, set forth in the Deepwater program management plan, was to establish as its baseline the dollar value of replacing assets under a traditional, asset-by-asset approach as the “upper limit for total ownership cost.” In practice, the Coast Guard decided to use the system integrator’s estimated cost of $70.97 billion plus 10 percent (in fiscal year 2002 dollars) for the system-of-systems approach as the baseline. In 2004, we recommended that the Coast Guard establish criteria to determine when the total ownership cost baseline should be adjusted and ensure that the reasons for any changes are documented. Since then, the Coast Guard established a process that would require DHS approval for adjustments to the total ownership cost baseline. The Deepwater Program Executive Officer maintains authority to approve baseline revisions at the asset or domain level. However, depending on the severity of the change, these changes are also subject to review and approval by DHS. In November 2005, the Coast Guard increased the total ownership cost baseline against which the contractor will be evaluated to $304 billion. Deepwater officials stated that the adjustment was the result of incorporating the new homeland security mission requirements and revising dollar estimates to a current year basis. Although the Coast Guard is required to provide information to DHS on causal factors and propose corrective action for a baseline breach of 8 percent or more, the 8 percent threshold has not been breached because the threshold is measured against total program costs and not on an asset basis. For example, the decision to stop the conversion of the 49 110-foot patrol boats after 8 hulls did not exceed the threshold; nor did the damages and schedule delay to the NSC attributed to Hurricane Katrina. During our ongoing work, Coast Guard officials acknowledged that only a catastrophic event would ever trigger a threshold breach. According to a Coast Guard official, DHS approval is pending on shifting the baseline against which the system integrator is measured to an asset basis. Further, our 2004 report also had recommendations related to cost control through the use of competition. We reported that, although competition among subcontractors was a key mechanism for controlling costs, the Coast Guard had neither measured the extent of competition among the suppliers of Deepwater assets nor held the system integrator accountable for taking steps to achieve competition. 
As the two first-tier subcontractors to the system integrator, Lockheed Martin and Northrop Grumman have sole responsibility for determining whether to provide the Deepwater assets themselves or hold competitions—decisions commonly referred to as “make or buy.” We noted that the Coast Guard’s hands-off approach to make-or-buy decisions and its failure to assess the extent of competition raised questions about whether the government would be able to control Deepwater program costs. The Coast Guard has taken steps to establish a reporting requirement for the system integrator to provide information on competition on a semiannual basis. The system integrator is to provide detailed plans, policies, and procedures necessary to ensure proper monitoring, reporting, and control of its subcontractors. Further, reports are to include total procurement activity, the value of competitive procurements, and the subcontractors’ names and addresses. The system integrator provided the first competition report in October 2006. However, because the report did not include the level of detail required by Coast Guard guidelines, a Coast Guard official deemed that the extent of competition could not be validated by the information provided, and a request was made to the system integrator for more information. We will continue to assess the Coast Guard’s efforts to hold the system integrator accountable for ensuring an adequate degree of competition. - - - - - Mr. Chairman, this concludes my testimony. I would be happy to respond to any questions Members of the Committee may have. For further information about this testimony, please contact: John Hutton, Acting Director, Acquisition and Sourcing Management, (202) 512-4841, huttonj@gao.gov Stephen L. Caldwell, Acting Director, Homeland Security & Justice, (202) 512-9610, caldwells@gao.gov In addition to the contacts named above, Penny Berrier Augustine, Amy Bernstein, Christopher Conrad, Adam Couvillion, Kathryn Edelman, Melissa Jaynes, Crystal M. Jones, Michele Mackin, Jessica Nierenberg, Raffaele Roffo, Karen Sloan, and Jonathan R. Tumin made key contributions to this report. Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts, GAO-06-764 (Washington, D.C.: June 23, 2006). Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring is Warranted, GAO-06-546 (Washington, D.C.: Apr. 28, 2006). Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain, GAO-05-757 (Washington, D.C.: July 22, 2005). Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges, GAO-05-651T (Washington, D.C.: June 21, 2005). Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges, GAO-05-307T (Washington, D.C.: Apr. 20, 2005). Coast Guard: Deepwater Program Acquisition Schedule Update Needed, GAO-04-695 (Washington, D.C.: June 14, 2004). Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight, GAO-04-380 (Washington, D.C.: Mar. 9, 2004). Coast Guard: Actions Needed to Mitigate Deepwater Project Risks, GAO-01-659T (Washington, D.C.: May 3, 2001). Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain, GAO-01-564 (Washington, D.C.: May 2, 2001). This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Coast Guard's Deepwater program was designed to upgrade or replace its aging legacy aircraft and vessels with assets focusing on the Coast Guard's traditional at-sea roles. After the September 11, 2001 terrorist attacks, the Coast Guard took on additional security missions, resulting in revisions to the Deepwater plan. GAO's prior work raised concerns about Coast Guard's efforts to upgrade or acquire assets on schedule, and manage and effectively monitor the system integrator. This testimony provides GAO's preliminary observations on (1) events and issues surrounding the Coast Guard's bridging strategy to convert the legacy 110-foot patrol boats to 123-foot patrol boats; (2) the status of the Coast Guard's efforts to acquire new or upgraded Deepwater assets; and (3) the Coast Guard's ability to effectively manage the Deepwater program, hold contractors accountable, and control costs through competition. GAO's preliminary observations are based on audit work performed from August 2006 to February 2007. Numerous events since January 2001 led up to the failure of the Coast Guard's bridging strategy to convert its legacy 110-foot patrol boats into 123-foot patrol boats. These converted boats were removed from service on November 30, 2006 because of operational and safety concerns. According to the Coast Guard Commandant, actions are being taken to mitigate the impact of the removal of these patrol boats on mission activities. For example, patrol hours of some 110-foot patrol boats have been increased through the addition of crews from the 123-foot patrol boats, and other Coast Guard vessels have been deployed to assist in carrying out missions. The delivery record for the 10 classes of upgraded or new Deepwater aircraft and vessels is mixed. Specifically, 7 of the 10 asset classes are on or ahead of schedule. Among these, 5 first-in-class assets have been delivered on or ahead of schedule; 2 others remain on time but their planned delivery dates are in 2009 or beyond; therefore, delays could still potentially occur. Three Deepwater asset classes are currently behind schedule due to various problems related to designs, technology, or funding. For example, the Fast Response Cutter (a new vessel), which had been scheduled for first-in-class delivery in 2007, has been delayed by at least 2 years in part because work on its design was suspended until technical problems can be addressed. From the program's outset, GAO has raised concerns about the risks involved with the Coast Guard's acquisition strategy. In 2004, GAO reported that program management, contractor accountability, and cost control were all challenges, and made recommendations in these areas. Insufficient staffing, ineffective performance measures, and the Coast Guard's lack of knowledge about the extent to which the contractor was using competition have contributed to program risk. The Coast Guard has taken some actions to address these issues. GAO plans to continue to assess the Coast Guard's Deepwater program, including its efforts to address GAO recommendations, and will report the findings later this year.
The Food Stamp Program helps low-income individuals and families obtain a more nutritious diet by supplementing their income with food stamp benefits. The average monthly food stamp benefit was about $70 per person during fiscal year 1997. The program is a federal-state partnership in which the federal government pays the cost of the food stamp benefits and 50 percent of the states’ administrative costs. The U.S. Department of Agriculture’s Food and Nutrition Service (FNS) administers the program at the federal level. The states’ responsibilities include certifying eligible households and calculating and issuing benefits to those who qualify. The Food Stamp Employment and Training Program, which existed prior to the Welfare Reform Act, was established to ensure that all able-bodied recipients registered for employment services as a condition of food stamp eligibility. The program’s role is to provide food stamp recipients with opportunities that will lead to paid employment and decrease dependency on assistance programs. In fiscal year 1997, the states were granted $79 million in federal employment and training funding and spent $73.9 million, or 94 percent of the grant. In the Balanced Budget Act of 1997, the Congress increased grant funding for the Food Stamp Employment and Training Program to a total of $212 million for fiscal year 1998 and specified that 80 percent of the total had to be spent to help able-bodied adults without dependents meet the work requirements. For fiscal year 1999, the Congress provided $115 million in employment and training funding. These funds remain available until expended. Employment programs that the states choose to offer may involve the public and private sectors. For example, Workfare, which qualifies as an employment program under the Welfare Reform Act, requires individuals to work in a public service capacity in exchange for public benefits such as food stamps. Some states also allow participants to meet the work requirements by volunteering at nonprofit organizations. However, under the Welfare Reform Act, job search and job readiness training are specifically excluded as qualifying activities for meeting the act’s work requirements. During April, May, and June 1998, a monthly average of about 514,200 able-bodied adults without dependents received food stamp benefits, according to information from the 42 states providing sufficient data for analysis. These adults represented about 3 percent of the monthly average of 17.5 million food stamp participants in the 42 states during that period. Of the 514,200 individuals, about 58 percent, or 296,400 of the able-bodied adults without dependents were required to meet the work requirements; 40 percent, or 208,200, were exempted from these requirements because they lived in geographic areas that had received waivers; and 2 percent, or 9,600, had been exempted by the states from the work requirements. (See app. I for state-by-state information.) The number of able-bodied adults without dependents receiving food stamp benefits has apparently declined in recent years, as has their share of participation in the program. For example, in 1995, a monthly average of 1.2 million able-bodied adults without dependents in 42 states participated in the Food Stamp Program, compared with the 514,200 individuals who participated in the period we reviewed. 
In addition, in 1995, 5 percent of food stamp participants were estimated to be able-bodied adults without dependents, compared with the 3 percent we identified through our survey of the states. FNS and state officials accounted for these differences by pointing out that (1) food stamp participation has decreased overall—from about 27 million per month nationwide in 1995 to about 19.5 million in April, May, and June 1998; (2) some able-bodied adults without dependents may have obtained employment and no longer needed food stamps; and (3) others who were terminated from the program may not have realized that they could regain eligibility for food stamp benefits through participation in state-sponsored employment and training programs or Workfare. Also, the states vary in the criteria they use for identifying able-bodied adults subject to the work requirements. During April, May, and June 1998, a monthly average of 23,600 able-bodied adults without dependents filled employment and training and/or Workfare positions in the 24 states that provided sufficient data for analysis. Fifteen of these states offered Workfare positions, 20 offered employment and training positions, and 11 offered both Workfare and employment and training positions. The 23,600 individuals accounted for about half of the 47,000 able-bodied adults without dependents who were offered state-sponsored employment and training assistance and/or Workfare positions. More specifically, able-bodied adults without dependents filled about 8,000 Workfare positions per month, or 34 percent of the 23,700 Workfare positions offered by the 15 states with Workfare positions, and about 15,600 employment and training positions per month, or 67 percent of the 23,300 employment and training positions offered by the 20 states. (See app. I for state-by-state information.) These 23,600 individuals accounted for about 17 percent of the 137,200 able-bodied adults without dependents who were subject to the work requirements in those states. Of the remaining 113,600, some may have been within the 3-month time frame for receiving food stamp benefits while not working, others may have met these requirements by finding jobs or Workfare positions on their own, and some may not have met the work requirements, thereby forfeiting their food stamp benefits. FNS and state officials said they could not yet explain the limited participation in employment and training and Workfare programs, but FNS officials and some states are trying to develop information on the reasons for low participation. In addition, some suggested that able-bodied adults without dependents participated to a limited extent in employment and training programs and Workfare because they (1) participate sporadically in the Food Stamp Program, (2) prefer not to work, or (3) believe that the relatively low value of food stamp benefits is not enough of an incentive to meet the work requirements. With only 3 months remaining in fiscal year 1998, the states were spending at a rate that would result in the use of significantly less grant funding for food stamp employment and training recipients than authorized. For the first three quarters of the fiscal year, through June 30, 1998, the states spent only 28.4 percent, or $60.2 million, of the $212 million in grants, according to FNS data.
The rate of spending varied widely by state, ranging from 75 percent, or about $230,000 of the $307,000 authorized for South Dakota, to less than 1 percent, or $109,000 of the $13.4 million authorized for Michigan. Twenty-five of the states spent less than 20 percent of their grant funds, 17 spent between 20 and 49 percent, and 9 spent 50 percent or more. Also, according to preliminary fourth-quarter financial data reported to FNS, 43 states spent about $72 million, or 41 percent of the grant funds available to them for fiscal year 1998. (See app. II.) To better understand why the states were spending less of their grant funds than authorized, we interviewed food stamp directors and employment and training officials in 10 geographically dispersed states. In general, according to these officials, grant spending has been significantly less than authorized because (1) some states had a limited number of able-bodied adults without dependents who were required to work, (2) some states needed time to refocus their programs on able-bodied adults without dependents, and (3) some states reported that it was difficult to serve clients in sparsely populated areas because of transportation problems or the lack of appropriate jobs. When asked whether spending would change in fiscal year 1999, state officials had differing expectations. Officials from four of the 10 states—Georgia, Iowa, Ohio, and West Virginia—said that they anticipate spending about the same or less, and Pennsylvania officials were unsure whether spending would change. In contrast, officials from five states—Illinois, Michigan, Rhode Island, Texas, and Washington—anticipate increases in spending, mostly because of the improvements they have made to their employment and training programs. In discussing the rate of grant spending, officials of five states—Georgia, Pennsylvania, Washington, Texas, and West Virginia—said that the requirement to spend 80 percent of funds on able-bodied adults without dependents had caused them to decrease employment and training services to other food stamp participants. For fiscal year 1998, a maximum of 20 percent of the available grant funds—$42 million—was available for employment and training activities for other food stamp recipients, while $79 million had been provided for employment and training activities for all food stamp recipients in fiscal year 1997. State officials explained that prior to fiscal year 1998, most employment and training funds had been spent for food stamp participants who were not able-bodied adults without dependents. With the shift in funds to able-bodied adults without dependents, less has remained for the other food stamp recipients, who typically had constituted the majority of the employment and training participants in the past. Nevertheless, some of those not served by Food Stamp Employment and Training Programs may be eligible to receive employment and training through other federal and state programs. We provided USDA’s Food and Nutrition Service with a copy of a draft of this report for review and comment. We met with Food and Nutrition Service officials, who provided comments from the Food and Nutrition Service’s Office of General Counsel and the Director, Program Analysis Division, Office of Food Stamp Programs. The Food and Nutrition Service generally agreed with the contents of the report and provided technical and clarifying comments that we incorporated into the report as appropriate.
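The percentages cited in the preceding paragraphs follow from simple ratios of the rounded figures reported by the states. The sketch below is illustrative only; it recomputes a few of those ratios from the monthly averages and grant amounts quoted above, and small differences from the published percentages reflect rounding. The variable names are ours, not FNS's.

```python
# Ratios behind the participation, fill-rate, and spending figures quoted above.
# All counts and dollar amounts are the rounded figures reported for April-June 1998.

# Participation (42 states providing sufficient data).
receiving_benefits = 514_200
required_to_work = 296_400
print(f"share required to meet the work requirements: {required_to_work / receiving_benefits:.0%}")

# Employment and training and Workfare positions (24 states providing sufficient data).
offered = {"Workfare": 23_700, "employment and training": 23_300}
filled = {"Workfare": 8_000, "employment and training": 15_600}
for program in offered:
    print(f"{program} positions filled: {filled[program] / offered[program]:.0%}")
total_filled = sum(filled.values())  # about 23,600 per month
print(f"filled as a share of all positions offered: {total_filled / sum(offered.values()):.0%}")
print(f"filled as a share of the 137,200 subject to the requirements: {total_filled / 137_200:.0%}")

# Grant spending through June 30, 1998.
grant_total = 212_000_000
print(f"share of the fiscal year 1998 grant spent nationally: {60_200_000 / grant_total:.1%}")
for state, (spent, authorized) in {"South Dakota": (230_000, 307_000),
                                   "Michigan": (109_000, 13_400_000)}.items():
    print(f"{state}: {spent / authorized:.1%} of authorized funds spent")
```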
To obtain information on the numbers of able-bodied adults without dependents who are receiving food stamp benefits, are required to meet work requirements, are exempted from the work requirements, and are participating in qualifying employment and training and/or Workfare programs, we surveyed the states and the District of Columbia. The survey data covered the months of April, May, and June 1998. We used the participation data for these months to estimate average monthly participation in the program. All states and the District of Columbia responded to our faxed questionnaire, and we contacted state officials as needed to verify their responses. Eighty-eight percent of the responses provided by 41 states and the District of Columbia were based on estimates, and the remainder were based on data in state records. According to the state officials who provided estimates, their information systems were in the process of being revised, and they plan to have actual data for fiscal year 1999. To obtain information on state spending of federal grants for employment and training programs, we obtained FNS’ grant funding data reported by the states and the District of Columbia for the first three quarters of fiscal year 1998, the latest data that were available as of November 1998. We subsequently obtained preliminary financial data for the fourth quarter of fiscal year 1998, which are subject to change after financial reconciliation. To supplement these data, we interviewed state food stamp directors or employment and training officials in 10 geographically dispersed states: Georgia, Illinois, Iowa, Michigan, Ohio, Pennsylvania, Rhode Island, Texas, Washington, and West Virginia. We performed our work in accordance with generally accepted government auditing standards from July through November 1998. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate Senate and House Committees; interested Members of Congress; the Secretary of Agriculture; the Administrator of FNS; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix III.
Pursuant to a congressional request, GAO provided information on: (1) the number of able-bodied adults without dependents who are receiving food stamp benefits, the number who are required to meet the work requirements, and the number who are exempted from the requirements; (2) the number of able-bodied adults without dependents participating in qualifying employment and training or Workfare programs; and (3) the amounts of federal grant funds that states spent through the first three quarters of fiscal year 1998 for employment and training or workfare programs for food stamp recipients. GAO noted that: (1) in the 42 states providing sufficient data for analysis, a monthly average of about 514,200 able-bodied adults without dependents received food stamp benefits during April, May, and June 1998; (2) about 58 percent of these individuals were required to meet the work requirements, another 40 percent were not required to work because they lived in areas that were considered to have high unemployment or an insufficient number of jobs, and 2 percent had been exempted by the states from the work requirements; (3) in the 24 states providing sufficient data for analysis, a monthly average of 23,600 able-bodied adults without dependents filled state-sponsored employment and training or workfare positions; (4) these participants represented about 17 percent of the able-bodied adults without dependents who were required to work in those states to receive food stamp benefits; (5) these individuals also accounted for nearly half of the able-bodied adults without dependents who were offered employment and training assistance or workfare positions by these states; (6) as of June 30, 1998, all the states had spent only about 28 percent, or $60.2 million, of the $212 million available for state employment and training programs for food stamp recipients; (7) according to preliminary fourth-quarter financial data, 43 states had spent about $72 million, or 41 percent of the grant funds available to them for fiscal year 1998; and (8) according to federal and state officials, the low percentage of spending for food stamp employment and training programs occurred primarily because: (a) fewer able-bodied adults without dependents were required to work than anticipated and fewer than anticipated accepted this assistance; and (b) some states needed more time to refocus their food stamp employment and training programs to target these individuals.
Antibiotics are drugs that are used to treat bacterial infections. Antibiotics work by killing or slowing the growth of bacteria; they are not effective against nonbacterial infections, such as those caused by viruses. Antibiotic resistance is the result of bacteria changing in ways that reduce or eliminate the effectiveness of antibiotics to cure infection. Antibiotic use forces bacteria to either adapt or die in a process known as “selective pressure.” Selective pressure means that when an antibiotic is used, some bacteria will be killed by the antibiotic while other bacteria will survive. Bacteria are able to survive, in part, because they have certain genetic material that allows them to avoid the effects of the antibiotic. The surviving bacteria will multiply and pass on to future generations their genetic material that is coded for resistance to antibiotics. Any use of antibiotics—appropriate and inappropriate—creates selective pressure among bacteria. (For more information on resistant bacteria, see app. II.) The inappropriate use of antibiotics, or the additional use of antibiotics that could have been avoided, can occur when healthcare providers prescribe antibiotics when they are not beneficial, such as to treat a viral infection, or when antibiotic treatments are not targeted to the specific bacteria causing the infection. Inappropriate antibiotic use also occurs when healthcare providers do not prescribe the correct antibiotic dose and duration of treatment. Further, inappropriate use occurs when patients do not complete a full course of prescribed antibiotics. Individual consumers, healthcare facilities, pharmacies, and pharmaceutical manufacturers dispose of unused antibiotics using various methods. For the purposes of this report, the disposal of antibiotics refers to the discarding of unused antibiotics by consumers, companies, and others. Common disposal methods for individual consumers include throwing unused antibiotics in the trash, flushing them down the toilet, and pouring them down the drain. According to EPA officials, healthcare facilities and pharmacies often return unused or expired drugs to contracted companies, known as reverse distributors, for manufacturer credit. The manufacturer then instructs the reverse distributor either to return the unused drugs to the manufacturer or, in most cases, to dispose of them. The unused drugs are then most likely incinerated as solid waste, subject to state and local environmental regulations. The federal guidelines on how consumers should properly dispose of their unused drugs, including antibiotics, recommend that consumers dispose of their unused drugs either by returning them through a drug take-back program, where available, or by mixing them with coffee grounds or kitty litter and throwing them in the household trash. Unused antibiotics intended for human use may enter the environment through various pathways such as sewage systems and landfills, depending upon the method of disposal and other factors. Unused antibiotics enter sewage systems after they are flushed down the toilet or poured down the drain. Unused antibiotics that enter the sewage system then flow to wastewater treatment plants where, if not removed during the treatment process, they are released into the environment, such as in rivers and streams, as wastewater effluent.
In addition, some areas may use onsite septic systems to treat wastewater, and in these systems wastewater is discharged below the ground’s surface. Unused antibiotics that are disposed of in the trash could enter the environment if landfills were to leak. Although modern landfills are designed with liners and systems that limit this process by rerouting leachate—the liquid generated in landfills—to wastewater treatment plants, the antibiotics contained in the leachate may ultimately enter the environment. This can occur if antibiotics are not removed during the wastewater treatment process. In general, wastewater treatment plants are not designed to remove low concentrations of drug contaminants, such as antibiotics. In addition, antibiotics that have been used by humans to treat infections can also enter the environment. Most used antibiotics enter the sewage systems after they are ingested and excreted by individuals because antibiotics are not fully absorbed by the human body. Like unused antibiotics that enter the sewage systems, used antibiotics flow from sewage systems to wastewater treatment plants and may be released into the environment as wastewater effluent or biosolids. Agricultural manure is another potential source of antibiotics entering the environment; some antibiotics used for agriculture are similar to those used by humans. Within HHS, the Centers for Disease Control and Prevention (CDC), FDA, and the National Institutes of Health (NIH) have responsibilities for protecting Americans from health risks, including risks associated with antibiotic-resistant infections. These agencies have a variety of responsibilities related to the surveillance, prevention, and research of infectious disease. CDC has a primary responsibility to protect the public health through the prevention of disease and health promotion. One of CDC's primary roles is to monitor health, and part of this role involves monitoring antibiotic-resistant infections and the use of antibiotics. CDC's statutory authority to conduct such surveillance derives from the Public Health Service Act. Tracking the emergence of antibiotic resistance, and limiting its spread, is also part of CDC's mission. Consistent with this mission, CDC implements prevention strategies, such as educational programs, that are designed to limit the development and spread of antibiotic resistance, and the agency monitors antibiotic prescriptions in humans to help reduce the spread of antibiotic resistance. Part of FDA's responsibility for protecting the public health involves assuring the safety and efficacy of human drugs. FDA reviews and approves labels for antibiotics and provides educational information to consumers and healthcare providers about the appropriate use of antibiotics, and the risk of the development of antibiotic resistance associated with their inappropriate use. FDA also licenses vaccines for use in humans to prevent bacterial infections—including certain antibiotic-resistant infections—as well as viral infections, and has the authority to review diagnostics, including tests to detect bacterial infections. As the nation's medical research agency, NIH is responsible for conducting and funding medical research to improve human health and save lives.
According to its research agenda on antibiotic resistance, NIH supports and conducts research on many aspects of antibiotic resistance, including studies of how bacteria develop resistance, the development of diagnostic tests for bacterial infections that are or are likely to become resistant to antibiotics, and clinical trials such as those to study the effective duration of antibiotic treatments. CDC, FDA, and NIH are also co-chairs of the Interagency Task Force on Antimicrobial Resistance (Task Force) and released A Public Health Action Plan to Combat Antimicrobial Resistance (Action Plan) in 2001. The Action Plan identified actions needed to address the emerging threat of antibiotic resistance and highlighted the need to improve federal agencies' ongoing monitoring of antibiotic use and of antibiotic-resistant infections. Specifically, the Action Plan stated that establishing a national surveillance plan for antibiotic-resistant infections should be a high priority, and that improved monitoring of such infections was needed to identify emerging trends and assess changing patterns of antibiotic resistance as well as to target and evaluate prevention and control efforts. The Action Plan also specifically stated that surveillance of antibiotic use in humans should be a high priority and was needed to better understand the relationship between antibiotic use and antibiotic resistance. For example, identifying a specific pattern of antibiotic use associated with increased antibiotic resistance could support a response from policymakers, such as to effect change in antibiotic use practices. Further, improved antibiotic use monitoring would help identify prevention activities and anticipate gaps in the availability of existing antibiotics effective in treating bacterial infections. A revised draft Action Plan was published for public comment on March 16, 2011. EPA's mission includes protecting Americans from significant environmental health risks. As part of its role, EPA sets national standards for the disposal of solid and hazardous waste and the quality of drinking water. EPA generally regulates the disposal of waste, including some unused or expired drugs, under the Resource Conservation and Recovery Act (RCRA). EPA also promulgates national requirements for drinking water quality of public water systems under the Safe Drinking Water Act (SDWA). EPA conducts research on topics related to human health and the environment, including research aimed at understanding drug disposal practices and the potential human and ecological health risks of drugs, such as antibiotics, found in the environment. Within DOI, USGS is responsible for providing scientific information to better understand the health of the environment, including the nation's water resources. USGS conducts large-scale studies to gather information that can provide a basis for evaluating the effectiveness of specific policies; these studies can also be used to support decision making at the local and national levels—for example, decisions related to protecting water quality. In 1998, USGS initiated the Emerging Contaminants Project to improve the scientific understanding of the release of emerging contaminants to the environment, including where these contaminants originate and whether they have adverse effects on the environment.
As part of the project, USGS has conducted national studies to measure the presence of unregulated contaminants, including antibiotics, in the environment, and conducts targeted local studies to assess the impact of specific pathways by which antibiotics can enter the environment. CDC has six surveillance systems that provide information to monitor antibiotic resistance that occurs in healthcare and community settings. According to CDC, public health surveillance is the ongoing and systematic collection, analysis, and interpretation of data for use in the planning, implementation, and evaluation of public health practice. The surveillance systems collect information about antibiotic resistance among certain bacteria that cause infections in humans; these infections are transmitted either in healthcare settings or in the community. For example, CDC's National Healthcare Safety Network (NHSN) monitors infections that occur in healthcare settings, including those that are resistant to antibiotics, such as MRSA, while CDC's Active Bacterial Core Surveillance (ABCs) system monitors bacterial infections such as meningitis and pneumonia that are spread in the community or in healthcare settings. Table 1 provides information about the purpose of each CDC surveillance system that monitors antibiotic resistance and summarizes the settings in which the monitored infections are spread. (See app. III for additional information about each of the six systems.) Federal agencies do not routinely quantify the amount of antibiotics that are produced in the United States for human use, but sales data, which can be used to estimate the quantity of antibiotic production, show that over 7 million pounds of antibiotics were sold in 2009 for human use in the United States. These data indicate that most of the antibiotics sold have common characteristics, such as belonging to five antibiotic classes. Federal agencies, including FDA and USITC, do not routinely quantify antibiotic production for human use. FDA does collect annual information from new drug application (NDA) and abbreviated new drug application (ANDA) holders on the quantity of drugs that they distribute, but the data are not readily accessible. For each approved drug, NDA and ANDA holders are required to report annually to FDA the total number of dosage units of each strength or potency of the drug that was distributed (e.g., 100,000 5-milligram tablets) for domestic and foreign use. This information must be submitted to FDA each year—within 60 days of the anniversary date of approval of the drug application—for as long as the NDA or ANDA is active. The data that NDA and ANDA holders submit to FDA on the quantity of distributed drugs are not readily accessible because, according to an FDA official, they are submitted as part of an annual report in the form of a table and the agency does not enter the data electronically. In addition, because the anniversary dates of approval vary by NDA and ANDA, the reporting periods are not comparable. For drugs with an active ingredient for which there are multiple NDA and ANDA applications, FDA officials stated that one would also need to aggregate the data across multiple applications in order to determine the total quantity of the particular active ingredient.
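The aggregation that FDA officials describe can be illustrated with a minimal sketch. All application numbers, strengths, and unit counts below are hypothetical placeholders, not FDA data; the sketch only shows how dosage-unit counts reported per strength and per application would be rolled up into a total quantity of a single active ingredient.

```python
# Hypothetical annual-report records for one active ingredient:
# (application, strength in milligrams, dosage units distributed).
# None of these values come from FDA; they are placeholders for illustration.
annual_report_records = [
    ("NDA 1",  250, 1_200_000),   # e.g., 1.2 million 250-milligram tablets
    ("NDA 1",  500,   800_000),
    ("ANDA 1", 250, 2_500_000),
    ("ANDA 2", 500, 1_100_000),
]

# Total quantity of the active ingredient, summed across all applications and
# strengths, converted from milligrams to kilograms.
total_mg = sum(strength_mg * units for _, strength_mg, units in annual_report_records)
print(f"total active ingredient across applications: {total_mg / 1_000_000:,.0f} kg")
```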
An FDA official told us that the agency rarely uses these data for analyses of drug utilization, drug safety, and drug shortages because other sources of data provide FDA information that is more detailed and timely about the quantities of certain drugs that are available in the market. For example, FDA uses drug sales data, which are available on a monthly basis, to evaluate and address drug safety and drug shortage problems. USITC no longer collects and quantifies antibiotic production, but did so until 1994. In 2009, approximately 7.4 million pounds of antibiotics were sold for human use—which can be used as an estimate of the quantity of antibiotics produced for human use in the United States—and most sold share common characteristics, such as antibiotic classes. Most of the 7.4 million pounds, or about 89 percent, of antibiotics that were sold in 2009 fell into five antibiotic classes: penicillins, cephems, folate pathway inhibitors, quinolones, and macrolides (see table 2). The class of penicillins was the largest group of antibiotics sold in 2009. About 3.3 million pounds of penicillins were sold, which represents 45.2 percent of all antibiotics sold in 2009. Penicillins, such as amoxicillin, are used to treat bacterial infections that include pneumonia and urinary tract infections. Most of the antibiotics that were sold for human use in 2009 were for oral administration and for use in outpatient settings. As shown in table 3, about 6.5 million pounds, or 87.4 percent, of all antibiotics sold for human use in 2009 were intended for oral administration, for example, in the form of pills. Oral forms of antibiotics and injectable forms, such as intravenous injections, together accounted for 99 percent of the total pounds sold. About 5.8 million pounds, or 78.6 percent, of all antibiotics sold for human use in 2009 were purchased by chain store pharmacies, independent pharmacies, food store pharmacies, and clinics (see table 4). This suggests that most of the antibiotics that were purchased in 2009 were intended for use in outpatient settings. Although CDC annually collects certain national data on antibiotic prescriptions to monitor the use of antibiotics, these data have limitations and do not allow for important analyses. CDC is taking steps to improve its monitoring of antibiotic use by collecting and purchasing additional data, but gaps in information will remain. CDC’s Get Smart program promotes the appropriate use of antibiotics and the agency has observed recent national declines in inappropriate antibiotic prescribing; however, it is unclear to what extent its program contributed to the recent declines. NIH and FDA activities have complemented CDC’s efforts to promote the appropriate use of antibiotics. CDC conducts two national health care surveys that gather data, annually, on antibiotic prescribing in outpatient settings—the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS). NAMCS is based on a sample of visits to office-based physicians and community health centers. NHAMCS is based on a sample of visits to emergency and outpatient departments and hospital-based ambulatory surgery locations. Both surveys obtain data from healthcare provider records on patient symptoms, provider diagnoses, and the names of specific drugs, including antibiotics, that were prescribed during the patient visits. 
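As a rough check on the 2009 sales figures presented above, the sketch below recomputes several of the reported shares from the rounded poundage totals; small differences from the published percentages (for example, 45.2 percent for penicillins) reflect the rounding of the figures quoted here.

```python
# Recompute selected 2009 shares of antibiotic sales from the rounded figures in the report.
total_pounds_sold = 7_400_000

reported_pounds = {
    "penicillins (largest class)": 3_300_000,
    "oral forms": 6_500_000,
    "chain, independent, and food store pharmacies and clinics": 5_800_000,
}

for label, pounds in reported_pounds.items():
    print(f"{label}: {pounds / total_pounds_sold:.1%} of {total_pounds_sold:,} pounds sold")
```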
CDC officials stated that, among other purposes, CDC uses NAMCS and NHAMCS to monitor antibiotic prescribing in outpatient settings for patient conditions that do not usually require antibiotics for treatment, such as upper respiratory infections, including the common cold. NAMCS and NHAMCS are limited because they do not capture information about the use of antibiotics in inpatient settings. In inpatient settings, such as hospitals, antibiotics are often used, multiple antibiotics may be used in the same patient, and use may be prolonged. Monitoring overall antibiotic use (i.e., in inpatient and outpatient settings) over time is important for understanding patterns in antibiotic resistance. Information about overall antibiotic use in humans is also needed to routinely assess the contribution that human antibiotic use makes to the overall problem of antibiotic resistance in humans, relative to other contributing factors. For example, monitoring what portion of antibiotic use is attributed to humans versus animals is important to understanding antibiotic resistance. CDC officials told us that more complete information about antibiotic use by humans and animals is needed to help interpret trends from surveillance data and to inform possible strategies to control the spread of antibiotic resistance, such as through changing antibiotic use practices. NAMCS and NHAMCS data are further limited because they do not allow the agency to assess geographic patterns in antibiotic prescribing practices in outpatient settings. CDC officials told us that the survey samples were designed to obtain national, not state-level, estimates. As a result, CDC cannot currently assess the potential effects of geographic variation at the state level in antibiotic prescribing rates on patterns of antibiotic resistance or identify, for instance, states or other geographic areas in the United States that have higher than average antibiotic prescribing for conditions that do not usually require antibiotics for treatment. Information about geographic variation in antibiotic prescribing would allow CDC to anticipate future patterns in antibiotic resistance, given that the use of antibiotics has a direct effect on antibiotic resistance. Such information, according to CDC officials, would also allow CDC to target prevention efforts, such as those aimed at reducing inappropriate antibiotic use. CDC is taking steps to improve its monitoring of antibiotic use, but gaps in information about the use of antibiotics will remain. To address the agency's lack of data on inpatient antibiotic use, CDC is planning to gather information on antibiotic use with a prevalence survey of U.S. acute care hospitals in 2011. The survey will be conducted during a single time period on a single day and will collect some patient information about the reasons for the antibiotic use, which include treating an active infection or using antibiotics to prevent infection associated with a medical or surgical procedure. According to CDC officials, these data will fill the gap by providing information about the prevalence of inpatient antibiotic use. CDC officials further stated that having data on the baseline amount of inpatient antibiotic use, and the reasons for that use, will allow the agency to target and evaluate its own prevention efforts.
However, the survey findings will not be representative of hospitals nationwide, because the survey sample is limited to selected hospitals located within five entire states and urban areas in five other states. Furthermore, CDC officials do not know if the survey will be repeated. Without periodic data collection and monitoring, CDC cannot assess trends in inpatient antibiotic use or evaluate the effects that changes in antibiotic use may have on antibiotic resistance. Additionally, CDC officials told us that, in 2011, the agency plans to reinstate a module of NHSN that will allow participating facilities to report their inpatient antibiotic use, which will provide CDC with some inpatient antibiotic use data, but these data will not be nationally representative. In 2009, CDC temporarily discontinued this module because, according to CDC officials, it was not sustainable due to the high burden on facilities to report such data. CDC has redesigned the module to reduce the reporting burden on facilities; for example, CDC officials told us that, instead of relying on manual entry, facilities will be able to electronically capture and automatically send their data to NHSN. While the module will allow facilities in NHSN to monitor their own antibiotic use, the data will not provide the agency with information about the prevalence of inpatient antibiotic use because NHSN is not based on a nationally representative sample of facilities. To improve CDC's monitoring of antibiotic use in outpatient settings, CDC officials told us that they are finalizing a contract with a private data vendor to obtain 5 years of national data on antibiotic prescribing in outpatient settings by antibiotic drug, county, and type of provider. According to CDC officials, these data will help the agency understand relationships between antibiotic use and antibiotic resistance in certain geographic areas. CDC officials further stated that these data would help guide the agency's prevention efforts. With preliminary data on outpatient prescriptions for the antibiotic subclass of fluoroquinolones, CDC has shown wide variation in prescribing across states. Further, CDC plans to increase the size of the NAMCS sample at least fourfold in 2012, which would allow CDC to produce antibiotic prescribing rates for some states that year. CDC's Get Smart: Know When Antibiotics Work (Get Smart) program promotes appropriate antibiotic use and is aimed specifically at healthcare providers, patients, and parents of young children. CDC launched its Get Smart program in 1995 with the overall goal of reducing the increasing rate of antibiotic resistance. The program is primarily focused on upper respiratory infections because, according to CDC, such infections account for over half of all antibiotics prescribed by office-based physicians. The Get Smart program works with partners, such as certain health insurance companies, to develop and distribute educational materials. With the goal of educating healthcare providers and the public, the Get Smart educational materials are aimed directly at these populations. For example, the Get Smart program supported the development of an online training program for healthcare providers to improve their knowledge and diagnosis of middle ear disease.
The Get Smart program developed and launched a national media campaign in 2003, in partnership with FDA, to provide a coordinated message on appropriate antibiotic use to the public, and this message has been disseminated through print, television, radio, and other media. For example, CDC developed a podcast for parents of young children, available on CDC's Web site, to communicate its message. In the podcast, a pharmacist counsels a frustrated mother about appropriate antibiotic use and symptomatic relief options for her son's cold. Some materials are aimed at healthcare providers with the goal of educating their patients; for example, the Get Smart program developed a prescription pad for symptoms of viral infections. Healthcare providers can use the communication tool to acknowledge patient discomfort and recommend strategies to their patients for the relief of symptoms associated with viral illnesses—without prescribing an antibiotic unnecessarily. The prescription sheet includes the Get Smart logo and provides information for patients about the appropriate use of antibiotics to treat bacterial infections. CDC has continued to update and expand its materials for the Get Smart program. For example, CDC officials stated that the agency has partnered with Wake Forest University to develop a curriculum for medical students on appropriate antibiotic prescribing and on the impact of antibiotic use, including inappropriate use, on antibiotic resistance, and the agency has developed a continuing education course for pharmacists. CDC officials told us that pharmacists are among the most important healthcare professionals in promoting appropriate antibiotic use, for example, by educating patients about the importance of taking antibiotics exactly as directed. In November 2010, CDC launched another Get Smart program, called Get Smart for Healthcare. This program focuses on improving antibiotic use in inpatient healthcare settings—including hospitals and nursing homes—through antimicrobial stewardship. CDC has observed declines in inappropriate antibiotic prescribing in outpatient settings since its Get Smart program began in 1995, but it is unclear to what extent this program contributed to these trends. For example, using NAMCS and NHAMCS data, CDC found about a 26 percent decline in the number of courses of antibiotics prescribed per 100 children younger than 5 years old for ear infections between 1996-1997 and 2006. Further, CDC reported about a 53 percent decrease in the antibiotic prescription rate for the common cold among all persons between 1996-1997 and 2006. A similar trend in antibiotic prescribing among children has also been observed with data from the National Committee for Quality Assurance (NCQA). NCQA monitors trends in antibiotic prescribing for the purpose of comparing the performance of healthcare plans. NCQA monitors the percentage of children 3 months to 18 years of age who were diagnosed with an upper respiratory infection and did not receive an antibiotic prescription within 3 days of the office visit, and this measure has shown improvement (i.e., percentage increases in appropriate treatment) between 2003 and 2008. The measures that CDC uses to evaluate the effectiveness of the Get Smart program do not necessarily reflect the effect of the program because they do not capture information about individuals who were exposed to the Get Smart program, compared to those who were not.
As a result, it is unclear whether the declines in inappropriate antibiotic prescribing were due to exposure to Get Smart messages and educational materials or to other factors, such as efforts to measure healthcare performance with antibiotic prescribing indicators (e.g., NCQA measures) or the recommended use of influenza vaccines among young children since 2004. CDC officials told us that they believe the NCQA measures have helped to improve appropriate antibiotic prescribing by improving physicians' and practitioners' knowledge of treatment guidelines. In addition, reducing the number of cases of influenza among children is likely to have contributed to declines in inappropriate antibiotic prescriptions because antibiotics are often prescribed for patients with influenza symptoms. The measures that CDC uses to evaluate the effectiveness of the Get Smart program also do not allow CDC to determine, for example, whether declines in inappropriate antibiotic prescribing are attributable to a decrease in demand for antibiotics by patients or to improved adherence to appropriate prescribing guidelines by healthcare providers. The measures are further limited because they do not allow CDC to determine whether the observed declines are consistent across the United States or are due to decreases in certain geographic areas. CDC officials told us that they rely on other indicators to demonstrate the effectiveness of the Get Smart program, such as interest in CDC's Get Smart Web site and media materials. According to these officials, studies examining the impact of educational materials, including Get Smart materials, further demonstrate the effectiveness of the Get Smart program. For example, CDC officials cited a study in Massachusetts where educational materials, including Get Smart materials, were distributed to physicians and their patients in several communities. Findings indicate that in communities where educational and promotional materials about appropriate antibiotic use—including Get Smart materials—were distributed, antibiotic prescribing rates for children declined. Declines were also observed in communities where these educational and promotional materials were not distributed. These findings suggest that factors other than educational and promotional materials focused on the appropriate use of antibiotics may also have led to declines in inappropriate antibiotic prescribing. Without information about which approaches are most effective in reducing inappropriate antibiotic prescribing in outpatient and inpatient settings, CDC cannot target its resources to those preventive approaches. NIH and FDA have complemented CDC's efforts to promote the appropriate use of antibiotics in humans through various activities. NIH supports research specifically aimed at decreasing the inappropriate use of antibiotics as part of its research agenda to target antibiotic resistance. NIH-funded studies focus on establishing appropriate antibiotic treatment courses, using off-patent antibiotics to treat infections, and developing rapid diagnostic tests to help healthcare providers choose an appropriate antibiotic for treatment. For example, in 2009, NIH began funding a clinical trial to determine whether the standard 2-week antibiotic treatment course for children with urinary tract infections can remain effective if shortened, thereby decreasing the likelihood of antibiotic resistance and preserving the effectiveness of existing antibiotics.
In 2007, NIH awarded two 5-year contracts to study whether off-patent antibiotics such as clindamycin and a combination of the drugs trimethoprim and sulfamethoxazole can be used to treat certain skin infections instead of the more recently developed antibiotics, such as linezolid and vancomycin, in order to preserve the newer drugs' effectiveness. Further, since 2002, NIH has supported the development of a new test to rapidly diagnose TB. It currently takes up to 3 months to accurately diagnose TB and to determine its resistance to antibiotics, according to NIH officials. Findings from a recent clinical trial reported that, within 2 hours, the new test can diagnose a TB infection and determine if it is resistant to the antibiotic rifampin, which is commonly used to treat TB. NIH officials stated that the test is being recommended by the World Health Organization for the early diagnosis of TB, and NIH is currently supporting research to improve the test and expand its capabilities. Research on the development of vaccines for bacterial and viral infections is also part of NIH's research agenda to decrease the inappropriate use of antibiotics, according to an NIH official. An NIH official stated that the agency has funded the discovery and development of several staphylococcal vaccine candidates, for example, through investigator-initiated grants. In addition, an NIH official told us that NIH conducted preclinical animal studies that provided data for the development of a multivalent staphylococcal vaccine candidate, which allowed the candidate to advance to clinical testing. NIH also supports the development of vaccines for viral infections. According to an NIH official, decreasing the occurrence of influenza infections with influenza vaccines may decrease the inappropriate use of antibiotics. Many healthcare providers inappropriately treat viral respiratory infections with antibiotics, so preventing influenza reduces the opportunities for unnecessary antibiotic treatment. FDA activities also complement CDC's efforts to promote the appropriate use of antibiotics in humans. According to an FDA official, the agency collaborated with CDC on certain Get Smart activities, such as developing an appropriate antibiotic use message for the national media campaign, and amended its drug labeling regulations in 2003 to require that all oral or intravenous antibiotics for human use include additional information on their appropriate use. FDA's labeling requirement is intended to encourage physicians to prescribe antibiotics only when clinically necessary and to encourage them to counsel their patients about the proper use of such drugs and the importance of taking them exactly as directed. For example, the amended regulation requires that antibiotic labeling include the statement that “prescribing in the absence of a proven or strongly suspected bacterial infection is unlikely to benefit the patient and increases the risk of the development of drug-resistant bacteria.” CDC's monitoring of antibiotic-resistant infections has limitations in assessing the overall problem of antibiotic resistance. The agency's monitoring of antibiotic-resistant infections in healthcare facilities has data gaps that limit CDC's ability to produce accurate national estimates of such infections. In comparison, for some of the infections monitored by CDC in community settings, CDC can provide accurate national estimates.
CDC is taking steps to improve its monitoring of antibiotic-resistant infections in healthcare settings, but these efforts will not improve CDC's ability to assess the overall problem of antibiotic resistance. A sample of healthcare facilities that is not representative—and incomplete information about the entire scope of healthcare-associated infections (HAIs) that are resistant to antibiotics—present data gaps that limit CDC's ability to produce accurate national estimates of antibiotic-resistant HAIs in healthcare settings. Some infections are acquired as a result of medical treatment in a healthcare setting, such as a hospital or outpatient unit, while others are transmitted in the community, such as respiratory infections that are spread in schools and the workplace. According to CDC officials, healthcare settings contribute to the development of antibiotic resistance because of their high volume of susceptible patients, large number of disease-causing bacteria, and high antibiotic usage. CDC uses NHSN to monitor HAIs, including antibiotic-resistant HAIs, at a national level, but the facilities that participate are not a nationally representative sample. Facility enrollment and participation in NHSN are either voluntary, required because of a state mandate, or obligated as a condition of participation in HHS' Centers for Medicare & Medicaid Services (CMS) Hospital Inpatient Quality Reporting Program. According to CDC officials, as of January 2011, 23 states and territories required, or had plans to require, healthcare facilities to use NHSN for their reporting mandate. As of January 1, 2011, all acute care hospitals participating in the CMS Hospital Inpatient Quality Reporting Program are obligated to report into NHSN central-line associated bloodstream infections for certain procedures from their intensive care units. Although the number of participating facilities has increased substantially, because healthcare facilities enroll voluntarily or by mandate, this group of facilities is not representative of facilities nationwide, as a random sample would be. Participating healthcare facilities in states with mandated participation are more likely to be overrepresented in the sample, while facilities in states without mandates are more likely to be underrepresented. The data that participating healthcare facilities supply to NHSN do not reflect the full scope of HAIs that occur within these facilities, further limiting CDC's ability to provide accurate national estimates about antibiotic-resistant HAIs. Participating facilities may submit data about different types of HAIs, and this includes information about whether the HAIs are resistant to antibiotics. For example, some facilities report data to NHSN on central-line associated bloodstream infections but not other infection types, such as catheter-associated urinary tract infections. Further, participating healthcare facilities may report HAI data to NHSN for only certain units within their facilities. For example, participating facilities may report data to NHSN on infections that occur in intensive care units but not those that occur in specialty care areas. CDC depends on the microbiology data provided by participating facilities to determine, among reported cases, the number and percentage of certain types of HAIs with resistance to certain antibiotics.
Without an accurate national estimate of antibiotic-resistant HAIs, CDC cannot assess the magnitude and types of such infections that occur in all patient populations (i.e., facilitywide) within healthcare settings. CDC's monitoring of antibiotic-resistant infections in community settings can provide accurate national estimates of antibiotic-resistant infections that are caused by 5 of the 12 bacteria that the agency monitors. These 5 are captured by two surveillance systems, the National Antimicrobial Resistance Monitoring System for Enteric Bacteria (NARMS: EB) and the National Tuberculosis Surveillance System (NTSS), which collect nationally representative data about certain antibiotic-resistant infections; these infections can occur in community settings. Both systems employ sampling strategies that can provide accurate national estimates by collecting representative case information from all 50 states. For NARMS: EB, health departments in all 50 states submit to NARMS: EB a representative sample of cases of four of the five bacteria it monitors—non-typhoidal Salmonella, typhoidal Salmonella, Shigella, and Escherichia coli O157—for antibiotic susceptibility testing. To ensure adequate sample size and a random sample for testing, the health departments systematically select and submit to NARMS: EB every 20th non-typhoidal Salmonella, Shigella, and Escherichia coli O157 case as well as every typhoidal Salmonella case received at their laboratories. NARMS: EB cannot produce an accurate national estimate for one of the five bacteria it monitors—Campylobacter—because, according to CDC officials, the system collects a sample of the bacteria in only 10 states. CDC uses NTSS to collect information about each newly reported case of tuberculosis infection in the United States, including information on drug susceptibility results for the majority of cases that test positive for tuberculosis. CDC's monitoring of other bacteria that cause antibiotic-resistant infections in community settings cannot provide estimates that are nationally representative because the estimates are derived from samples that do not accurately represent the entire United States. Through ABCs, CDC conducts antibiotic resistance surveillance of five infection-causing bacteria—group A and B Streptococcus, Neisseria meningitidis, Streptococcus pneumoniae, and MRSA. According to CDC officials, these bacteria cause bloodstream infections, sepsis, meningitis, and pneumonia. ABCs is a collaboration between CDC, state health departments, and universities in 10 states. CDC officials told us that for each identified case of infection within their surveillance populations, the ABCs sites conduct a chart review to collect a variety of information, such as underlying disease and risk factors, vaccination history, and demographic information. This information is entered into a case report form and submitted to CDC along with bacterial isolates for additional testing, including tests for antibiotic resistance. ABCs' monitoring of cases of resistant infections is limited to surveillance areas in 10 states, and the surveillance areas vary somewhat depending on the infection-causing bacterium that is monitored. For example, Neisseria meningitidis is monitored in 6 entire states and in primarily urban areas in 4 other states, while MRSA is monitored in 1 entire state and primarily urban areas in 8 other states. According to CDC's Web site, the population included in the ABCs surveillance areas is roughly representative of the U.S.
population on the basis of certain demographic characteristics (e.g., race and age) and urban residence. However, ABCs cannot provide estimates that are nationally representative for rural residence, and some experts have raised concerns because of the underrepresentation of rural areas. Further, since surveillance is critical to providing early warning of emerging resistance problems, limited geographic coverage among monitored infection-causing bacteria impedes CDC's ability to detect emerging problems. The Gonococcal Isolate Surveillance Project (GISP), which CDC uses to monitor antibiotic resistance in Neisseria gonorrhoeae, the bacterium that causes gonorrhea, cannot provide accurate national estimates of cases of antibiotic-resistant gonorrhea because it collects information only on selected patient populations. Each month, GISP collects case samples from the first 25 men diagnosed with urethral gonorrhea in each participating sexually transmitted disease clinic. The clinics are located in 24 states, and they send these samples to designated laboratories for antibiotic susceptibility testing. However, according to CDC officials, most cases of gonorrhea in the United States are not treated in sexually transmitted disease clinics but are more likely to be treated in a variety of healthcare settings, such as primary care physicians' offices. Further, since GISP collects information on cases of gonorrhea from male patients only, the data cannot represent the total U.S. population and therefore cannot provide an accurate national estimate of resistant gonorrhea cases. CDC is taking steps to improve its monitoring of antibiotic-resistant infections in healthcare facilities, but these steps will not improve CDC's ability to assess the overall problem of antibiotic resistance. With a prevalence survey, CDC is planning to collect additional data in 2011 about HAIs, which may provide more comprehensive information about certain types of HAIs that are resistant to antibiotics. According to CDC officials, the survey of U.S. acute care hospitals—which will also provide data on antibiotic use, as described previously—will allow the agency to more accurately assess the burden of HAIs and antibiotic resistance among those HAIs in healthcare settings. Unlike NHSN, the survey is designed to allow CDC to assess the magnitude and types of HAIs occurring in all patient populations within the sample of acute care hospitals. The survey will collect information about types of infection (e.g., urinary tract infection, bloodstream infection), bacteria causing HAIs, and test results regarding antibiotic resistance. The survey will not collect resistance information for all bacteria that cause HAIs. However, according to CDC officials, the survey will collect resistance information for some of the most common bacteria that cause HAIs, including Acinetobacter, Enterococcus faecalis, Enterococcus faecium, Escherichia coli, Klebsiella, Pseudomonas aeruginosa, and Staphylococcus aureus. While the survey may provide more comprehensive information about certain types of HAIs that are resistant to antibiotics because it is designed to cover all patient populations in the sampled hospitals, the survey will not be able to provide information about the prevalence of all antibiotic-resistant HAIs that occur in U.S. acute care hospitals. A further limitation is that the sample is not representative of U.S. acute care hospitals.
As described earlier, this is because the survey is based on a sample of acute care hospitals located within the EIP surveillance areas, according to CDC officials. CDC also plans to enhance its monitoring of HAIs by expanding the geographic coverage of its surveillance of Clostridium difficile infections, and CDC officials told us that the agency is piloting additional surveillance for gram-negative infections through the EIP network. According to CDC, the agency began monitoring Clostridium difficile infections through EIP in 2009 in 7 surveillance areas, to obtain more comprehensive and representative information about this infection, including for antibiotic resistance. CDC officials stated that the agency plans to expand its Clostridium difficile monitoring to 10 surveillance areas by summer 2011. In 2 of the 10 surveillance areas (i.e., Oregon and Minnesota), surveillance will occur in rural areas only. CDC officials stated that the data will allow the agency, among other things, to detect Clostridium difficile infections that occur prior to admission to a healthcare facility and to identify new populations at risk. CDC officials also told us that the agency is piloting surveillance for gram-negative infections that are resistant to multiple antibiotics, through the EIP network, as an exploratory effort and feasibility study on how to improve the agency’s monitoring of these infections in healthcare settings. In addition, CDC anticipates that the number of acute care hospitals participating in NHSN will expand in 2011 as a result of the CMS Hospital Inpatient Quality Reporting Program requirement to do so. CDC officials believe that the expanded participation will result in more representative data about certain HAIs and antibiotic-resistant infections. CMS has expanded its quality data measures to include two HAI measures that will be reported through NHSN. As stated previously, as of January 1, 2011, hospitals are required to report on central-line bloodstream infections associated with certain procedures from their intensive care units, and on January 1, 2012, hospitals will be required to report on surgical site infections. Hospitals will also be required to report on antibiotic resistance associated with these two types of infections, given NHSN’s reporting requirements for participation.
Under RCRA, EPA has established a system by which hazardous waste is regulated from the time it is produced until it is disposed. Under this system, EPA receives information from hazardous waste generators through the Biennial Reporting System. EPA officials told us that antibiotics in general do not fall under RCRA’s definition of hazardous waste; as a result, EPA does not generally receive information about the disposal of antibiotics. EPA officials further stated that the agency would receive limited information about antibiotics if they fell under RCRA’s definition of hazardous waste. However, in part because it is the responsibility of the person disposing of a waste to determine whether or not it is hazardous, agency officials could not identify any specific antibiotics that fall under EPA’s regulatory definition of hazardous waste and therefore concluded that it would be a rare occurrence for the agency to receive information on the disposal of antibiotics. Under SDWA, EPA is authorized to regulate contaminants in public drinking water systems. EPA generally requires public water systems to monitor certain contaminants for which there are national primary drinking water regulations—standards limiting the concentration of a contaminant or requiring certain treatment. EPA has not promulgated any drinking water regulation for an antibiotic. EPA is required to identify and publish a list every 5 years of unregulated contaminants that may require regulation, known as the Contaminant Candidate List (CCL). EPA generally uses this list to select contaminants for its periodic regulatory determinations, by which the agency decides whether to regulate a contaminant, but contaminants may remain on the CCL for many years before EPA makes such a decision. Erythromycin is the only antibiotic on the third CCL list (CCL 3)—the current CCL that was published in October 2009. According to EPA officials, the agency is in the process of evaluating CCL 3 contaminants, including erythromycin, and plans to determine whether or not regulation is required for at least five contaminants from the CCL 3 by 2013. EPA’s determination to promulgate a national primary drinking water regulation for a contaminant is made based on three criteria established under SDWA, including that the contaminant may have an adverse effect on human health. To provide information such as that needed to determine whether to regulate the contaminant, EPA has the authority to require a subset of public water systems to monitor a limited number of unregulated contaminants, which the agency has implemented through the Unregulated Contaminant Monitoring Rule (UCMR). On March 3, 2011, EPA proposed the list of contaminants (primarily from the CCL 3) to be monitored under the third UCMR (UCMR 3). Erythromycin was not included on the proposed UCMR 3 list of contaminants, because according to EPA officials, further development of an analytical method that can be used for national monitoring of erythromycin is needed. EPA officials stated that the agency is in the initial stages of development of an analytical method for a number of pharmaceuticals, including erythromycin, and will evaluate the readiness of this analytical method for future UCMR efforts. EPA officials further stated that the agency will continue to evaluate unregulated contaminants, such as erythromycin, for future CCLs and will utilize any new data that become available. 
EPA and USGS have conducted several studies to measure the presence of antibiotics in the environment, which results partly from their disposal. According to EPA and USGS officials, there is no specific statutory mandate requiring the agencies to collect information about the presence of antibiotics in the environment. However, from 1999 through 2007, the agencies conducted five national studies measuring the presence and concentration of certain antibiotics in streams, groundwater, untreated drinking water, sewage sludge, and wastewater effluent as part of their efforts to study emerging contaminants. (See table 5.) These studies were generally designed to determine whether certain contaminants, including antibiotics, were entering the environment, and as a result, some study sites were selected based on being susceptible to contamination. For example, the study examining the presence of antibiotics, and other contaminants, in streams in 30 states was designed to determine whether these contaminants were entering the environment. Therefore, USGS purposely selected study sites susceptible to contamination by humans, industry, and agricultural wastewater. In all five studies, antibiotics were found to be present. For example, erythromycin was detected in multiple samples tested in four studies, and ciprofloxacin was detected in three studies. According to EPA and USGS officials, the antibiotic concentrations detected in streams, groundwater, and untreated drinking water are low relative to the maximum recommended therapeutic doses approved by FDA for most antibiotics. In contrast, antibiotics were found in relatively higher concentrations in sewage sludge. For example, the maximum concentration level of ciprofloxacin that was detected in streams or untreated drinking water sources was 0.03 micrograms per liter of water. In comparison, ciprofloxacin was detected in sewage sludge sampled from large publicly owned treatment plants at concentrations ranging from 74.5 to 47,000 micrograms per kilogram of sewage sludge. The maximum recommended therapeutic dose for ciprofloxacin is about 13,000 micrograms per kilogram of weight. According to USGS officials, waste from humans and domestic animals that receive antibiotics (i.e., therapeutic or subtherapeutic doses) is likely to contain antibiotics because a substantial portion of such antibiotic treatments is not fully absorbed by the body. EPA and USGS also have two ongoing studies that measure the presence of antibiotics in wastewater and drinking water. First, EPA is assessing the concentration of pharmaceuticals and other contaminants in municipal wastewater because past studies have suggested that municipal wastewater is a likely source of human pharmaceuticals entering the environment. According to EPA officials, EPA is collecting samples from 50 of the largest municipal wastewater plants in the United States and testing their treated effluents for contaminants, including 12 antibiotics. The study’s findings are expected to be made available sometime in 2012 and may help EPA develop new standards for municipal wastewater treatment, according to EPA officials. Second, EPA and USGS are collaborating on a study to measure the presence of several antibiotics (e.g., erythromycin) and other contaminants in raw and finished drinking water to better determine human exposures to these contaminants through drinking water.
During 2011, researchers will take samples from between 20 and 25 drinking water treatment plants across the United States and according to EPA officials, the information will be used to inform EPA decision making about the focus of future monitoring efforts. EPA and USGS officials anticipate the study’s findings to be made available sometime in 2012. Scientific evidence gathered in our literature review shows that, at certain concentration levels, antibiotics present in the environment—in water and soil—can increase the population of resistant bacteria, due to selective pressure. Of the 15 studies we identified that examined this association, 5 examined water-related environments and 10 examined soil-related environments. Among these 15 studies, 11 provided evidence to support the association. Support for this association means that antibiotics present in these environments increased the population of resistant bacteria through selective pressure because bacteria containing resistance genes survived and multiplied. Results for the five studies examining water-related environments generally support an association between the presence of antibiotics and an increase in the population of resistant bacteria caused by selective pressure, although only one tested concentration levels of antibiotics as low as those that have been detected in national studies of U.S. streams, groundwater, and source drinking water. The results of this study were inconclusive as to whether low antibiotic concentration levels, such as levels measured at or below 1.7 micrograms per liter of water, led to an increase in the population of resistant bacteria. Among the four other studies that supported an association between the presence of antibiotics and an increase in the population of resistant bacteria, the lowest concentration level associated with an increase was 20 micrograms of oxytetracycline per liter of water—over 50 times higher than maximum antibiotic concentration levels detected in stream water across the United States. Another of these four studies found that chlortetracycline was associated with an increase in the population of resistant bacteria, but only at concentration levels over 1000 times higher than those that have been detected in streams across the United States. According to USGS officials, scientists generally agree that the population of resistant bacteria would increase in water if the concentration levels of antibiotics that are present were to reach the minimum level that is known to induce antibiotic resistance in a clinical setting. USGS officials further stated that higher concentrations of antibiotics have been found, for example, in waters near to pharmaceutical manufacturing facilities in countries outside of the United States. Results for the 10 studies examining antibiotic resistance in soil-related environments, such as soil and sediment, were more mixed, and we cannot draw comparisons between concentration levels tested in these studies and those that have been found in such environments across the United States. Seven of the 10 studies found evidence to support an association between the presence of antibiotics and an increase in the population of resistant bacteria due to selective pressure, and the association existed at all concentration levels studied. No association existed among the antibiotic concentration levels in the other 3 studies. 
Because national data about the presence and concentration levels of antibiotics in soil and sediment are not available, we cannot draw comparisons between concentration levels tested in these studies and those commonly found in such environments across the United States. As with water-related environments, USGS officials stated that scientists generally agree that the population of resistant bacteria would increase in soil if the concentration levels of antibiotics that are present were to reach the minimum level that is known to induce antibiotic resistance in clinical settings. USGS officials further stated that antibiotic concentration levels in soils where human and animal waste have been applied as fertilizer are likely to be directly related to the antibiotic concentration levels in these sources. Antibiotics have been widely prescribed to treat bacterial infections in humans, and their use contributes to the development of antibiotic resistance, which is an increasing public health problem in the United States and worldwide. Monitoring the use of antibiotics in humans and preventing their inappropriate use, such as prescribing an antibiotic to treat a viral infection, are critically important because the use of antibiotics for any reason contributes to the development and spread of antibiotic resistance. Establishing patterns of antibiotic use is necessary for understanding current—and predicting future—patterns of antibiotic resistance. Monitoring overall antibiotic use in humans, including in inpatient and outpatient healthcare settings, is also needed to evaluate the contribution of such use—relative to other causes, such as animal use—to the overall problem of antibiotic resistance. Such information could help policymakers set priorities for actions to control the spread of antibiotic resistance. CDC is collecting data on antibiotic use and the occurrence of resistance, but the agency’s data sources have limited ability to provide accurate national estimates and do not allow it to assess associations between use and resistance. CDC does not monitor the use of antibiotics in inpatient settings—where antibiotic use is often intensive and prolonged and, thus, the risk of antibiotic resistance is greater—although the agency believes such information would help it target and evaluate its own prevention efforts to reduce the occurrence of resistance. Although the agency collects annual data in the United States about the use of antibiotics in outpatient settings, the data do not allow CDC to assess geographic patterns of use in those settings. Similarly, CDC’s monitoring of antibiotic-resistant infections does not allow the agency to assess the overall problem of antibiotic resistance because of gaps in the data it collects. Without more comprehensive information about the occurrence of cases of antibiotic-resistant infections and the use of antibiotics, the agency’s ability to understand the overall scope of the public health problem, detect emerging trends, and plan and implement prevention activities is impeded. Further, the lack of comprehensive information about antibiotic-resistant infections and antibiotic use, and the most effective ways to reduce inappropriate prescribing, impedes CDC’s ability to strategically target its resources directed at reducing the occurrence of antibiotic-resistant infections.
CDC is attempting to address the gaps in its data on antibiotic use in humans and on antibiotic-resistant infections by obtaining additional data, but it is not clear whether the steps it is taking will result in more comprehensive information from which the agency could assess the public health impact of antibiotic resistance. Further, it is not clear whether these steps will provide CDC with the information it needs to identify what actions are needed to reduce the occurrence of antibiotic-resistant infections. To better prevent and control the spread of antibiotic resistance, we recommend that the Director of CDC take the following two actions: (1) develop and implement a strategy to improve CDC’s monitoring of antibiotic use in humans, for example, by identifying available sources of antibiotic use information; and (2) develop and implement a strategy to improve CDC’s monitoring of antibiotic-resistant infections in inpatient healthcare facilities to more accurately estimate the national occurrence of such infections. We provided a draft of this report for review to HHS, EPA, and DOI. HHS provided written comments, which are reproduced in appendix V. HHS, EPA, and DOI provided technical comments, which we incorporated as appropriate. In its written comments, HHS generally agreed with the actions we recommend it take to improve its monitoring of antibiotic use and resistance. HHS stated that steps are being taken to address existing gaps in CDC’s monitoring of antibiotic use and the occurrence of antibiotic-resistant infections, and HHS noted that such monitoring is critically important in preventing the development and spread of antibiotic resistance. HHS highlighted examples of the steps CDC is taking, or plans to undertake, to address gaps in CDC’s monitoring of antibiotic use and antibiotic-resistant infections, such as a planned survey of acute care hospitals in the United States. HHS noted that other planned activities to improve the monitoring of antibiotic use and antibiotic-resistant infections are described in the revised draft Action Plan, developed by the Interagency Task Force on Antimicrobial Resistance. HHS stated that CDC believes that the successful, timely accomplishment of its planned and ongoing activities to improve monitoring will result in information that is sufficiently comprehensive for a full and complete assessment of the public health impact of antibiotic resistance, and that this assessment will provide federal agencies with appropriate information to identify necessary actions to reduce the occurrence of antibiotic-resistant infections. HHS stated that it would provide updates on its progress toward the accomplishment of its steps to improve monitoring in the 2010 annual progress report on the Action Plan, scheduled for public release this summer. HHS also commented that it has initiated the process of developing a strategic plan for preventing the emergence and spread of antibiotic-resistant infections, and a primary component of this strategic plan is the monitoring of antibiotic use and resistance. We support this effort and encourage HHS, as it develops its strategic plan, to continue to examine approaches for improving its monitoring of antibiotic use and antibiotic-resistant infections that will help provide the agency with information that is needed to more accurately estimate the national occurrence of antibiotic-resistant infections.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of the Department of Health and Human Services and the Department of the Interior, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To describe the scientific evidence on the development of antibiotic-resistant bacteria in the environment, we conducted a literature review. We identified literature made available since 2007 that reported scientific findings on antibiotic concentrations that induce bacteria located in the environment to become resistant, as well as the ability of bacteria to spread resistance. We conducted a key word search of 39 databases, such as Elsevier Biobase and MEDLINE, that included peer-reviewed journals and other periodicals, to capture articles published on or between January 1, 2007, and July 8, 2010. We searched these databases for articles with key words in their title or abstract related to both antibiotic resistance and the environment, such as combinations and variations of the words “resistance,” “antibiotic,” and “environment,” and descriptive words for different environmental settings, such as “water,” “sediment,” “soil,” and “sewage.” From these sources, we identified 241 articles, publications, and reports (which we call articles) published from January 1, 2007, through July 8, 2010. Of these 241 articles, we then excluded articles that (1) were not published in English, (2) were available only in an abstract form or in books or book chapters, (3) were not peer-reviewed, (4) contained only a review of past literature, or (5) were unrelated to antibiotic resistance found in the environment, such as articles that focused on the effects of antibiotic resistance found mainly in clinical settings. In total, we included 105 articles in our literature review. We supplemented the scientific findings analyzed in our literature review with contextual and background information gathered from articles that were identified as a result of our interviews with officials from the Environmental Protection Agency and the United States Geological Survey. Bacteria are single-celled organisms that live in water, in soil, and in the bodies of humans, animals, and plants. Bacteria compete with each other for resources, such as nutrients, oxygen, and space, and those that do not compete successfully will not survive. Most bacteria that are present in humans, such as those found on the skin and in the intestines, are harmless because of the protective effects of the human immune system, and a few bacteria are beneficial. However, some bacteria are capable of causing disease. For example, Escherichia coli O157—which can be found in the feces of animals, such as cattle, and can transfer to people through contaminated undercooked meat—produce a toxin that causes severe stomach and bowel disorders, and death in some cases. In addition, the same bacteria that may cause disease in one individual may not cause disease in another.
For example, Streptococcus pneumoniae is a bacterium that is often found in the noses and throats of healthy persons without causing disease, but it can also cause mild illness, such as sinus infections, as well as life-threatening infections such as meningitis. Furthermore, when the immune system is weakened, infection may be caused by certain bacteria that would not generally result in an infection in a healthy human. Like other living things, as bacteria grow and multiply, they also evolve and adapt to changes in their surroundings. Bacteria adapt to their surroundings through selective pressure, which is created by, among other things, the presence of antibiotics. Selective pressure means that when an antibiotic is introduced into a bacterial environment, some bacteria will be killed by the antibiotic while other bacteria will survive. Bacteria are able to survive because they have certain genetic material that is coded for resistance—allowing them to avoid the effects of the antibiotic. The surviving bacteria that are resistant to antibiotics will multiply and quickly become the dominant bacterial type. Bacteria that are susceptible to the effects of antibiotics may become resistant to such antibiotics after acquiring resistant genetic material from bacteria that are resistant through horizontal gene transfer. Horizontal gene transfer is the movement of genetic material between bacteria, and can occur within a species of bacteria and can sometimes occur between certain species of bacteria. Close proximity between bacteria, which allows certain genetic material to be shared, can facilitate gene transfer. The movement of antibiotic-resistant bacteria around the world is accelerated because of international travel and global trade. Individuals can contract bacterial strains—that is, distinct types of bacteria—that are resistant to antibiotics abroad during travel, whether as active infections or as unaffected carriers, and then spread such strains to others at home. The bacterial strains in different parts of the world may also contain different resistance genes than bacterial strains found domestically. For example, in 2010, the Centers for Disease Control and Prevention reported that three bacterial strains included a resistance gene identified for the first time in the United States. The emergence of the resistance gene was traced to patients who had received recent medical care in India. Further, international trade of food and livestock may accelerate the movement of antibiotic-resistant bacteria because food and livestock also carry resistant bacterial strains that can be contracted by humans through consumption. To determine whether bacteria are resistant, tests are performed that measure the susceptibility of pathogenic bacteria to particular antibiotics. The test results can predict the success or failure of an antibiotic treatment, and thus, guide healthcare providers’ choice of antibiotics to treat bacterial infections. The test results include a numeric value, which is then interpreted according to established ranges. For example, a value may be categorized as ‘resistant,’ meaning that the pathogenic bacterium is not inhibited by the concentration of the antibiotic that usually results in growth inhibition. 
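To illustrate how such a numeric test result is mapped to a category such as "resistant," the following is a minimal sketch in Python. The antibiotic names and breakpoint values shown here are illustrative assumptions, not actual clinical interpretive criteria, which are published by standards-setting organizations.

# Minimal sketch: mapping a numeric susceptibility test result to a category.
# The antibiotic names and breakpoint values below are illustrative
# placeholders, not actual clinical interpretive criteria.
ILLUSTRATIVE_BREAKPOINTS = {
    "antibiotic_a": {"susceptible": 1.0, "resistant": 4.0},
    "antibiotic_b": {"susceptible": 0.25, "resistant": 1.0},
}

def interpret(antibiotic: str, measured_value: float) -> str:
    """Categorize a numeric test result as susceptible, intermediate, or resistant."""
    breakpoints = ILLUSTRATIVE_BREAKPOINTS[antibiotic]
    if measured_value <= breakpoints["susceptible"]:
        return "susceptible"
    if measured_value >= breakpoints["resistant"]:
        return "resistant"
    return "intermediate"

if __name__ == "__main__":
    # A result in the resistant range indicates that the bacterium is not
    # inhibited at the concentration that usually results in growth inhibition.
    print(interpret("antibiotic_a", 8.0))  # resistant
    print(interpret("antibiotic_a", 0.5))  # susceptible

In practice, laboratories apply published interpretive criteria for each combination of bacterium and antibiotic; the point of the sketch is only the mapping from a numeric result to a reporting category.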
Examples of how surveillance data were used, by surveillance system:

ABCs (group A and group B Streptococcus; Neisseria meningitidis; Streptococcus pneumoniae; methicillin-resistant Staphylococcus aureus (MRSA)): ABCs data were used to show that rates of invasive pneumococcal infections, including antibiotic-resistant infections among children and adults, have declined since a pneumococcal conjugate vaccine was introduced for children in 2000. ABCs data have also shown a decline in the incidence of pneumococcal meningitis resistant to antibiotics. ABCs data on MRSA, collected between 2005 and 2008, were used to identify the genetic makeup of MRSA strains showing unusual patterns of resistance. This information provided the Centers for Disease Control and Prevention (CDC) with evidence that mechanisms of resistance in MRSA were being transferred from healthcare-associated to community-associated strains.

GISP: Based on GISP data, CDC announced in 2007 that fluoroquinolones were no longer recommended to treat gonorrhea because of antibiotic resistance and that the recommended treatment for gonorrhea was limited to only cephalosporin antibiotics. Neisseria gonorrhoeae isolates collected through GISP have been used to support research on the mechanisms used to resist the effects of antibiotics, according to a CDC official.

NARMS: EB: NARMS: EB data were used in 2005 to support the Food and Drug Administration’s (FDA) withdrawal of approval for the use of enrofloxacin in chickens and turkeys. Enrofloxacin, a fluoroquinolone, marketed under the trade name Baytril, had been approved for use in poultry production. In September 2005, FDA withdrew its approval because of concerns about the spread of fluoroquinolone-resistant Campylobacter from poultry to humans. NARMS: EB data from 1996-2006 were used to identify mechanisms of resistance to cephalosporins among specific types of Salmonella.

NHSN: Participating facilities have used NHSN data to assess their own healthcare-associated infection (HAI) rates, by comparing their rates with national rates. CDC also compiled 2006-2007 data on antibiotic resistance across participating facilities and reported, for example, that as many as 16 percent of all HAIs observed in NHSN were associated with nine multidrug-resistant bacteria, such as MRSA.

NNDSS: CDC has determined that NNDSS data are likely to be used to assess the impact of a vaccine that was approved in 2010 to prevent additional strains of Streptococcus pneumoniae.

NTSS: CDC receives information on each newly reported case of tuberculosis (TB) in the United States. In 2010, after expanding the NTSS data collection with the TB Genotyping Information Management System, CDC officials used genotypes identified with the system to assist an investigation of a TB outbreak among healthcare workers. As a result of the investigation, the probable source for the TB outbreak was identified.

EIP: Since 2009, CDC has monitored Clostridium difficile infections in healthcare and community settings through EIP (as part of its Healthcare Associated Infections Surveillance). CDC officials stated that these data complement the Clostridium difficile data that are captured through the National Healthcare Safety Network and will, among other things, inform vaccine development.

Topical antiseptics are products that are used to reduce the risk of infection by killing or inhibiting the growth of microorganisms, such as bacteria, on the skin.
Topical antiseptic products are diverse and include those targeted for healthcare settings, such as surgical hand scrubs and patient preoperative skin preparations; products targeted to consumers for general body cleansing, such as antibacterial soaps; and products specifically intended for use by food handlers. Topical antiseptics contain a variety of active ingredients; for example, triclosan and triclocarban are commonly used in antibacterial liquid and bar soaps, while alcohol is used in leave-on handwashes. Because antiseptics are intended for use in or on humans or animals, they are considered drugs and are approved and regulated as nonprescription drugs by the Food and Drug Administration (FDA) under the Federal Food, Drug, and Cosmetic Act. Public officials and others have raised concerns about the possibility that the use of, or exposure to, topical antiseptics causes antibiotic resistance in bacteria. This process is called cross-resistance. FDA has conducted a review of the scientific literature regarding the relationship between exposure to active ingredients in topical antiseptics—including triclosan or triclocarban—and cross-resistance. According to the available scientific evidence that FDA has reviewed, bacteria are able to develop resistance to both antiseptics and antibiotics in the laboratory setting, but the relationship outside of the laboratory setting is not clear. For example, a laboratory study has shown that when certain strains of the bacteria Escherichia coli (E. coli) are exposed to triclosan, the E. coli not only acquire a high level of resistance to triclosan, but also demonstrate cross-resistance to various antibiotics, such as erythromycin and tetracycline. However, a study that examined household use of certain antiseptic products did not show an association between their use and the development of antibiotic resistance. According to FDA, the possibility that bacteria can develop cross-resistance to antibiotics from exposure to antiseptics warrants further evaluation. FDA will seek additional data regarding the safety of topical antiseptic products, for example, on the effects of antiseptics on cross-resistance, when it issues a proposed rule to amend the current monograph for antiseptic drug products. FDA officials told us that they expect the proposed rule to be published for public comment sometime in 2011. The Environmental Protection Agency (EPA) and the United States Geological Survey (USGS) conducted five national studies between 1999 and 2007 that measured the presence of the antiseptic active ingredients triclosan and triclocarban in the environment. These studies tested for the presence and concentration of the antiseptic active ingredients, along with other contaminants, including antibiotics, in streams, groundwater, untreated drinking water, sewage sludge, and wastewater effluent. (See table 6.) Each of the studies measured the presence of triclosan, and the study involving sewage sludge also tested for triclocarban. Triclosan was found to be present in 94 percent of sewage sludge samples, 100 percent of wastewater effluent samples, and 57.6 percent of stream samples tested from sites across the United States. It was also detected in 14.9 percent of groundwater samples and 8.1 percent of untreated drinking water samples. Triclocarban was found to be present in all sewage sludge samples taken from wastewater treatment plants located across the United States.
In addition to the contact named above, Robert Copeland, Assistant Director; Elizabeth Beardsley; Pamela Dooley; Cathy Hamann; Toni Harrison; Elise Pressma; and Hemi Tewarson made key contributions to this report.
Infections that were once treatable have become more difficult to treat because of antibiotic resistance. Resistance occurs naturally but is accelerated by inappropriate antibiotic use in people, among other things. Questions have been raised about whether agencies such as the Department of Health and Human Services (HHS) have adequately assessed the effects of antibiotic use and disposal on resistance in humans. GAO was asked to (1) describe federal efforts to quantify the amount of antibiotics produced, (2) evaluate HHS's monitoring of antibiotic use and efforts to promote appropriate use, (3) examine HHS's monitoring of antibiotic-resistant infections, and (4) describe federal efforts to monitor antibiotic disposal and antibiotics in the environment, and describe research on antibiotics in the development of resistance in the environment. GAO reviewed documents and interviewed officials, conducted a literature review, and analyzed antibiotic sales data. Federal agencies do not routinely quantify the amount of antibiotics that are produced in the United States for human use. However, sales data can be used as an estimate of production, and these show that over 7 million pounds of antibiotics were sold for human use in 2009. Most of the antibiotics that were sold have common characteristics, such as belonging to the same five antibiotic classes. The class of penicillins was the largest group of antibiotics sold for human use in 2009, representing about 45 percent of antibiotics sold. HHS performs limited monitoring of antibiotic use in humans and has implemented efforts to promote their appropriate use, but gaps in data on use will remain despite efforts to improve monitoring. Although HHS's Centers for Disease Control and Prevention (CDC) monitors use in outpatient healthcare settings, there are gaps in data on inpatient antibiotic use and geographic patterns of use. CDC is taking steps to improve its monitoring, but gaps such as information about overall antibiotic use will remain. Because use contributes to resistance, more complete information could help policymakers determine what portion of antibiotic resistance is attributed to human antibiotic use, and set priorities for action to control the spread of resistance. CDC's Get Smart program promotes appropriate antibiotic use; CDC has observed declines in inappropriate prescribing, but it is unclear to what extent the declines were due to the program or to other factors. CDC's program has been complemented by efforts by the National Institutes of Health and the Food and Drug Administration, such as supporting studies to develop tests to quickly diagnose bacterial infections. Gaps in CDC's monitoring of antibiotic-resistant infections limit the agency's ability to assess the overall problem of antibiotic resistance. There are data gaps in monitoring of such infections that occur in healthcare facilities; CDC does not collect data on all types of resistant infections to make facilitywide estimates and the agency's information is not nationally representative. CDC can provide accurate national estimates for certain resistant infections that develop in the community, including tuberculosis. Although CDC is taking steps to improve its monitoring, these efforts will not allow CDC to accurately assess the overall problem of antibiotic resistance because they do not fill gaps in information. 
Without more comprehensive data, CDC's ability to assess the overall scope of the public health problem and plan and implement preventive activities will be impeded. Federal agencies do not monitor the disposal of most antibiotics intended for human use, but they have detected them, as well as antibiotics for animal use, in the environment, which results partly from their disposal. EPA and DOI's United States Geological Survey have examined the presence of certain antibiotics in environmental settings such as streams. Studies conducted by scientists have found that antibiotics present in the environment at certain concentrations can increase the population of resistant bacteria. To better control the spread of resistance, GAO recommends that CDC develop and implement strategies to improve its monitoring of (1) antibiotic use and (2) antibiotic-resistant infections. HHS generally agreed with our recommendations. HHS, the Environmental Protection Agency (EPA) and the Department of the Interior (DOI) provided technical comments, which we incorporated as appropriate.
DOD’s supply chain is a global network that provides materiel, services, and equipment to the joint force. In February 2015, we reported that DOD had been experiencing weaknesses in the management of its supply chain, particularly in the following areas: inventory management, materiel distribution, and asset visibility. Regarding asset visibility, DOD has had weaknesses in maintaining visibility of supplies, such as problems with inadequate radio-frequency identification information to track all cargo movements. Additionally, in February 2015, we reported on progress DOD had made in addressing weaknesses in its asset visibility, including developing its 2014 Strategy. DOD has focused on improving asset visibility since the 1990s, and its efforts have evolved over time, as shown in figure 1. The 2015 Strategy states that the department introduced automatic identification technology capabilities to improve its ability to track assets. Since we added asset visibility to the high risk list in 2005, we have reported that DOD has made a great deal of progress in improving asset visibility. The 2014 Strategy notes that for more than 25 years, the department has been using technologies, starting with linear bar codes and progressing to a variety of more advanced technologies, with the goal of improving asset visibility. Specifically, the Strategies state that, based on lessons learned from years of war in Iraq and Afghanistan, the department introduced technology capabilities to improve its ability to track assets as they progress from posts, camps, and stations. Additionally, the 2015 Strategy states that DOD has made significant progress toward improving asset visibility, but opportunities for greater DOD-wide integration still exist. DOD has issued two strategies to guide its efforts in improving asset visibility: 2014 Strategy: In January 2014, the department issued its Strategy for Improving DOD Asset Visibility. The 2014 Strategy creates a framework whereby the components work collaboratively to identify improvement opportunities and capability gaps and to leverage technology capabilities, such as radio frequency identification. These capabilities aid in providing timely, accurate, and actionable information about the location, quantity, and status of assets. The 2014 Strategy identified 22 initiatives developed by the components that were intended to improve asset visibility. OSD officials stated that an initiative is conducted in accordance with component-level policy and procedures and can either be for a single component or for potential improvement throughout DOD. According to OSD officials, DOD components develop asset visibility initiatives, and these initiatives may be identified by the Asset Visibility Working Group or by components for inclusion in the Strategies. 2015 Strategy: In October 2015, DOD issued its update to the 2014 Strategy. The 2015 Strategy outlined an additional 8 initiatives developed by the components to improve asset visibility. According to OSD officials, they plan to issue an update to the 2015 Strategy, but the release date for this update has not been determined. These officials stated that the update to the 2015 Strategy will outline about 10 new initiatives. As we reported in January 2015, DOD has taken steps to monitor the asset visibility initiatives. Specifically, DOD has established a structure for overseeing and coordinating efforts to improve asset visibility. 
This structure includes the Asset Visibility Working Group, which, according to the Strategies, is responsible for monitoring the execution of the initiatives. Additionally, the components are designated as the offices of primary responsibility to ensure the successful execution of their initiatives, including developing cost estimates and collecting performance data. Working Group members include representatives from OSD and the components—Joint Staff, the Defense Logistics Agency, U.S. Transportation Command, and each of the military services. The components submit quarterly status reports to the Working Group about their initiatives—including progress made on implementation milestones, return on investment, and resources and funding. Additionally, as documented in the minutes from its May 2016 Asset Visibility Working Group meeting, DOD uses an electronic repository that includes information about the initiatives. The 2015 Strategy describes a process in which the Asset Visibility Working Group, among other things, reviews and concurs that an initiative has met its performance objectives. The Asset Visibility Working Group files an after-action report, which is added to the status report, for completed initiatives; this after-action report is to include performance measures used to assess the success of the initiative, challenges associated with implementing the initiative, and any lessons learned from the initiative. For example, an after-action report for the U.S. Transportation Command (U.S. TRANSCOM) active radio frequency identification (RFID) migration initiative stated that U.S. TRANSCOM had successfully tracked the use of old and new active RFID tags on military assets and updated an active RFID infrastructure to accommodate the new tags. DOD components have identified performance measures for the 8 initiatives we reviewed, but the measures did not generally include the key attributes of successful performance measures (i.e., the measures were not generally clear, quantifiable, objective, and reliable). We also found that after-action reports for some initiatives did not always include information on the performance measures, preventing DOD from effectively evaluating the success of the initiatives in achieving the goals and objectives described in the Strategies. DOD components have identified at least one performance measure for each of the 8 initiatives we examined. These initiatives are described in table 1. (For more details on each of the 8 initiatives, see appendix II.) DOD’s Strategies direct that expected outcomes or key performance indicators (which we refer to as performance measures) be identified for assessing the implementation of each initiative. The 2015 Strategy notes that these performance measures enable groups, such as the Asset Visibility Working Group and the Supply Chain Executive Steering Committee—senior-level officials responsible for overseeing asset visibility improvement efforts—to monitor progress toward the implementation of an initiative and to monitor the extent to which the initiative has improved asset visibility in support of the Strategy’s goals and objectives. For example, one of the performance measures for a U.S. TRANSCOM initiative on the migration to a new active radio frequency identification (RFID) tag is to track the use of old and new active RFID tags on military assets.
Additionally, one of the performance measures for the Defense Logistics Agency’s (DLA) initiative on passive RFID technology for clothing and textiles is to track the time it takes to issue new uniforms to military personnel. The 2015 Strategy also notes that the performance measures are reviewed before an initiative is closed by the Asset Visibility Working Group. Our prior work on performance measurement has identified several important attributes that performance measures should include if they are to be effective in monitoring progress and determining how well programs are achieving their goals. (See table 2 for a list of selected key attributes.) Additionally, Standards for Internal Control in the Federal Government emphasizes using performance measures to assess performance over time. We have previously reported that by tracking and developing a performance baseline for all performance measures, agencies can better evaluate whether they are making progress and their goals are being achieved. Based on an analysis of the 8 initiatives we reviewed, we found that these performance measures did not generally include the key attributes of successful performance measures. Moreover, DOD’s Strategies lack sufficient direction on how components are to develop measures for these initiatives that would ensure that the performance measures developed include the key attributes for successful measures. This hinders DOD’s ability to ensure that effective measures are developed which will allow it to monitor the performance of the individual initiatives and whether the initiatives are likely to achieve the goals and objectives of the Strategies. We found that some of the performance measures for the 8 initiatives we reviewed included the key attributes of successful performance measures, such as linkage to goals and objectives in the Strategies. However, the measures for most of the initiatives did not have many of the key attributes of successful performance measures. As shown in table 3, for three initiatives there were no clearly identified performance measures; for five there were no measurable targets to allow for easier comparison with the initiatives’ actual performance; for five the measures were not objective; for five the measures were not reliable; for six there were no baseline and trend data associated with the measures; and for three the performance measures were not linked to the goals and objectives of the Strategies. A detailed discussion of our assessment of the performance measures for each key attribute follows:
1. Clarity: Measures for 5 of the 8 initiatives partially included the key attribute of “clarity.” For example, a performance measure for a Defense Logistics Agency initiative was to reduce the time required to issue uniforms by improving cycle times and reducing customer wait time. We identified “to reduce the time required to issue uniforms” as the name of the measure. However, the definition we identified for this measure, which is to improve cycle times and reduce customer wait time, did not include the methodology for computing the measure. Therefore, for the clarity attribute, we could not determine if the definition of this measure was consistent with the methodology used to calculate it. We reported in September 2015 that if the name and definition of the performance measure are not consistent with the methodology used to calculate it, data may be confusing and misleading to the component. For 3 of the 8 initiatives the performance measures were not clearly stated.
For example, a performance measure for an Army initiative was to expand current capabilities by accessing data through a defense casualty system and integrate reporting and tracking into one application. We found that there was an overall description of the initiative, but it did not include a name or definition for the measure or a methodology for calculating it.
2. Measurable Target: Measures for 3 of the 8 initiatives fully included the key attribute of measurable targets. For example, a performance measure for a Joint Staff initiative is to have 100 percent visibility of condition codes for non-munitions inventory. Measures for 5 of the 8 initiatives did not identify a measurable target. For example, a performance measure for a Marine Corps initiative is to increase non-nodal visibility and the delivery status of materiel in transit within an area of responsibility, but the component did not provide a quantifiable goal or other measure that permits expected performance to be compared with actual results so that actual progress can be assessed.
3. Objectivity: Measures for 3 of the 8 initiatives partially included the key attribute of objectivity. For example, the performance measures for a Navy initiative indicated what is to be observed (timeliness, accuracy, and completeness), but the measures did not specify what population and time frames were to be observed. Measures for 5 of the 8 initiatives did not include the key attribute of objectivity. For example, the performance measures for an Army initiative did not indicate what is to be observed, in which population, and in what time frame.
4. Reliability: Measures for 3 of the 8 initiatives partially included the key attribute of reliability. For example, some of the performance measures for a Navy initiative included data quality control processes to verify or validate information such as automated or manual reviews and the frequency of reviews. However, the Navy did not specify how often it would perform these reviews. Measures for 5 of 8 initiatives did not include the key attribute of reliability. For example, the performance measures for an Army initiative did not include a name for the measures, definitions for these measures, or methodologies for calculating them. Therefore, we could not determine whether the measures would produce the same results under similar conditions.
5. Baseline and Trend data: Measures for 2 of 8 initiatives partially included the key attribute of baseline and trend data. For example, a Joint Staff initiative included a baseline (e.g., improve the visibility of condition codes of non-munitions assets in the Global Combat Support System – Joint (GCSS-J) from 48 percent to 100 percent), but it did not include trend data. Measures for 6 of 8 initiatives did not include the key attribute of baseline and trend data. For example, the performance measures for a U.S. TRANSCOM initiative for implementing transportation tracking numbers did not include baseline and trend data to identify, monitor, and report changes in performance.
6. Linkage: Measures for 5 of 8 initiatives fully included the key attribute of linkage.
For example, the performance measures for the Joint Staff initiative, intended to maximize the visibility of the condition codes of non-munitions assets in GCSS-J to support joint logistics planning, are linked to the 2015 Strategy’s goals of improving visibility into customer materiel requirements and available resources; enhancing visibility of assets in transit, in storage, in process, and in theater; and enabling an integrated, accessible, authoritative data set. Measures for 3 of the 8 initiatives did not include the key attribute of linkage because they were not aligned with agency-wide goals and mission and were not clearly communicated throughout the organization. These initiatives were identified in the 2014 Strategy, and the descriptions of the initiatives did not specify which of the goals and objectives they were intended to support. We reported in January 2015 that the 2014 Strategy did not direct that the performance measures developed for the initiatives link to the goals or objectives in the 2014 Strategy, and we found that it was not clear whether the measures linked to the Strategy’s goals and objectives. Therefore, we recommended that DOD ensure that the linkage between the performance measures for the individual initiatives and the goals and objectives outlined in the 2014 Strategy be clear. DOD concurred with our recommendation and in its 2015 Strategy linked each initiative to the goals and objectives. The deficiencies that we identified in the performance measures can be linked to the fact that the Strategies have not included complete direction on the key attributes of successful performance measures. The 2014 Strategy provided direction on the types of expected outcomes and key performance indicators. For example, an expected outcome is to increase supply chain performance, and the key performance indicator is to improve customer wait time. However, when OSD updated the 2014 Strategy, it did not include in the 2015 Strategy an example of the types of expected outcomes and key performance indicators for components to use when developing performance measures. The lack of direction on successful performance measures may have resulted in measures that lacked key attributes, such as clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage, as we previously discussed. While OSD officials stated that they believed the performance measures for the selected initiatives were sufficient to report on the status of the initiatives, our review of these measures determined that they could not be used to effectively assess the performance of the initiatives to improve asset visibility. Without sufficient direction in subsequent updates to the Strategy on developing successful performance measures, DOD has limited assurance that the components are developing measures that can be used to determine how the department is progressing toward achieving its goals and objectives related to improving asset visibility. As described in the 2015 Strategy, the Asset Visibility Working Group and the component review the performance of the initiatives during implementation. As we reported in January 2015, the components report quarterly to the Asset Visibility Working Group on the status of their initiatives—including progress made on implementation milestones, return on investment, and resources and funding. We found that DOD components had included performance measures in their quarterly status reports for the 8 initiatives we reviewed.
However, DOD components have not always included performance measures to assess the success of their initiatives in after-action reports, which are added to the status report for completed initiatives. To close an initiative, the components responsible for the initiative request closure, and the Asset Visibility Working Group files an after-action report, which serves as a closure document and permanent record of an initiative’s accomplishments. According to the 2015 Strategy, the after-action report should include information on the objectives met, problems or gaps resolved, challenges associated with implementing the initiative, any lessons learned from the initiative, and measures of success obtained. The Asset Visibility Working Group approves the closure of initiatives when the components have completed or canceled the initiatives and updated the status report section called the after-action report. Once an initiative is closed, according to DOD officials, the Working Group no longer monitors the initiative, but the components may continue to monitor it. According to these DOD officials, DOD components may update information provided to the Asset Visibility Working Group or the Working Group may request additional information after the initiative is closed, especially when implementation affects multiple components. We found that the after-action reports did not always include all of the necessary information. According to our review of after-action reports, as of October 2016, the Asset Visibility Working Group had closed 5 of the 8 asset visibility initiatives that we examined. Our review of the after-action reports for the 5 closed initiatives found the following: Two reports included information on whether the performance measures—also referred to as measures of success—for the initiative had been achieved. Three reports did not follow the format identified in the 2015 Strategy, and we could not determine whether the intent and outcomes based on performance measures for the initiative had been achieved. We also reviewed after-action reports for the remaining 15 initiatives that were closed and found the following: Seven reports included information on whether the performance measures for the initiative had been achieved. Five reports did not include information on performance measures, because these measures were not a factor in measuring the success of the initiative. One report was not completed by the component. Two reports did not follow the format identified in the 2015 Strategy, and we could not determine whether the intent and outcomes based on performance measures for the initiative had been achieved. Based on our analysis, it appears that while the Asset Visibility Working Group closed 20 initiatives, it generally did not have information related to performance measures to assess the progress of these initiatives when evaluating and closing them. Specifically, the after-action reports for 11 of 20 initiatives did not include performance measures that showed whether the initiative had met its intended outcomes in support of the department’s Strategies. Officials from the Asset Visibility Working Group stated that they generally relied on the opinion of the component’s subject matter experts, who are familiar with each initiative’s day-to-day performance, to assess the progress of these initiatives.
While the input of the component's subject matter experts is an important part of the decision to close an initiative, unless information relating to performance measures from the after-action reports is incorporated into the information considered by the Asset Visibility Working Group, DOD does not have assurance that closed initiatives have been fully assessed or that they have achieved the goals and objectives of the Strategies. DOD has fully met three of our criteria for removal from the High Risk List by improving leadership commitment, capacity, and its corrective action plan, and it has partially met the criteria to monitor the implementation of the initiatives and demonstrate progress in improving asset visibility. Table 4 includes a description of the criteria and our assessment of DOD's progress in addressing each of them.

DOD Continues to Fully Meet Our High-Risk Criterion for Leadership Commitment

Our high-risk criterion for leadership commitment calls for leadership oversight and involvement. DOD has taken steps to address asset visibility challenges, and we found—as we had in our February 2015 high-risk report—that DOD has fully met this criterion. Senior leaders at the department have continued to demonstrate commitment to addressing the department's asset visibility challenges, as evidenced by the issuance of DOD's 2014 and 2015 Strategies. The Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration provides department-wide oversight for development, coordination, approval, and implementation of the Strategies and reviews the implementation of the initiatives. Senior leadership commitment is also evident in leaders' involvement in asset visibility improvement efforts, including groups such as the Supply Chain Executive Steering Committee—a group of senior-level officials responsible for overseeing asset visibility improvement efforts—and the Asset Visibility Working Group—a group of officials that includes representatives from the components and other government agencies, as needed. The Asset Visibility Working Group identifies opportunities for improvement and monitors the implementation of initiatives. Sustained leadership commitment will be critical moving forward, as the department continues to implement its Strategies to improve asset visibility and the associated asset visibility initiatives.

DOD Has Fully Met Our High-Risk Criterion for Capacity

Our high-risk criterion for capacity calls for agencies to demonstrate that they have the people and other resources needed to resolve risks in the high-risk area. In our October 2014 management letter to a senior OSD official and our January 2015 and February 2015 reports, we noted that resources and investments should be discussed in a comprehensive strategic plan, to include the costs to execute the plan and the sources and types of resources and investments—including skills, human capital, technology, information, and other resources—required to meet established goals and objectives. DOD has demonstrated that it has the capacity—personnel and resources—to improve asset visibility. For example, as we previously noted, the department has established the Asset Visibility Working Group, which is responsible for identifying opportunities for improvement and monitoring the implementation of initiatives. The Working Group includes representatives from OSD and the components—Joint Staff, the Defense Logistics Agency, U.S. Transportation Command, and each of the military services.
Furthermore, DOD's 2015 Strategy called for the components to consider items such as manpower, materiel, and sustainment costs when documenting cost estimates for the initiatives in the Strategy, as we recommended in our January 2015 and February 2015 reports. For example, DOD identified and broke down estimated costs of $10 million for implementing an initiative to track Air Force aircraft and other assets from fiscal years 2015 through 2018 by specifying that $1.2 million was for manpower, $7.4 million for sustainment, and $1.4 million for one-time costs associated with the consolidation of a server for the initiative. Additionally, DOD broke down estimated costs of $465,000 for implementing an initiative to track Marine Corps assets from fiscal years 2013 through 2015 by specifying $400,000 for manpower and $65,000 for materials. However, in December 2015 we found that three initiatives in the 2015 Strategy did not include cost estimates. To address this issue, in December 2016, a DOD official provided an abstract from the draft update to the 2015 Strategy that provides additional direction on how to explain and document cases where the funding for the initiatives is embedded within overall program funding. The draft update notes that there may be instances where asset visibility improvements are embedded within a larger program, making it impossible or cost-prohibitive to isolate the cost associated with specific asset visibility improvements. In these cases, the document outlining the initiatives will indicate that cost information is not available and why. However, if at some point during implementation some or all costs are identified, information about the initiative will be updated. According to OSD officials, DOD plans to issue the update to the 2015 Strategy, but a release date has not been determined.

DOD Has Fully Met Our High-Risk Criterion for a Corrective Action Plan

Our high-risk criterion for a corrective action plan calls for agencies to define the root causes of problems and related solutions and to include steps necessary to implement the solutions. The Fiscal Year 2014 National Defense Authorization Act (NDAA) required DOD to submit to Congress a comprehensive strategy and implementation plans for improving asset tracking and in-transit visibility. The Fiscal Year 2014 NDAA, among other things, called for DOD to include in its strategy and plans elements such as goals and objectives for implementing the strategy. The Fiscal Year 2014 NDAA also included a provision that we assess the extent to which DOD's strategy and accompanying implementation plans include the statutory elements. In January 2014, DOD issued its Strategy for Improving DOD Asset Visibility and accompanying implementation plans that outline initiatives intended to improve asset visibility. DOD updated its 2014 Strategy and plans in October 2015. The 2014 and 2015 Strategies define the root causes of problems associated with asset visibility and related solutions (i.e., the initiatives). In our October 2014 management letter to a senior OSD official and our January and February 2015 reports, we found that while the 2014 Strategy and accompanying plans served as a corrective action plan, there was not a clear link between the initiatives and the Strategy's goals and objectives. We recommended that DOD clearly specify the linkage between the goals and objectives in the Strategy and the initiatives intended to implement the Strategy.
DOD implemented our recommendation in its 2015 Strategy, which includes matrixes that link each of DOD's ongoing initiatives intended to implement the Strategy to the Strategy's overarching goals and objectives. DOD also added 8 initiatives to its 2015 Strategy and linked each of them to the Strategy's overarching goals and objectives.

DOD Has Taken Steps to Monitor the Status of Initiatives, but Its Performance Measures Could Not Always Be Used to Track Progress

Our high-risk criterion on monitoring calls for agencies to institute a program to monitor and independently validate the effectiveness and sustainability of corrective measures, for example, through performance measures. DOD has taken steps to monitor the status of asset visibility initiatives, but we found that it has only partially met our high-risk criterion for monitoring. In our February 2015 High-Risk update, we referred to a 2013 report in which we had found that DOD lacked a formal, central mechanism to monitor the status of improvements or fully track the resources allocated to them. We also reported that while DOD's draft 2014 Strategy included overarching goals and objectives that addressed the overall results desired from implementation of the Strategy, it only partially included performance measures, which are necessary to enable monitoring of progress. Since February 2015, DOD has taken some steps to improve its monitoring of its improvement efforts. As noted in the 2015 Strategy, DOD has described and implemented a process that tasks the Asset Visibility Working Group to review the performance of the components' initiatives during implementation on a quarterly basis, among other things. The Working Group uses status reports from the DOD components that include information on resources, funding, and progress made toward implementation milestones. DOD also identified performance measures for its asset visibility initiatives. However, as previously discussed, the measures for the 8 initiatives we reviewed were generally not clear, quantifiable (i.e., they lacked measurable targets and baseline and trend data), objective, or reliable. Measures that are clear, quantifiable, objective, and reliable can help managers better monitor progress, including determining how well they are achieving their goals and identifying areas for improvement, if needed. In December 2016, a DOD official provided an abstract from the draft update to the 2015 Strategy that noted that detailed metrics data will be collected and reviewed at the level appropriate for the initiative. High-level summary metrics information will be provided to the Working Group in updates to the plan outlining the initiatives. The extent to which this planned change will affect the development of clear, quantifiable, objective, and reliable performance measures remains to be determined. Additionally, as discussed previously, while the Asset Visibility Working Group has closed 20 initiatives, it generally did not have information related to performance measures to assess the progress of these initiatives. Specifically, after-action reports for 11 of 20 initiatives—which are added to the status reports for completed initiatives—did not include performance measures that showed whether the initiative had met its intended outcomes in support of the department's Strategies. Without improved performance measures and information to support that progress has been made, DOD may not be able to monitor asset visibility initiatives.
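To make the attribute discussion concrete, the following is a minimal, illustrative sketch (not drawn from DOD's Strategies; the initiative, goal, and data-source names are hypothetical) of how a performance measure could be recorded so that a reviewer can see at a glance whether it has a clear definition, a measurable target, baseline and trend data, an identified data source, and linkage to a Strategy goal. Objectivity still requires reviewer judgment and is not captured by a record like this.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PerformanceMeasure:
    """One performance measure for an asset visibility initiative (illustrative only)."""
    name: str
    definition: str                     # clarity: what is measured and how
    linked_goal: Optional[str] = None   # linkage: Strategy goal or objective supported
    baseline: Optional[float] = None    # baseline data: starting value
    target: Optional[float] = None      # measurable target: numeric goal
    trend: List[float] = field(default_factory=list)  # trend data: periodic observations
    data_source: Optional[str] = None   # reliability: where the data come from

    def missing_attributes(self) -> List[str]:
        """List the key attributes that cannot be verified from this record."""
        missing = []
        if not self.definition:
            missing.append("clarity")
        if self.target is None:
            missing.append("measurable target")
        if self.baseline is None or not self.trend:
            missing.append("baseline and trend data")
        if not self.data_source:
            missing.append("reliability")
        if not self.linked_goal:
            missing.append("linkage")
        return missing

# Hypothetical example: a measure with a target and linkage but no baseline or trend data yet.
wait_time = PerformanceMeasure(
    name="Customer wait time",
    definition="Average days from requisition to receipt for tracked assets",
    linked_goal="Improve visibility into customer materiel requirements",
    target=15.0,
    data_source="Component quarterly status reports",
)
print(wait_time.missing_attributes())  # ['baseline and trend data']
```

A structured record along these lines, maintained with the quarterly status reports, would let the Working Group apply the same attribute checks consistently across initiatives.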
DOD Has Demonstrated Some Progress but Cannot Demonstrate that Its Initiatives Have Resulted in Measurable Outcomes and Improvements for Asset Visibility

Our high-risk criterion for demonstrated progress calls for agencies to demonstrate progress in implementing corrective measures and resolving the high-risk area. DOD has made progress by developing and implementing its Strategies for improving asset visibility. In our October 2014 management letter to a senior OSD official and our January and February 2015 reports, we noted that in order to demonstrate progress in having implemented corrective measures, DOD should continue the implementation of the initiatives identified in the Strategy, refining them over time as appropriate. DOD reports that it has closed or will no longer monitor the status of 20 of the 27 initiatives and continues to monitor the remaining 7 initiatives. Additionally, in October 2016, DOD officials stated that they plan to add about 10 new initiatives in the update to the 2015 Strategy. For example, the U.S. Transportation Command's new initiative, Military Service Air Manifesting Capability, is expected to promote timely, accurate, and complete in-transit visibility and effective knowledge sharing to enhance understanding of the operational environment. OSD officials have not yet determined a date for the release of the update to the 2015 Strategy. As discussed previously, we found that DOD cannot use the performance measures associated with the initiatives to demonstrate progress, because the measures are generally not clear, quantifiable (i.e., they lack measurable targets and baseline and trend data), objective, or reliable. Additionally, we found that DOD has not taken steps to consistently incorporate information on an initiative's performance measures in closure reports, such as after-action reports, in order to demonstrate the extent to which progress has been made toward achieving the intended outcomes of the individual initiatives and the overall goals and objectives of the Strategies. Without clear, quantifiable, objective, and reliable performance measures and information to support that progress has been made, DOD may not be able to demonstrate that implementation of these initiatives has resulted in measurable outcomes and progress toward achieving the goals and objectives in the Strategies. Also, DOD will be limited in its ability to demonstrate sustained progress in implementing corrective actions and resolving the high-risk area.

DOD has taken some positive steps to address weaknesses in asset visibility. Long-standing management weaknesses related to DOD's asset visibility functions hinder the department's ability to provide spare parts, food, fuel, and other critical supplies in support of U.S. military forces. We previously reported on several actions that we believe DOD should take in order to mitigate or resolve long-standing weaknesses in asset visibility and meet the criteria for removing asset visibility from the High Risk List. We believe that DOD has taken the actions necessary to meet the capacity and corrective action plan criteria: it provided additional direction to the components on formulating cost estimates for the asset visibility initiatives, and it linked the 2015 Strategy's goals and objectives with the specific initiatives intended to implement the Strategy.
However, DOD's efforts to monitor initiatives show that the performance measures DOD components currently use to assess these initiatives lack some of the key attributes of successful performance measures that we have identified. To the extent that these measures lack the key attributes, they limit DOD's ability to effectively monitor the implementation of the initiatives and assess the effect of the initiatives on the overall goals and objectives of the Strategies. Developing clear, quantifiable, objective, and reliable performance measures can help DOD better assess department-wide progress against the Strategies' goals and clarify what additional steps need to be taken to enable decision makers to exercise effective oversight. An important step in determining what effect, if any, the asset visibility initiatives are having on the achievement of the Strategies' goals and objectives will be to develop sound performance measures and incorporate information about these measures into the after-action reports when evaluating and closing initiatives. Until DOD components demonstrate that implementation of the initiatives will result in measurable outcomes and progress toward achieving the goals and objectives of the Strategies, DOD may be limited in its ability to demonstrate progress in implementing corrective actions and resolving the high-risk area. Once these actions are taken, DOD will be better positioned to demonstrate the sustained progress needed to meet the criteria for removing asset visibility from our High Risk List.

We are making two recommendations to help improve DOD's asset visibility. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Logistics and Materiel Readiness, in collaboration with the Director, Defense Logistics Agency; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Commander of the United States Transportation Command; and the Chairman of the Joint Chiefs of Staff, to (1) use the key attributes of successful performance measures—including clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage—in refining the performance measures in subsequent updates to the Strategy to improve DOD's efforts to monitor asset visibility initiatives and (2) incorporate into after-action reports information relating to performance measures for the asset visibility initiatives when evaluating and closing these initiatives to ensure that implemented initiatives will achieve the goals and objectives in the Strategies.

In its written comments on a draft of this report, DOD partially concurred with our two recommendations. DOD's comments are summarized below and reprinted in appendix IV. DOD partially concurred with our first recommendation that it use the key attributes of successful performance measures—including clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage—in refining the performance measures in subsequent updates to the Strategy to improve DOD's efforts to monitor asset visibility initiatives. DOD stated that it recognizes the need for performance measures to ensure the success of an asset visibility improvement effort but noted that the level of complexity and granularity of the metrics we suggest may not be suitable for all initiatives.
DOD also stated that the purpose of the Strategy is to create a framework whereby the components can work collaboratively to coordinate and integrate department-wide efforts to improve asset visibility, not to provide complete direction on how to define, implement, and oversee these initiatives. Additionally, DOD stated that the next edition of the Strategy will encourage the adoption of our six key attributes for asset visibility initiatives to the extent appropriate, but will not mandate their use. As discussed in our report, use of the key attributes in measuring the performance of asset visibility initiatives would help DOD to better assess department-wide progress against the goals in its Strategy and clarify what additional steps need to be taken to enable decision makers to exercise effective oversight. Encouraging adoption of the key attributes, as DOD plans to do, is a positive step, but we continue to believe that DOD needs to use these key attributes to refine its performance measures to monitor the initiatives in the future. DOD partially concurred with our second recommendation that it incorporate into after-action reports information relating to performance measures for the asset visibility initiatives when evaluating and closing these initiatives to ensure that implemented initiatives will achieve the goals and objectives in the Strategies. DOD stated that it is important to capture and review performance data prior to a component closing an asset visibility initiative, but that the Strategy after-action report is not intended to be used to evaluate the success of an asset visibility initiative or to determine if an initiative has met its intended objectives. According to DOD, documentation and information to support the evaluation of initiatives is defined by and executed in accordance with component-level policy and procedures. DOD agreed to update its Strategy to clarify the purpose and use of the after-action reports and to ensure that the Strategy specifies roles and responsibilities for evaluating and closing initiatives. DOD’s response, however, did not state whether and how these updates to the Strategy would result in more consistent incorporation of information relating to performance measures when closing initiatives in the future. As we noted previously in this report, according to the 2015 Strategy, the after-action report for closed initiatives should include information on the objectives met, problems or gaps resolved, and measures of success obtained. We believe our recommendation is consistent with this guidance. Without incorporating this information, DOD does not have assurance that closed initiatives have been fully assessed and have resulted in achieving the goals and objectives of the Strategies. Therefore, we continue to believe that full implementation of our recommendation is needed. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps; the Director of Defense Logistics Agency; the Chairman of the Joint Chiefs of Staff; the Commander of the United States Transportation Command; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V.

To determine the extent to which DOD identified performance measures that allow it to monitor the progress of selected asset visibility initiatives identified in DOD's 2014 and 2015 Strategy for Improving DOD Asset Visibility (Strategies), we reviewed documents such as the 2014 Strategy and its subsequent update in October 2015 (2015 Strategy); minutes from the Asset Visibility Working Group meetings; and documents showing the status of implementation, including charts that track the development and closure of the asset visibility initiatives. Thirty initiatives have been included in the 2014 and 2015 Strategies, but 3 of these were halted for a variety of reasons. From the remaining 27 initiatives, we selected a non-generalizable sample of 8 initiatives. We selected at least one from each of the components to review and assess, including analyzing the performance measures associated with each initiative. In our selection of 8 initiatives to review, we also considered the stage of implementation of each initiative to ensure that our review encompassed initiatives at different stages, from some that were just beginning to some that had already been completed. Specifically, we made selections based on the status of the initiatives as of December 2015 to include the earliest completion dates by component. To cover this range, we selected for review 3 initiatives from the 2014 Strategy that had been closed, 2 ongoing initiatives that had been included in both Strategies, and 3 new initiatives that were included for the first time in the 2015 Strategy. The results from this sample cannot be generalized to the other 19 initiatives. We did not assess the initiatives to determine if they (1) met milestones, (2) lacked resources, or (3) had performance issues. Instead, we assessed the initiatives to determine what progress DOD had made toward meeting the criteria for removing an area from our High Risk List. We surveyed program managers and other cognizant officials (hereafter referred to as component officials) responsible for the respective asset visibility initiatives we selected. We included questions in our survey related to the development and closure of the initiatives and took several steps to ensure the validity and reliability of the survey instrument. We also reviewed the Strategies to identify performance measures necessary to monitor the progress of the 8 initiatives we had selected. Two analysts independently assessed whether (1) DOD had followed the guidance set forth in the Strategies and (2) the measures for the initiatives included selected key attributes of successful performance measures (for example, whether the measures were clear, quantifiable—i.e., had measurable targets and baseline and trend data—objective, and reliable); any initial disagreements in assessments were resolved through discussion. We assessed these measures against 6 of the 10 selected key attributes of successful performance measures—clarity, measurable target, objectivity, reliability, baseline and trend data, and linkage—that were identified in our prior work and that we determined to be relevant to the sample of initiatives we were examining.
The remaining 4 attributes—government-wide priorities, core program activities, limited overlap, and balance—are used to assess agency-wide performance and did not apply to our analysis, because we reviewed a subset of component-level initiatives rather than agency-wide efforts. If all of the performance measures for an initiative met the definition of the relevant key attribute, we rated the initiative as having "fully included" the attribute. On the other hand, if none of the measures met the definition of the relevant key attribute, we rated the initiative as having "not included" the attribute. If some, but not all, of the measures met the definition of the relevant key attribute, then we rated the initiative as having "partially included" the attribute. We also selected sites to observe demonstrations of initiatives that were intended to show how the initiatives had achieved progress in improving asset visibility. We selected these demonstrations based on the location of the initiative, the responsible component, and the scope of the initiative. Additionally, we reviewed the after-action reports for all of the initiatives that the Asset Visibility Working Group had closed as of October 31, 2016—20 of 27 initiatives, including 5 of the 8 initiatives we reviewed in detail. We performed a content analysis in which we reviewed each of these after-action reports to determine whether it was completed for the initiative, documented whether measures were obtained, and identified challenges and lessons learned. One analyst conducted this analysis, coding the information and entering it into a spreadsheet; a second analyst checked the first analyst's analysis for accuracy. Any initial disagreements in the coding were discussed and reconciled by the analysts. The analysts then tallied the responses to determine the extent to which the information was identified in the after-action reports. We also interviewed component officials and officials at the Office of the Deputy Assistant Secretary of Defense for Supply Chain Integration (hereafter referred to as OSD) to clarify survey responses and to discuss plans to develop the initiatives, including any efforts to monitor progress and demonstrate results.

To determine whether DOD had addressed the five criteria—leadership commitment, capacity, corrective action plan, monitoring, and demonstrated progress—that would have to be met for us to remove asset visibility from our High Risk List, we reviewed documents such as DOD's 2014 and 2015 Strategies and charts that track the implementation and closure of asset visibility initiatives. We included questions in our survey to collect additional information from officials on their efforts to address the high-risk criteria. For example, we asked how the component monitors the implementation of the initiative and whether there has been any demonstrated progress in addressing the opportunity, deficiency, or gap in asset visibility capability that the initiative was designed to address. One analyst evaluated DOD's actions to improve asset visibility against each of our five criteria for removing an area from the High Risk List. A different analyst checked the first analyst's analysis for accuracy. Any initial disagreements were discussed and reconciled by the analysts.
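The attribute ratings described above and the criteria assessments described below apply the same all, some, or none decision rule. The following is a minimal sketch of that rule, for illustration only; the function name, labels, and example counts are ours and are not drawn from the report's underlying data.

```python
def rate(items_meeting_definition: int, total_items: int,
         labels=("not included", "partially included", "fully included")) -> str:
    """Apply the all/some/none rule used for the attribute and criteria ratings.

    - all items meet the definition -> fully included (or fully met)
    - some, but not all, items do   -> partially included (or partially met)
    - no items meet the definition  -> not included (or not met)
    """
    if total_items <= 0:
        raise ValueError("total_items must be positive")
    if items_meeting_definition == total_items:
        return labels[2]
    if items_meeting_definition == 0:
        return labels[0]
    return labels[1]

# Hypothetical examples: 2 of an initiative's 3 measures have a measurable target,
# and all aspects of a criterion are addressed.
print(rate(2, 3))                                             # partially included
print(rate(5, 5, ("not met", "partially met", "fully met")))  # fully met
```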
We assessed DOD's effort to meet each of the high-risk criteria as "not met" (i.e., none of the aspects of the criterion were addressed), "partially met" (i.e., some, but not all, aspects of the criterion were addressed), or "fully met" (i.e., all parts of the criterion were fully addressed). We shared with DOD officials our preliminary assessment of asset visibility relative to each of the criteria. To help ensure that our evaluation of improvements made relative to the high-risk criteria was consistent with our prior evaluations of Supply Chain Management and other issue areas, we reviewed our prior High Risk reports to gain insight into what actions agencies had taken to address the issues identified in these past reports. Additionally, we interviewed component officials and OSD officials to clarify their survey responses and to discuss plans to continue to make progress in improving asset visibility. We met with officials from the following DOD components during our review: the Office of the Secretary of Defense, the Department of the Army, the United States Marine Corps, and the Department of the Air Force. We surveyed component officials responsible for the asset visibility initiatives we reviewed. We included questions in our survey related to our high-risk criteria. As part of the survey development, we conducted an expert review and pre-tested the draft survey. We submitted the questionnaire for review by an independent GAO survey specialist and an asset visibility subject matter expert from OSD. The expert review phase was intended to ensure that content necessary to understand the questions was included and that technical information included in the survey was correct. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pre-tested our questionnaire with component officials and other cognizant officials for 4 of the initiatives. During the pre-tests, conducted by telephone, we asked the DOD officials to read the instructions and each question aloud and to tell us how they interpreted the question. We then discussed the instructions and questions with them to identify any problems and potential solutions by determining whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, (3) the questionnaire was unbiased, and (4) the questionnaire did not place an undue burden on the officials completing it. We noted any potential problems and modified the questionnaire based on feedback from the expert reviewers and the pre-tests, as appropriate. We sent an email to each selected program office beginning on June 16, 2016, notifying them of the topics of our survey and when we expected to send the survey. We then sent the self-administered questionnaire and a cover email to the asset visibility program officials on June 20, 2016, and asked them to fill in the questionnaire and email it back to us by July 6, 2016. We received 8 completed questionnaires, for an overall response rate of 100 percent. We also collected data—such as the number of RFID tags and inventory amounts for clothing and textiles—from a sample of initiatives. The practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors.
For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses are processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps, as described above, in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that we obtained. Data were electronically extracted from the questionnaires into a comma-delimited file that was then imported into a statistical program for analysis. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and we addressed such issues as necessary. Our survey specialist conducted quantitative data analyses using statistical software, and our staff with subject matter expertise reviewed the open-ended responses. A data analyst conducted an independent check of the statistical computer programs for accuracy. We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix provides an overview of the non-generalizable sample of initiatives that we reviewed. These initiatives are intended to improve asset visibility as part of the Department of Defense's (DOD) 2014 Strategy for Improving DOD Asset Visibility (2014 Strategy) and its subsequent update in October 2015 (2015 Strategy). The process by which we selected these initiatives for this review is described in appendix I. The initiatives are shown in table 5.

In 1990, we began a program to report on government operations that we identified as "high risk," and we added the Department of Defense's (DOD) supply chain management area to our High Risk List. Our high-risk program has served to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. Our experience with the high-risk series over the past two decades has shown that the key elements needed to make progress in high-risk areas are congressional action, high-level administrative initiatives, and agencies' efforts grounded in the five criteria we established for removing an area from the High Risk List. These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removing an area from the list. These criteria call for agencies to show the following:
1. Leadership Commitment—a strong commitment and top leadership support.
2. Capacity—the capacity (i.e., the people and other resources) to resolve the risk(s).
3. Corrective Action Plan—a plan that defines the root causes and solutions and provides for substantially completing corrective measures, including steps necessary to implement the solutions we recommended.
4. Monitoring—a program instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.
5. Demonstrated Progress—the ability to demonstrate progress in implementing corrective measures and resolving the high-risk area.

We have reported on various aspects of DOD's supply chain, including asset visibility, and noted that DOD has taken several actions to improve asset visibility. We also noted a number of recommendations, actions, and outcomes needed to improve asset visibility, as shown in table 6. Specifically, in an October 2014 management letter to a senior Office of the Secretary of Defense (OSD) official, we reported on 7 actions and outcomes across the 5 criteria that we believed DOD should take to address long-standing weaknesses in asset visibility. Most recently, in our January 2015 report and February 2015 High-Risk update, we reported on progress that DOD has made in addressing weaknesses in its asset visibility, including developing its 2014 Strategy for Improving DOD Asset Visibility, and we made a number of recommendations.

In addition to the contact named above, Carleen C. Bennett, Assistant Director; Mary Jo LaCasse; Joanne Landesman; Amie Lesser; Felicia Lopez; Mike Silver; John E. Trubey; Angela Watson; and John Yee made key contributions to this report.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
High-Risk Series: Key Actions to Make Progress Addressing High-Risk Issues. GAO-16-480R. Washington, D.C.: April 25, 2016.
Defense Logistics: DOD Has Addressed Most Reporting Requirements and Continues to Refine Its Asset Visibility Strategy. GAO-16-88. Washington, D.C.: December 22, 2015.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Defense Logistics: DOD Has a Strategy and Has Taken Steps to Improve Its Asset Visibility, but Further Actions Are Needed. GAO-15-148. Washington, D.C.: January 27, 2015.
Defense Logistics: A Completed Comprehensive Strategy Is Needed to Guide DOD's In-Transit Visibility Efforts. GAO-13-201. Washington, D.C.: February 28, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Defense Logistics: Improvements Needed to Enhance DOD's Management Approach and Implementation of Item Unique Identification Technology. GAO-12-482. Washington, D.C.: May 3, 2012.
Defense Logistics: DOD Needs to Take Additional Actions to Address Challenges in Supply Chain Management. GAO-11-569. Washington, D.C.: July 28, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
DOD's High-Risk Areas: Observations on DOD's Progress and Challenges in Strategic Planning for Supply Chain Management. GAO-10-929T. Washington, D.C.: July 27, 2010.
GAO designated DOD's supply chain management as a high-risk area in 1990 and in February 2011 reported that limitations in asset visibility make it difficult to obtain timely and accurate information on assets that are present in a theater of operations. DOD defines asset visibility as the ability to provide timely and accurate information on the location, quantity, condition, movement, and status of items in its inventory. In 2015, GAO found that DOD had demonstrated leadership commitment and made considerable progress in addressing weaknesses in its supply chain management. This report addresses the extent to which DOD has (1) identified performance measures that allow it to monitor the progress of selected asset visibility initiatives identified in its Strategies; and (2) addressed the five criteria—leadership commitment, capacity, corrective action plan, monitoring, and demonstrated progress—for removing asset visibility from the High Risk List. GAO reviewed documents associated with selected initiatives, surveyed DOD officials, and observed demonstrations. The Department of Defense (DOD) has identified performance measures for the eight selected asset visibility initiatives GAO reviewed, but these performance measures generally cannot be used to monitor progress. Specifically, GAO found that the measures for the eight initiatives reviewed did not generally include key attributes of successful performance measures. For example, for six initiatives there were no baseline and trend data associated with the measures. While DOD's 2014 and 2015 Strategy for Improving DOD Asset Visibility (Strategies) called for performance measures to be identified for the initiatives, the Strategies lacked complete direction on how to develop performance measures that would allow DOD to assess the progress of the initiatives toward their intended outcomes. GAO also found that after-action reports for the initiatives did not always include key information needed to determine the success of the initiatives in achieving the goals described in the Strategies. Without improved performance measures and information to support that progress has been made, DOD may not be able to monitor and show progress in improving asset visibility. DOD has made progress and meets the criteria related to capacity and its corrective action plan but needs to take additional actions to monitor implementation and demonstrate progress to meet GAO's two remaining criteria for removal from the High Risk List, as shown in the figure. For the capacity criterion, in its draft update to the 2015 Strategy, DOD provides guidance on how to document cases where the funding for the initiatives is embedded within the overall program funding. Also, for the action plan criterion, DOD included matrixes in its 2015 Strategy to link ongoing initiatives to the Strategy's goals and objectives. DOD has also taken steps to monitor the status of initiatives. However, the performance measures for the selected initiatives that GAO reviewed generally cannot be used to track progress and are not consistently incorporated into reports to demonstrate results. Until these criteria are met, DOD will have limited ability to demonstrate sustained progress in improving asset visibility. GAO recommends that DOD use key attributes of successful performance measures in refining measures in updates to the Strategy and incorporate information related to performance measures into after-action reports for the asset visibility initiatives.
DOD partially concurred with both recommendations. The actions DOD proposed are positive steps, but GAO believes the recommendations should be fully implemented, as discussed in the report.
When Social Security was enacted in 1935, the nation was in the midst of the Great Depression. About half of the elderly depended on others for their livelihood, and roughly one-sixth received public charity. Many had lost their savings. Social Security was created to help ensure that in the future the elderly would have adequate incomes in retirement and would not have to depend on welfare. Instead, the new program would provide benefits based on the payroll tax contributions of workers and their employers. Today Social Security is much more than a retirement program. In 1939 Social Security coverage was extended to the dependents of retired and deceased workers and in 1956 to the disabled. Over one-third of beneficiaries receive benefits for reasons other than old age. Our work on Social Security reform has emphasized the need for change not only because future program revenues are expected to fall short of what is needed to pay currently scheduled benefits in full but also because Social Security, Medicare, and Medicaid taken together will consume an increasing share of the budget and the economy. To move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. Little room would be left for other federal spending priorities such as national defense, education, and law enforcement. Absent changes in the structure of Social Security and Medicare, sometime during the 2040s the government would do nothing but pay interest on the debt and mail checks to retirees. Accordingly, substantive reform of Social Security and health programs remains critical to recapturing our future fiscal flexibility. Overall, the federal budget is facing unsustainable deficits and debt. Our most recent long-term budget simulations provide a compelling illustration of how unsustainable the long-term fiscal outlook is under current policies. As shown in figure 2, the long-term outlook under plausible assumptions is bleak. A demographic shift will begin to affect the federal budget in 2008 as the first baby boomers become eligible for Social Security benefits. Over time, this shift will cause spending for federal health and retirement programs, including Social Security, to swell. Long-term commitments for these and other federal programs will drive a massive imbalance between spending and revenues that cannot be eliminated without difficult policy choices and ultimately significant policy changes. As figure 2 shows, contrary to popular perception, although Social Security grows in size, it is not the major driver of the long-term fiscal challenge. Spending for Medicare and Medicaid is expected to grow much faster. Many specific solutions have been proposed for Social Security, but approaches to reducing health care cost growth remain elusive. Moreover, addressing federal programs such as Medicare and the federal-state Medicaid program will need to involve changes in the health care system of which they are a part. This will be a societal challenge affecting all age groups. While Social Security reform alone cannot eliminate the long-term fiscal challenge, the likely effects of reform on the nation's fiscal future should be clearly understood and taken into account. Social Security's benefit payments and program receipts are tracked in federal budget accounts that are known as trust funds. Trust funds are one type of mechanism created to account for receipts that are dedicated to a specific fund for a specific purpose.
Social Security has two trust funds, the Old-Age and Survivors Insurance (OASI) Trust Fund and the Disability Insurance (DI) Trust Fund. The combined OASDI trust fund comprises the financial resources of the Social Security system. Social Security has a permanent appropriation that permits the payment of benefits as long as the relevant trust fund account has a sufficient balance. Social Security's outlays are limited to trust fund balances, but the program's outlays and revenues are also part of the federal unified budget. Today, Social Security payroll tax revenues exceed benefits. In 2005, the Social Security trust fund paid $530 billion in benefit payments and administrative costs and took in $608 billion in cash revenues, leaving a cash surplus of about $78 billion. By law, the Social Security trust fund must invest any cash surpluses in interest-bearing federal government securities. Throughout its history, Social Security has invested mostly in a special type of nonmarketable securities that, like debt held by the public, are guaranteed by the full faith and credit of the U.S. government. Treasury borrows the cash from Social Security's surplus to pay for other government expenses, and this use of Social Security's excess cash revenues reduces the amount Treasury would otherwise need to borrow from the public to finance other federal programs. (See fig. 3). These excess cash revenues, however, will begin to diminish in 2009, one year after the oldest members of the baby boom generation first become eligible for Social Security old age benefits. This downturn in the Social Security cash surplus—the difference between payroll taxes and benefits paid—will begin a squeeze on the rest of the budget that will worsen in the coming years, making less cash revenue available for other federal priorities. By 2017, trust fund cash revenues will be inadequate to pay currently scheduled benefits in full, and the Social Security trust fund will need to redeem trust fund assets from the Treasury. To pay the trust fund, Treasury will need to provide cash from general revenues in exchange for those trust fund securities. This can come only through increased revenue, increased borrowing from the public, reduced spending in the rest of the government, or some combination of these. While the trust fund is redeeming its securities, it will continue to pay full benefits, but the redemptions will reduce overall federal budgetary flexibility. As we have said previously, Social Security reform proposals will need to be evaluated on a number of criteria. Our work on various aspects of this important program has emphasized that Social Security reform is about more than solvency. To evaluate reform proposals, we have suggested that policymakers should consider three basic criteria: (1) the extent to which the proposal achieves sustainable solvency and how the proposal would affect the economy and the federal budget; (2) the balance struck between the twin goals of individual equity (rates of return on individual contributions) and income adequacy (level and certainty of benefits); and (3) how readily such changes could be implemented, administered, and explained to the public. Our first criterion of sustainable solvency reflects the need to look at Social Security reform both in terms of its trust fund and in the larger context of the federal budget as a whole. It is different from the definition used by the Social Security Administration's Office of the Chief Actuary (OCACT), which is focused solely on the trust fund, for which that office is responsible.
From a micro perspective, projected trust fund balances can provide a vital though imperfect signaling function for policymakers about underlying fiscal imbalances in covered programs. Tracking the estimated future balances makes it possible in turn to estimate how much more funding is needed to pay for the benefits scheduled in current law. A shortfall between the long-term projected fund balance and projected costs can signal that the fund, either by design or because of changes, is collecting insufficient monies to finance currently scheduled future payments. This signaling device can eventually prompt policymakers to action. From a macro perspective, however, program solvency measures such as the trust fund exhaustion date and the actuarial balance calculation provide no information about the broader question of program sustainability—that is, the capacity of the future economy and the federal unified budget to pay program benefits over the long run. When a program is not fully self-financed, as is the case with Social Security, projected accumulated trust fund balances do not necessarily reflect the full future cost of existing government commitments. Accordingly, trust fund balances are not an adequate measure of Social Security’s sustainability. The critical question is whether the Nation and the government as a whole can afford the benefits in the future and at what cost in terms of other claims on scarce resources. Extending a trust fund’s solvency without reforms to make the underlying program more sustainable over the long term can obscure the warning signals that trust fund balances provide, thereby creating a false sense of security and delaying needed program reform. Evaluating proposals is a complex task involving trade-offs between competing goals. Reform proposals should be evaluated as packages that strike a balance among individual reform elements and important interactive effects between these elements. The overall evaluation of any particular reform proposal depends on the weight individual policy makers place on each criterion. Since its establishment in 1935 Social Security has been financed primarily by payroll taxes contributed equally by employers and employees. Both tax rates and benefits have changed over time, but Congress has generally rejected proposals for including general revenue in financing Social Security benefits. Payroll tax rates have increased, from a total of 2 percent of taxable payroll in 1937—when payroll taxes were first collected—to 12.4 percent today. Benefits have also been expanded to include workers’ families and the disabled. The question of whether some general revenue should be used to minimize the burden of payroll taxes has been debated since the program’s inception. The Committee on Economic Security (CES), tasked by President Roosevelt with designing the program, believed that expected benefit payments would exceed expected payroll tax revenues beginning about 1965 and at that time general revenue should be used to fill the gap. Under this financing arrangement, the general revenue share was ultimately expected to reach about one-third of total revenues. President Roosevelt rejected the idea of using general revenue in program financing. He endorsed payroll tax financing on the grounds that it would ensure the new program would be “self-supporting.” A perceived link between benefits and payroll tax contributions would, he believed, serve to preserve the program in the future. 
Using general revenue would make the program welfare—in President Roosevelt's words, "the dole by another name." President Roosevelt's financing approach envisioned the buildup of a reserve fund that would serve to fund benefits in the long term, but objections were made to this approach. Some believed that the existence of a large reserve fund would lead to higher benefit levels or other increased government spending; others objected to the underlying concept of prefunding benefits, which they believed would lock in specific levels of support for aged beneficiaries in the future. Congressional changes to the program that expanded benefits and postponed scheduled payroll tax increases put the program on a pay-as-you-go basis, that is, revenues from current workers in a given year pay for the benefits of current beneficiaries in that year. Nevertheless, the issue of whether general revenue should be used to supplement payroll tax financing has recurred throughout Social Security's history. During short-term financing crises in the late 1970s and early 1980s, proposals were made for general revenue use. At that time some opposed this use of general revenue, believing that it would obscure the true cost of the program and lead to benefit expansion. Reform legislation passed in 1983 did include permanent use of some general revenue by imposing a new income tax on the Social Security benefits of upper-income retirees and dedicating that tax to the trust fund. Although the income thresholds were not indexed to inflation, amounts of revenue to Social Security from this source have been and remain small relative to total program tax revenue. Since the legislation passed in 1977 and 1983, a temporary buildup of trust fund assets has caused Social Security to deviate somewhat from pay-as-you-go financing. This occurred in part because the large baby boom generation has made the workforce large relative to the beneficiary population. As the baby boom generation retires and is replaced by a workforce that will grow less rapidly than in the past, trust fund assets will be redeemed to pay benefits, but these assets plus payroll tax revenues will eventually be insufficient to pay currently scheduled benefits in full. To deal with this structural imbalance, many proposals have included the use of general revenue to supplement payroll tax financing—often in large amounts relative to total financing and over extended time periods. For example, both the proposals made by President Clinton in 1999 and the reform models put forward in 2003 by the Commission to Strengthen Social Security established by President George W. Bush included the use of general revenue. OCACT scoring memos are the primary source of information on recent Social Security reform proposals. Since the mid-1990s, OCACT has scored a wide variety of comprehensive reform proposals. Each proposal modifies the OASDI program using one or several of the following provisions that: (1) reduce benefits, e.g., through changes to indexing formulas and/or other methods; (2) increase benefits for special populations; (3) raise revenue through payroll tax increases; (4) use general revenue financing through a range of mechanisms (not always specified as general revenue); (5) invest trust fund assets through government investment in marketable securities; or (6) change the current structure of the program by creating individual accounts (IAs). The format of these scoring memos has evolved over time, partially in response to feedback from users.
More recent scoring memos are available on OCACT’s web site. In addition, OCACT has begun to post estimates of many of the stand-alone provisions that have been suggested to modify the Social Security program and improve its financial status. Many of these are also included in the various comprehensive proposals. OCACT’s scoring memos typically emphasize two important summary measures: (1) the change to actuarial balance and (2) ability of the trust fund to meet obligations throughout the 75-year period and beyond, an indicator of “sustainable solvency” as defined by OCACT. In recent years, sustainable solvency as defined by OCACT has become the standard by which reform proposals are measured. When OCACT evaluates a reform proposal as meeting the definition of sustainably solvent, the proposal sponsor often highlights this point in press statements and other statements. This measure, however, considers only trust fund effects and not the effect of the proposal on the federal budget. To discover crucial information on a proposal’s general revenue use and federal budget effects, a user of OCACT’s scoring memos must consult the detailed tables at the end of the memo. The type and amount of information included in the tables has increased over time. Current tables are generally comprehensive in presenting a proposal’s financial effects. Columns display year-to-year changes in the financial operations of the trust fund and the unified budget as well as the cashflow between the trust fund and the general fund of the U.S. Treasury. For those plans that include either government or IA investment in equity markets, OCACT publishes two sets of tables, one reflecting “expected-yield” assumptions on investments and a second set reflecting “low-yield” assumptions. A few of the scoring memos we analyzed included other tables that provide insight into general revenue use. One of these tables shows the impact of each individual provision on the long-range actuarial balance (as a percentage of payroll). This table shows to what extent a single provision, by itself, either improves or worsens the actuarial balance. Budget experts we spoke with agreed that the tables had evolved and now provide more information but they also said that the tables are difficult to use and take much effort to understand. In particular, they said that the “information is not reader-friendly” and “key estimates of general revenue are not highlighted and it is not always clear if there is a specified source for the general revenue.” Our analysis of 17 recent scoring memos generally confirmed this assessment. Policymakers and the public may have a difficult time comparing how different plans get to sustainable solvency and the implications for the rest of the budget, future deficits, and debt held by the public. Similarly, the 1999 and 2003 Technical Panels on Assumptions and Methods, convened by the Social Security Advisory Board, expressed concerns about consistency in presentation of information in scoring memos. One recommendation made by budget experts was for an up-front summary table with crucial information that a reader needs in order to compare plans. Although we found no agreement on precise content for the table, information about benefit cuts, tax increases, and general revenue could somehow be included. OCACT staff told us that the principal purpose of their scorings was to show the effect of a proposal on trust fund solvency and not on the budget as a whole. 
They added that they were generally satisfied with the current format for scoring memos, noting that they had not received any negative feedback from users. They did agree that the information could be more user-friendly and are considering ways to achieve this goal, for example, through the inclusion of visuals. OCACT staff further noted that scoring proposals is resource-intensive. Although some scorings can take up to a year, others must be done under tight time frames, e.g., when sponsors are planning to introduce legislation. They also told us that they had discussed options for a summary table with selected users but had not found any consensus on what information should be highlighted. OCACT staff emphasized that their scorings are and need to continue to be perceived as objective. In any case, OCACT staff did not think that the use of general revenue should be highlighted above other proposal changes. More recently, CBO has developed the capacity to do long-term estimates of Social Security reform proposals and has completed five long-term scoring memos to date. OCACT and CBO scoring memos have key substantive and presentational differences. One particularly important difference is the use of different economic assumptions, which results in CBO currently having more optimistic estimates of current-law program finances. Although budget experts we spoke with generally thought that CBO scoring memos have been beneficial analytical tools, they also thought that OCACT scoring memos were likely to continue to be the primary source of information in any debate over reform proposals. Therefore, having OCACT scoring memos provide clear and easily accessible information on any use of general revenue and on the impact of any reform proposal on the broader budget remains important. Almost all proposals we reviewed package multiple revenue options and/or benefit changes to achieve sustainable solvency, but they differ in both the broad approach they take for dealing with the long-range solvency problem and the revenue mechanisms they use. Reform approaches can be divided into two broad categories: approaches that maintain the current structure, that is, a pay-as-you-go social insurance program of defined benefits paid for primarily by payroll tax revenue, and approaches that create a new structure that includes IAs. Provisions that guarantee revenue to the Social Security trust fund can be classified as reallocated general revenue mechanisms, payroll tax mechanisms, or new general revenue mechanisms (see Text Box). A reallocated general revenue mechanism is any provision that increases revenue to the program by redirecting general revenue expected under existing law to the trust fund. Payroll tax mechanisms directly change the amount of payroll taxes contributed and therefore increase payroll taxes flowing into the trust fund. A new general revenue mechanism would establish a new source of income to the general fund and dedicate it to the Social Security trust fund. Examples would be the creation of a national sales tax or increases in income or excise taxes, with the revenue from any of these dedicated to the Social Security trust fund. None of the proposals we examined introduced new general revenues. All of them—although described and characterized in various ways—used only mechanisms that would reallocate general revenue and/or increase payroll taxes. 
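To make the budgetary distinction among these three categories concrete, the following minimal sketch, written in Python with purely hypothetical dollar amounts rather than figures from any scored proposal, shows the one-year arithmetic. Revenue that is new to the budget (a payroll tax increase or a newly created dedicated tax) raises both trust fund income and unified budget receipts, while reallocated general revenue raises trust fund income but, as an intragovernmental transfer of receipts the budget already collects, leaves the unified balance unchanged when benefit outlays are held fixed.

# Minimal sketch with hypothetical amounts (in billions); not drawn from any scored
# proposal. Benefit outlays are held fixed, consistent with the assumption that
# currently scheduled benefits are paid in full; trust fund interest is ignored.

def one_year(payroll_tax, other_receipts, benefit_outlays, other_outlays,
             new_payroll_tax=0.0, new_dedicated_tax=0.0, reallocated_gr=0.0):
    """Return (trust fund tax income, unified budget balance) for a single year."""
    # All three mechanisms add to the trust fund's income.
    trust_fund_income = payroll_tax + new_payroll_tax + new_dedicated_tax + reallocated_gr
    # Only revenue that is new to the budget adds to unified receipts; reallocated
    # general revenue is already counted in other_receipts.
    unified_receipts = payroll_tax + other_receipts + new_payroll_tax + new_dedicated_tax
    unified_balance = unified_receipts - (benefit_outlays + other_outlays)
    return trust_fund_income, unified_balance

base = dict(payroll_tax=700, other_receipts=1800, benefit_outlays=750, other_outlays=1900)
print(one_year(**base))                        # baseline
print(one_year(**base, reallocated_gr=50))     # trust fund helped; unified balance unchanged
print(one_year(**base, new_payroll_tax=50))    # both the trust fund and the unified balance improve
print(one_year(**base, new_dedicated_tax=50))  # a new dedicated tax has the same unified effect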
All 17 of the Social Security reform proposals we analyzed include at least one mechanism to increase revenue to Social Security, and some use more than one mechanism. Fourteen of the 17 proposals we examined would reallocate general revenue, that is, transfer existing revenue from the general fund; the other 3 proposals did not use general revenue. The 14 proposals that used reallocated general revenue did so by means of five different mechanisms. These mechanisms can be characterized as providing for either unlimited or limited amounts of general revenue financing. The first mechanism provides for unlimited general revenue; the other four use varying means to provide limited or defined amounts of general revenue financing. Unlike plans with unlimited transfers to assure trust fund solvency, proposals with general revenue transfers limited by specified amounts, source, or formula could be insolvent at some point in time if the actual financial condition of the program differs from the OCACT estimates. Unlimited general revenue transfers to the trust fund of whatever amount is necessary to maintain trust fund solvency (e.g., a 100 percent trust fund ratio). This mechanism is the most frequently used general revenue option. It is found in 9 of the 17 plans we examined. This provision usually states that general revenue transfers are to be made if, at any time, the combined OASDI trust fund ratio is projected to fall below 100 percent under the provisions of the plan. Transfers of sufficient amount and timing would be made to prevent the trust fund from falling below 100 percent of the annual program cost. In simple terms, funds sufficient to pay projected benefits for the year, that is, to maintain solvency, would be transferred as needed from the general fund to the Social Security trust fund without regard to the amount. This provision alone guarantees program solvency under any circumstance because it provides the trust fund with an unlimited and open-ended draw on the general fund. General revenue transfers specified by formula or amount. Six plan sponsors propose using this mechanism, which specifies—in actual dollars, as a percentage of taxable payroll, or using a formula—how much general revenue would be transferred to the trust fund in a given year. Transfers would be limited by these specifications. In other words, transfers are made independent of the financial condition of the program (as measured by the trust fund ratio). “Refundable tax credits” for individual add-on accounts (or to individuals to offset account contributions). Alone among the general revenue options discussed, under this mechanism revenue would not be transferred to the OASDI trust fund, but would be paid out immediately—either to provide funding to individual add-on accounts or to offset the cost of those accounts. The four proposals using this option would either credit the general revenue directly to the workers’ add-on accounts (the amount would be determined by the plan’s provisions) or include the amount as a credit on the individuals’ income taxes to partially offset the payroll tax increase introduced to fund the account. Although in these proposals the add-on accounts would be financed outside of the current program—either entirely or partially funded using reallocated general revenue—they would be considered part of a new Social Security system. Dedication of revenue generated from the estate tax to the trust fund. 
This revenue option would dedicate revenue from the estate tax to the Social Security trust fund to help finance the current structure. One proposal would permanently establish a tax of 45 percent on all estates of deceased taxpayers with taxable assets in excess of $3.5 million (as in current law for 2009). The tax revenue would be dedicated to the OASDI trust fund instead of to the general fund. Redirection of those revenues from Social Security benefit taxation that now go to the Medicare Hospital Insurance (HI) trust fund to the OASDI trust fund. Currently, up to 85 percent of an individual’s or couple’s OASDI benefits may be subject to federal income taxation if their income exceeds certain thresholds. The income tax revenue attributable to the first 50 percent of OASDI benefits is already dedicated to the Social Security trust fund, but the revenue associated with the amount between 50 and 85 percent of benefits is dedicated to the Medicare HI trust fund. Two proposals would dedicate all of the income from the tax on OASDI benefits to OASDI. Reform plans with individual accounts may have indirect effects on income from benefit taxation. Proposal sponsors typically stipulate whether disbursements from individual accounts would be taxed like current Social Security benefits or not taxed at all. If account disbursements are considered OASDI benefits for income tax purposes, income to the trust fund could be greater (or smaller) in cases where the combined traditional benefit and account disbursement is greater (or less) than under current law. There would be similar implications for the HI trust fund if benefit taxation income is distributed as under current law. Plans that reduce the taxable traditional OASDI benefits and do not tax individual account distributions would lower trust fund revenue from this source. Of the 17 proposals, 7 increase payroll tax revenue using one or more of the following four mechanisms: Raise or eliminate the “taxable maximum limit” or “cap” on covered earnings (with or without retaining the cap for benefit calculation). Incorporated in five proposals, this is the most common mechanism for bringing in new payroll tax revenue. This mechanism would not change the 12.4 percent tax rate but would either increase the level of wages taxed or completely eliminate the cap so that all covered earnings are taxed. This latter option is similar to the Medicare HI payroll tax of 2.9 percent, which applies to all covered earnings. SSA recently estimated that in 2005 about 84 percent of covered earnings were subject to the OASDI tax (i.e., were taxable) and projected decreases in the ratio of taxable wages to covered wages through 2015. After 2015, SSA expects this percentage to remain approximately constant at 82 percent of covered earnings. Four plans propose to increase the percentage of taxable earnings under the “cap” to a level between 87 and 90 percent of covered earnings. A fifth plan would completely eliminate the earnings cap and would tax all earnings at the 12.4 percent rate. Increase the 12.4 percent payroll tax on taxable earnings. Four plans propose raising payroll tax rates by between 1 and 3 percentage points. None of the plans we reviewed propose an immediate tax rate increase of 2.02 percentage points, the estimated increase needed to achieve 75-year solvency through year 2080. Expand coverage to state and local government employees not currently covered. 
Three plans would require those public employers not currently providing Social Security coverage to cover newly hired employees. Tax covered earnings above the “cap” but at a lower tax rate (with or without retaining the “cap” for benefit calculation). Two proposals apply a rate much lower than 12.4 percent, between 3 and 4 percent, to covered earnings above the established taxable maximum. Some of the mechanisms to increase payroll tax revenue could also result in increased benefit costs. For example, proposals that raise or eliminate the “cap” on covered earnings or tax earnings above the “cap” may or may not include these wages when calculating benefits. If the wages are included in the benefit formula, benefit costs would increase in the future and the improvement in the actuarial deficit would be smaller than if the wages were not included in the benefit calculation. Expanding coverage to all state and local government workers would bring in additional payroll tax revenue but would also increase long-term benefit costs, as newly covered earnings would entitle affected workers to the associated benefits. None of the five general revenue mechanisms in the plans we examined would draw on new revenue sources, and hence none would bring new revenue to the federal budget; the four payroll tax mechanisms, in contrast, would bring new revenue to the budget as a whole. Table 1 categorizes the mechanisms in terms of this framework. The number in parentheses indicates the number of reform proposals that contain each mechanism. Although there is no analytic link between the inclusion of IAs and the selection of a specific revenue mechanism, in our review we found that, in most cases, reallocated general revenue mechanisms are used to help structure a new Social Security system with IAs. On the other hand, payroll tax mechanisms are used about half the time to help finance the current program and about half the time in proposals creating a new system including IAs. Table 2 summarizes the reform approach and revenue mechanisms used in reform plans. The numbers in parentheses indicate the number of reform proposals that contain a particular mechanism. Most proposals—15 of the 17 we reviewed—included provisions aimed at increasing revenue through investment in private markets. Two proposals used direct government investing in marketable securities through the current program structure and 13 created a new Social Security structure including individual accounts. Investing in marketable securities creates the potential for improved returns but increases investment risk for the investing party (the government or individuals). Therefore, unlike reallocated revenue and payroll tax mechanisms, neither investment approach assures additional income. Proposals for government investment of the trust fund anticipate returns that would increase revenue to the trust fund, while in most IA proposals the benefit obtained from any increased returns would be credited to the individuals’ accounts and generally included as part of the account distribution. Individual account proposals may redirect revenue from the program or the federal budget but typically compensate for lost revenue through either across-the-board benefit cuts or “benefit offsets” to currently-scheduled benefits. 
In these proposals, the “total benefit” from the new Social Security system consists of a combination of the traditional Social Security defined benefit (including any modifications/offsets) and the individual account distribution (including any modifications/offsets). Some plans establishing IAs propose to guarantee benefit levels irrespective of actual returns; they are able to do this by using general revenue transfers. Guarantees would benefit account holders by partially or fully protecting them from risk. However, the increased benefits to account holders would create a corresponding cost for the federal government. Whenever an account fell short of promised benefits, the government—and, implicitly, taxpayers—would make up the difference. The effect of a reform proposal on federal budget balances and debt cannot be determined from its effect on trust fund solvency. The proposals we reviewed illustrate this. All plans included in our review were scored by OCACT as able to pay the plan’s benefits in full over the 75-year period. However, plans’ impact on the federal budget as a whole varied widely in the scorings. For example, the effect on debt held by the public ranged from an improvement of $45 trillion to a worsening of $41 trillion over the 75-year projection period. The impact of a proposal package on the federal budget is shown in the year-by-year scoring of effects on unified deficits and debt that OCACT provides in its technical tables. That the impact on the trust fund and the impact on the budget as a whole can differ is not surprising. By definition, any increase in revenue provided to the trust fund—whether new general revenue, reallocated general revenue, or increased payroll tax revenue—will increase the trust fund’s capacity to pay benefits. Effects on the federal budget, however, depend on the type and amount of revenue and also on assumptions about payment of currently scheduled benefits. We compare the impact of new and reallocated revenue on the budget and long-term fiscal outlook under two different assumptions about the payment of currently scheduled benefits beyond projected trust fund exhaustion in 2040. First, assume, as OCACT does in its memos and GAO does in its long-range simulations, that currently scheduled benefits would be paid in full throughout the estimating period (i.e., borrowing would increase to fund the benefits). Under these assumptions—and assuming no other changes in spending and/or revenue—either new general revenue or additional payroll tax revenue would replace some of that borrowing and improve the long-term fiscal outlook. Reallocated general revenue equal to (or less than) the Social Security financial shortfall would have no impact on federal budget deficits, debt, or the long-term fiscal outlook. In amounts greater than the shortfall, reallocated general revenue would make the long-term outlook worse, all other things equal. As an alternative, assume instead that benefit outlays would be limited to trust fund income once the trust fund has reached exhaustion in 2040. Under this alternative, federal budget balances would be the same through 2040 as under the first assumption and better over the longer term due to lower annual outlays and less borrowing. Under this “trust fund exhaustion scenario,” new revenue dedicated to Social Security early in the projection period would improve annual budget balances and extend the time period during which currently scheduled benefits could be paid in full. 
The new revenue would also reduce debt for most of the 75-year period. Any amount of reallocated general revenue, on the other hand, would increase federal budget deficits, reduce budgetary flexibility, and increase debt held by the public relative to this alternative assumption of trust fund exhaustion. All else equal, the reallocated general revenue would provide additional income to the trust fund and make possible additional benefit outlays, but borrowing from the public would be needed to pay for these outlays. Use of different time frames can also lead to different conclusions about the federal budget effects of additional revenue, including reallocated general revenue. For example, some proposals that restructure the Social Security system to rely more on individual accounts—funded in part through a “carve-out” of current payroll tax revenues—use large amounts of reallocated general revenue at the outset to help make up the gap as benefit reductions from currently scheduled levels are phased in. Those favoring this approach to system restructuring may view the reallocated general revenue as a loan from the rest of the budget that will be paid back. Advocates for these types of changes point out that once the transition to the new system is complete, the cost of the Social Security program will have been reduced compared to paying currently scheduled benefits in full. Some favoring program restructuring have advocated use of an infinite horizon rather than the 75-year time frame traditionally used for actuarial assessment of the trust fund. These analysts view 75 years as an arbitrary cut-off point. They note that the use of this horizon can be misleading where a gap between projected revenues and benefit payments continues to grow after the 75-year window, as is the case with the current program. Those who oppose using reallocated general revenue to achieve system restructuring emphasize a shorter time frame in their analyses. They point to higher levels of federal spending and debt held by the public over at least the next several decades resulting from this approach to reform. These analysts note that it is in this nearer time frame that the baby boom generation will retire and the cost to the government will escalate dramatically, driven by demographics and compounded by federal spending on health. These analysts were concerned that revenue used for Social Security would not be available for Medicare and Medicaid and that, absent changes in fiscal policy, spending on the three major entitlements would lead to unsustainable levels of debt long before Social Security restructuring had reduced federal commitments for that program. These analysts called for a focus on the long-term federal budget problem as a whole and a search for solutions to Social Security’s financing problems within that larger context. In concept, the mechanism of unlimited reallocated general revenue as needed to assure trust fund solvency, used in 9 of the 17 proposals we reviewed, represents the largest potential draw on the federal budget. The amount of reallocated general revenue actually provided to the trust fund in these proposals would vary depending on the financial requirements of the Social Security program, and those requirements would depend on the proposal’s other provisions. The other four general revenue mechanisms would use specified amounts of reallocated general revenues. 
That is, the amount of general revenue used would not vary according to the financial requirements of the Social Security program but would be dictated by the parameters of the mechanism, e.g., specified in current dollars or as a share of taxable payroll in specific years. In terms of size, amounts of reallocated general revenue used by mechanisms in proposals we reviewed varied widely, ranging up to over 200 percent of the total program financial shortfall. Estimates of general revenue, whether deriving from the mechanism of unlimited reallocated general revenue as needed to assure trust fund solvency or from specified general revenue transfer amounts, were large in some cases. Table 3 shows amounts of reallocated general revenue by type of mechanism. Some of the budget experts with whom we spoke suggested an approach we did not find in any of the reform proposals we examined. These experts suggested that plans could establish a new source of general revenue and dedicate the new revenue to the Social Security trust fund. For example, they suggested that instead of payroll tax increases, income taxes could be raised or a value-added tax instituted with all or part of the revenue dedicated to Social Security. Some of the budget experts we spoke with observed that any policy decision to introduce new revenues dedicated to a particular program could have implications for the capacity of the rest of the budget to deal with other fiscal challenges. Enactment of any new taxes for Social Security could affect the public’s willingness to bear taxes to fund other important national priorities, such as Medicare. In addition, taxes may have effects on individuals’ saving behavior and on labor supply, effects that are beyond the scope of this report. Despite their differing views on reform approaches, budget experts generally either expected or advocated that some use of general revenue would be part of reform. Some of them were concerned about the use of reallocated general revenue for Social Security in view of the long-term fiscal challenge facing the nation. One emphasized that the public needs to understand that reallocated general revenue for Social Security is not “free.” Reallocated general revenue would need to be paid for now or later through lower spending, higher taxes, and/or more debt. Another expert expressed the view that general revenue would be needed to reduce the political pain involved in reform but cautioned that using general revenue for Social Security could mean an even larger share of federal resources committed to funding programs that serve the elderly in coming decades, further squeezing out other national priorities. Most experts expressed the view that greater transparency about the use of general revenue in reform plans was needed. Most budget experts with whom we spoke expressed concern about any provision of reallocated general revenue as needed for Social Security to assure trust fund solvency. These analysts, including some who generally do not find trust fund accounting meaningful, said that the signaling provided by the Trustees’ projected trust fund exhaustion date has served a useful purpose by alerting policymakers and the public to the need for program reform. Providing for the use of unlimited reallocated general revenue transfers to achieve sustainable solvency would mean that the trust fund would never be projected to reach exhaustion. 
As a result, these analysts observed, the true costs of the program would become less transparent, while at the same time the public might think that the Social Security financing shortfall had been resolved. One expert was especially concerned that explicit guarantees that total payouts from accounts plus Social Security would be no less than currently scheduled benefit levels could prove expensive for the federal budget. Such guarantees would likely add to the cost of the Social Security system because individuals would be protected against downside risk while still being allowed by a guarantee to benefit on the upside, this expert said. In earlier work we noted that any proposal that would guarantee benefits and rely on enhanced rates of return on individual accounts to finance long-term solvency may create an additional draw on general revenue that could serve to increase the deficit over the long term. Four of the 17 proposals we reviewed for this report included this type of provision; 3 of the 4 also provided for unlimited transfers of reallocated general revenue to maintain trust fund solvency. In coming decades, our nation will face a serious long-term fiscal challenge that will put America’s fiscal future at risk. As we have said in our body of work on Social Security, substantive reform of this important program will involve hard choices and will need to modify the program’s underlying commitments for the future. To do this—to achieve the goal of saving Social Security and making it sustainable for the future—reform will need to increase program revenues and/or decrease program expenses. These are the only options. It may well be that some general revenue will be part of reform. If so, this would be a major substantive change. Considering both the long-term fiscal situation and the potential implications for Social Security, the use of any general revenue in proposals—reallocated or new—will need to be clearly understood by both policymakers and the general public. Although OCACT’s determination of “sustainable solvency” for a reform plan will remain an important threshold that plans will need to meet, it is not a sufficient benchmark in the context of the long-term fiscal outlook facing the United States. It is also an incomplete metric for comparing and evaluating reform plan financing implications. Given that most recent proposals use reallocated general revenue, a determination of trust fund solvency alone can be especially misleading. By definition, a determination of trust fund solvency is not designed to and does not provide any information on how, when, or to what extent a plan is likely to worsen or improve the already daunting future federal fiscal imbalances. Clarity about these broader implications of any proposal will be essential as reform changes are debated. Although raising taxes (payroll or other) or cutting benefits would have tangible consequences for taxpayers and beneficiaries, e.g., less take-home pay or smaller benefit checks, the consequences of transfers from the non-Social Security budget in the form of reallocated general revenue are less likely to be clearly observable. Reallocated general revenue, however, is not without cost. Regardless of how general revenue is provided to Social Security, it must be paid for at some point. The question is when, and by whom. 
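The scale of the open-ended draw created by the unlimited transfer mechanism can be illustrated with a simple projection. The Python sketch below uses purely hypothetical income, cost, and interest figures rather than OCACT's methods or assumptions; it simply tops up end-of-year assets to one year's cost whenever they would otherwise fall short, a rough stand-in for maintaining a 100 percent trust fund ratio. Once program cost outruns tax income, the required transfers grow year after year without any built-in limit, and, absent other changes, each transferred dollar would be financed by borrowing from the public.

# Stylized sketch of "general revenue transfers as needed to maintain a 100 percent
# trust fund ratio." All figures are hypothetical and the mechanics are simplified;
# this is not OCACT's projection methodology.

def project(assets, years, interest_rate=0.05):
    """years is a list of (non-interest income, cost) pairs for successive years."""
    total_transfers = 0.0
    for income, cost in years:
        assets += income + interest_rate * assets - cost
        transfer = max(0.0, cost - assets)   # top up to one year's cost (roughly a 100 percent ratio)
        assets += transfer
        total_transfers += transfer
        print(f"cost={cost:6.0f}  transfer={transfer:6.1f}  end-of-year assets={assets:7.1f}")
    return total_transfers

# A stylized path in which cost grows faster than tax income, as projected for OASDI.
years = [(700 + 5 * t, 700 + 40 * t) for t in range(8)]
print("total transfers:", round(project(assets=800.0, years=years), 1))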
OCACT scoring memos have played and will continue to play an indispensable role in the debate by analyzing how proposals would affect the trust fund’s finances and, in more recent years, how proposals would affect the federal budget. OCACT’s valuable information and complex analyses could make an even greater contribution if they were more readily accessible to nonexpert users. It would be helpful for OCACT to include near the beginning of each memo a summary of the relative contribution of each provision in a proposal package to trust fund solvency. In addition, OCACT could devise a way to enable policymakers and the public to quickly and accurately grasp how the elements of a proposal, including any general revenue, work together to affect the trust fund and the overall federal budget. This type of presentational change would not require any additional analysis but could greatly facilitate comparison of proposals to one another. We recognize that developing a summary that would be easily accessible to policymakers and the general public and perceived as fair by all participants in the reform debate will present challenges. The elements of reform proposals are likely to continue to evolve, and new formats and analyses may become necessary. We recognize that a balance will need to be struck between standardizing formats and allowing the information provided to continue to evolve with the debate. Nevertheless, a more standardized summary early on could make clear the relative contributions of benefit cuts and increased revenue from payroll and nonpayroll taxes. In addition, it could illuminate any use of general revenue and how use of such revenue is likely to affect the long-term budget outlook. This summary would be a presentational, not analytical, modification with major potential benefits for public understanding of proposed changes to this popular program that is important to virtually all Americans. To improve public understanding of proposed changes to Social Security, we recommend that the Commissioner of SSA direct the Office of the Chief Actuary at SSA to include a summary presentation of its analysis in future scoring memoranda that will enable policymakers and the general public to quickly and easily compare Social Security reform proposals, especially with respect to proposed use of general revenue and federal budget implications. In written comments (reprinted in app. II) on a draft of this report, SSA suggested that we should direct our recommendation to the Chief Actuary, not to the Commissioner. This change, SSA said, would target the entity that develops the analysis and would also be sensitive to the independence of the Chief Actuary. As SSA stated in its comments, by legal mandate the Chief Actuary does report directly to the Commissioner. Because our recommendation concerns only the presentation of actuarial estimates—not any change in which estimates are developed or in the analytical work required to develop them—we believe the recommendation as it stands recognizes and is appropriately sensitive to the independence of the Chief Actuary while at the same time reflecting the organizational structure of the Office of the Chief Actuary within SSA. SSA did not explicitly agree or disagree with our recommendation in its comments. In response to the recommendation, SSA noted that recent OCACT memos had added an additional table (“table d”) that, SSA believes, already provides key information on general revenue use in proposals. 
SSA also expressed the view that a summary table showing the effects on the actuarial deficit of each proposal provision would be helpful. As our report had noted, this table has been included in some memoranda at the request of the proposal’s sponsor. While we agree with SSA that both the technical and summary tables it describes add value, we remain of the view that OCACT needs to develop a new table that can clearly and quickly communicate both trust fund effects and federal budget implications of a proposal. Our report acknowledged the value and completeness of OCACT’s analyses, including those presented in “table d.” The message of our report was not that OCACT needs to do additional analytic work. Our message was rather that OCACT’s existing analyses need to be summarized and highlighted so that a proposal’s implications for both the trust fund and the federal budget as a whole are immediately clear. Neither of SSA’s two suggestions is fully responsive to this goal. SSA itself noted in its comments that “table d” is “somewhat complicated.” With regard to the summary table showing how each provision affects the actuarial deficit, we agree that this table adds considerable value and can help facilitate certain types of comparisons across plans. It does not, however, make clear how each provision or the proposal as a whole would affect the federal budget. It is this kind of information, now available only to experienced users of OCACT memos, that needs to be made more accessible. Given the long-term fiscal challenge facing the Nation, the reform debate needs to take place not simply in the context of trust fund solvency but also in the larger context of the federal budget as a whole. SSA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Commissioner of Social Security as well as other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Susan Irving at (202) 512-9142 or Barbara Bovbjerg at (202) 512-7215 if you have any questions about this report. Key contributors to this assignment were Jay McTigue, Joseph Applebaum, Jennifer Ashford, Linda Baker, Michael Collins, and Melissa Wolf. To answer the questions in this report, we reviewed relevant historical documents and other literature on Social Security, including GAO reports and testimonies. We undertook a review of the 26 OCACT proposal scoring memos done from 2001 through 2006, ultimately performing an in-depth analysis of 17 of those scoring memos. We eliminated nine of the scoring memos either because they duplicated proposals scored in multiple years or because they were not scored by OCACT as able to pay plan benefits in full throughout the 75-year period. Most of the 17 proposals were characterized by OCACT as meeting its definition of “sustainably solvent.” We also reviewed proposal scorings done by the Congressional Budget Office (CBO) and met with officials from OCACT and CBO who were responsible for proposal scorings. To enhance our understanding of the relationship between Social Security and the federal budget, we interviewed selected federal budget experts from think tanks and other policy organizations who represented a range of views on reform approaches. Some of these experts were former officials of the Social Security Administration and/or Congressional Budget Office. 
Our analysis, like the scorings of the Office of the Chief Actuary and recent scorings by CBO, is limited to first-order effects of reform changes. Accordingly, second-order effects of proposed reforms on the federal budget, such as effects on economic growth, are beyond the scope of this report. This report also does not address the effects of general revenue use on program equity. As discussed in other GAO work, the use of significant amounts of general revenue transfers could change program equity in ways that are difficult to quantify.
The majority of the definitions provided here are from the Social Security Administration, The 2006 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Disability Insurance Trust Funds (Washington, D.C.: May 1, 2006) and GAO, A Glossary of Terms Used in the Federal Budget Process, GAO-05-734SP (Washington, D.C.: September 2005).
Actuarial balance. The difference between the summarized income rate and the summarized cost rate over a given valuation period.
Actuarial deficit. A negative actuarial balance.
Assumptions. Values relating to future trends in certain key factors which affect the balance in the trust funds. Three sets of demographic, economic, and program-specific assumptions are presented in the annual Trustees’ report. Demographic assumptions include fertility, mortality, net immigration, marriage, and divorce. Economic assumptions include unemployment rates, average earnings, inflation, interest rates, and productivity. Program-specific assumptions include retirement patterns, and disability incidence and termination rates. The three sets of assumptions are described as follows: Alternative II is the intermediate set of assumptions, and represents the Trustees’ best estimates of likely future conditions. Alternative I is characterized as a low cost set—it assumes relatively rapid economic growth, low inflation, and favorable (from the standpoint of program financing) demographic conditions. Alternative III is characterized as a high cost set—it assumes relatively slow economic growth, high inflation, and unfavorable (from the standpoint of program financing) demographic conditions. In its estimates of reform proposals, OCACT uses the intermediate set of assumptions, which represents the Trustees’ best estimates of likely future demographic, economic, and program-specific conditions.
Board of Trustees. A Board established by the Social Security Act to oversee the financial operations of the Federal Old-Age and Survivors Insurance Trust Fund and the Federal Disability Insurance Trust Fund. The Board is composed of six members, four of whom serve automatically by virtue of their positions in the federal government: the Secretary of the Treasury, who is the Managing Trustee, the Secretary of Labor, the Secretary of Health and Human Services, and the Commissioner of Social Security. The other two members are appointed by the President to serve as public representatives.
Constant dollars. Amounts adjusted by the consumer price index (CPI) to the value of the dollar in a particular year.
Cost rate. The cost rate for a year is the ratio of the cost of the program to the taxable payroll for the year. 
In this context, the cost is defined to include scheduled benefit payments, special monthly payments to certain uninsured persons who have 3 or more quarters of coverage (and whose payments are therefore not reimbursable from the General Fund of the Treasury), administrative expenses, net transfers from the trust funds to the Railroad Retirement program under the financial-interchange provisions, and payments for vocational rehabilitation services for disabled beneficiaries; it excludes special monthly payments to certain uninsured persons whose payments are reimbursable from the General Fund of the Treasury, and transfers under the interfund borrowing provisions.
Covered earnings. Earnings in employment covered by the Old-Age, Survivors, and Disability Insurance program.
Current dollars. “In current dollars” means valued in the prices of the current year. Amounts are expressed in nominal dollars with no adjustment for inflationary changes in the value of the dollar over time. The current dollar value of a good or service is its value in terms of prices current at the time the good or service is acquired or sold.
Debt held by government accounts. Federal debt owed by the federal government to itself. Most of this debt is held by trust funds, such as Social Security and Medicare. The Office of Management and Budget (OMB) contrasts it to debt held by the public by noting that it is not a current transaction of the government with the public; it is not financed by private saving and thus does not compete with the private sector for available funds in the credit market; and it does not represent an obligation to make payments to the public.
Debt held by the public. That portion of the gross federal debt held outside of the federal government. This includes any federal debt held by individuals, corporations, state or local governments, the Federal Reserve System, and foreign governments and central banks. Debt held by government accounts (intragovernmental debt) is excluded from debt held by the public. Debt held by the public is not the same as public debt or Treasury debt.
Exhaustion date. As reported in the Trustees’ Report and for the purposes of this report, the year in which the OASDI trust fund would become unable to pay currently scheduled benefits when due because the assets of the fund were exhausted.
General fund of the Treasury. Funds held by the Treasury of the United States, other than receipts collected for a specific purpose (such as Social Security) and maintained in a separate account for that purpose. For a discussion of how it is defined for the purposes of this report, see page 6.
Income rate. Ratio of income from tax revenues on a liability basis (payroll tax contributions and income from the taxation of scheduled benefits) to the OASDI taxable payroll for the year.
Inflation. An increase in the volume of money and credit relative to available goods, resulting in an increase in the general price level.
Interest. A payment in exchange for the use of money during a specified period. For the OASDI trust funds, interest rates on new public-debt obligations issuable to federal trust funds are determined monthly. Such rates are set equal to the average market yield on all outstanding marketable U.S. securities not due or callable until after 4 years from the date the rate is determined. The effective interest rate for a trust fund is the ratio of the interest earned by the fund over a given period of time to the average level of assets held by the fund during the period. The effective rate of interest thus represents a measure of the overall average interest earnings on the fund’s portfolio of assets.
Long-range. The next 75 years. 
Long-range actuarial estimates are made for this period because it is approximately the maximum remaining lifetime of current Social Security participants.
Maximum taxable limit. The limit or “cap” on covered earnings that are subject to the 12.4 percent payroll tax and that can be used in the benefit formula, thereby limiting the size of taxes and benefits. This “cap” is indexed annually for average wage growth and therefore it changes every year. In 2007, the taxable maximum limit is $97,500.
Nominal dollars. See under Current dollars.
Outlays. The issuance of checks, disbursement of cash, or electronic transfer of funds made to liquidate a federal obligation. Outlays during a fiscal year may be for payment of obligations incurred in prior years (prior-year obligations) or in the same year. Outlays, therefore, flow in part from unexpended balances of prior-year budgetary resources and in part from budgetary resources provided for the year in which the money is spent. Total government outlays include outlays of off-budget federal entities, such as the Social Security trust fund.
Pay-as-you-go financing. A financing method where taxes are scheduled to produce just as much income as required to pay current benefits, with trust fund assets built up only to the extent needed to prevent exhaustion of the fund by random economic fluctuations.
Payroll tax. A tax levied on the gross wages of workers.
Present value. The equivalent value, at the present time, of a future stream of payments (either income or cost). The present value of a future stream of payments may be thought of as the lump-sum amount that, if invested today, together with interest earnings would be just enough to meet each of the payments as they fell due. Present values are widely used in calculations involving financial transactions over long periods of time to account for the time value of money (interest). For the purpose of present-value calculations for this report, values are discounted by the effective yield on trust fund assets.
Solvency. A program is solvent at a point in time if it is able to pay scheduled benefits when due with scheduled financing. For example, the OASDI program is considered solvent over any period for which the trust funds maintain a positive balance throughout the period.
The difference between the summarized cost rate and the summarized income rate, expressed as a percentage of taxable payroll.
Summarized cost rate. The ratio of the present value of cost to the present value of the taxable payroll for the years in a given period, expressed as a percentage. This percentage can be used as a measure of the relative level of cost during the period in question. For purposes of evaluating the financial adequacy of the program, the summarized cost rate is adjusted to include the cost of reaching and maintaining a target trust fund level. Because a trust fund level of about 1 year’s cost is considered to be an adequate reserve for unforeseen contingencies, the targeted trust fund ratio used in determining summarized cost rates is 100 percent of annual cost. Accordingly, the adjusted summarized cost rate is equal to the ratio of (a) the sum of the present value of the cost during the period plus the present value of the targeted ending trust fund level, to (b) the present value of the taxable payroll during the projection period.
Summarized income rate. The ratio of the present value of scheduled tax income to the present value of taxable payroll for the years in a given period, expressed as a percentage. This percentage can be used as a measure of the relative level of income during the period in question. 
For purposes of evaluating the financial adequacy of the program, the summarized income rate is adjusted to include assets on hand at the beginning of the period. Accordingly, the adjusted summarized income rate equals the ratio of (a) the sum of the trust fund balance at the beginning of the period plus the present value of the total income from taxes during the period, to (b) the present value of the taxable payroll for the years in the period.
Sustainable solvency. As defined by OCACT, sustainable solvency for the financing of the program is achieved when the program has positive trust fund ratios throughout the 75-year projection period and these ratios are stable or rising at the end of the period.
Taxable earnings. Wages and/or self-employment income, in employment covered by the OASDI and/or Hospital Insurance (HI) programs, that is under the applicable annual maximum taxable limit. For 1994 and later, no maximum taxable limit applies to the HI program.
Taxable payroll. A weighted average of taxable wages and taxable self-employment income. When multiplied by the combined employee-employer tax rate, it yields the total amount of taxes incurred by employees, employers, and the self-employed for work during the period.
Taxable wages. See under “Taxable earnings.”
Trust funds. As discussed in this report, the OASDI trust funds are separate accounts in the United States Treasury in which are deposited the taxes received under the Federal Insurance Contributions Act and the Self-Employment Contributions Act, as well as taxes resulting from coverage of state and local government employees; any sums received under the financial interchange with the railroad retirement account; voluntary hospital and medical insurance premiums; and transfers of Federal general revenues. Funds not withdrawn for current monthly or service benefits, the financial interchange, and administrative expenses are invested in interest-bearing federal securities, as required by law; the interest earned is also deposited in the trust funds.
Old-Age and Survivors Insurance (OASI). The trust fund used for paying monthly benefits to retired-worker (old-age) beneficiaries and their spouses and children and to survivors of deceased insured workers.
Disability Insurance (DI). The trust fund used for paying monthly benefits to disabled-worker beneficiaries and their spouses and children and for providing rehabilitation services to the disabled.
Hospital Insurance (HI). The trust fund used for paying part of the costs of inpatient hospital services and related care for aged and disabled individuals who meet the eligibility requirements. Also known as Medicare Part A.
Trust fund ratio. A measure of the adequacy of the trust fund level. Defined as the assets at the beginning of the year expressed as a percentage of the cost during the year. The trust fund ratio represents the proportion of a year’s cost which could be paid with the funds available at the beginning of the year.
Unified budget. Under budget concepts set forth in the Report of the President’s Commission on Budget Concepts, a comprehensive budget in which receipts and outlays from federal and trust funds are consolidated. When these fund groups are consolidated to display budget totals, transactions that are outlays of one fund group for payment to the other fund group (that is, interfund transactions) are deducted to avoid double counting. The unified budget should, as conceived by the President’s Commission, take in the full range of federal activities. 
By law, budget authority, outlays, and receipts of off-budget programs (currently only the Postal Service and Social Security) are excluded from the current budget, but data relating to off-budget programs are displayed in the budget documents. However, the most prominent total in the budget is the unified total, which is the sum of the on- and off-budget totals.
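For readers who find formulas helpful, the ratio definitions above can be restated compactly. The notation below is ours and is introduced only for illustration; by convention, the rates are expressed as percentages of taxable payroll.

\begin{align*}
\text{cost rate}_t &= \frac{C_t}{P_t}, \qquad
\text{income rate}_t = \frac{T_t}{P_t}, \qquad
\text{trust fund ratio}_t = 100 \times \frac{A_t}{C_t},\\
PV(x) &= \sum_{t=1}^{N} \frac{x_t}{\prod_{s=1}^{t}(1 + i_s)}
\quad \text{(discounted at the effective trust fund yield } i_s\text{)},\\
\text{summarized cost rate} &= \frac{PV(C) + PV(F^{\text{target}})}{PV(P)}, \qquad
\text{summarized income rate} = \frac{A_1 + PV(T)}{PV(P)},\\
\text{actuarial balance} &= \text{summarized income rate} - \text{summarized cost rate},
\end{align*}
where, for year $t$ of an $N$-year valuation period, $C_t$ is program cost, $T_t$ is tax income, $P_t$ is taxable payroll, $A_t$ is trust fund assets at the beginning of the year, and $F^{\text{target}}$ is the targeted ending trust fund level (100 percent of annual cost in the final year).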
Absent reform, Social Security's financing gap will grow until currently scheduled benefits can no longer be paid in full. Recent reform proposals often include general revenue (GR)--a major change that can have significant implications for the budget as a whole. This report addresses these issues: (1) What information is available about GR in recent proposal scorings by Social Security's Office of the Chief Actuary (OCACT)? (2) What common mechanisms, especially GR mechanisms, are used to increase program revenue? (3) What are the implications of GR for the trust fund and the federal budget? We have prepared this report under the Comptroller General's statutory authority to conduct evaluations on his own initiative as part of a continued effort to assist Congress in addressing the challenges facing Social Security. Although focused on the trust fund, OCACT scoring memos are also the primary source of information on how proposals would impact the federal budget. Memos provide information on GR use and its effects, but experts said comparing proposals on this element presents challenges, requiring extensive efforts to understand complex tables shown at the end of the memos. Fourteen of 17 proposals GAO reviewed provided GR (1) as needed to maintain trust fund solvency or (2) as specified by formula, amount, or source. Nine of the 17 achieved "sustainable solvency" under OCACT's definition using the first approach. This type of unlimited transfer poses the greatest potential risk to the federal budget, especially when combined with benefit guarantees. In proposals reviewed, amounts of GR under both types of approaches ranged up to about twice program shortfall. In all proposals using GR, the GR was reallocated from the non-Social Security budget. While any additional revenue to the trust fund will help solvency, unified federal budget effects depend on the type of revenue--whether it is new revenue (additional payroll tax revenue or GR that is new to the federal budget) versus reallocated GR. Absent other changes, new revenue would improve the long-term fiscal imbalance while reallocated GR would do nothing to address it. Although raising taxes (payroll or other) or cutting benefits would have tangible consequences for taxpayers and beneficiaries, e.g., less take-home pay or smaller benefit checks, the consequences of transfers from the non--Social Security budget in the form of reallocated GR are less likely to be clearly observable. Reallocated GR, however, is not free. Regardless of how GR is provided to Social Security, it must be paid for at some point. The question is when, and by whom.
The quality of life in urban areas is and will continue to be significantly affected by decisions on the use of federal transportation funds. Key urban issues, such as traffic congestion, air pollution, and the economic viability of neighborhoods and commercial areas, depend in large part on how these funds are spent. The decisions, in turn, grow out of the urban transportation planning process and the role of the nation’s 339 metropolitan planning organizations (MPOs). Since the early 1970s, MPOs have been significant players in urban transportation planning. An MPO is not a discrete decision-making body with real jurisdictional powers, such as a city or county government. Instead, an MPO is best viewed as a consortium of governments and other bodies—such as transit agencies and citizens groups—that join together for cooperative transportation planning. An MPO’s organization and membership often consist of (1) a policy-making board involving elected officials from the local governments in the metropolitan area; (2) a technical committee consisting of professional staff of local, state, and federal transportation agencies; and (3) an MPO staff. The MPO’s primary mission is to develop a consensus on a long-term transportation plan for an urban area and to develop a transportation improvement program (TIP) that identifies projects to implement the plan. How each of the 339 MPOs in the United States fulfills this mission depends on its relationship with the state department of transportation and other transportation operators, the number of local governments in the region, the size and experience of the MPO staff, the growth rate of the population, and the number of transportation modes in the region. According to a 1995 report on MPOs by the U.S. Advisory Commission on Intergovernmental Relations (ACIR), some MPO-like organizations existed in the 1950s to prepare special metropolitan planning studies in Chicago, Detroit, New York, and Philadelphia. In 1970, federal policy fostered the development of comprehensive urban transportation planning by requiring the creation of planning agencies in areas with populations of 50,000 or greater to carry out cooperative planning at the metropolitan level. Originally, all MPOs were treated alike under federal laws and regulations. In the mid-1980s, when funding for metropolitan planning was reduced, preference for funding was given to those MPOs in metropolitan areas over 200,000 in population, areas now known as Transportation Management Areas (TMAs). ISTEA’s funding provisions also provided additional discretion and funding to those MPOs located in areas violating the federal air quality standards. ISTEA established the Congestion Mitigation and Air Quality program (CMAQ) and authorized $6 billion over 6 years to help the areas not in attainment with the air quality standards (nonattainment areas) reach compliance with the Clean Air Act’s (CAA) requirements. With CMAQ funds, the MPOs located in the areas that are not in compliance with the federal standards for ozone or carbon monoxide emissions can approve projects that help control or reduce these emissions. The population and geographic area covered by the MPOs also determine the breadth of their responsibilities and the support they have to meet their ISTEA planning requirements. Some MPOs, such as those in New York, Chicago, and Los Angeles, plan for urbanized populations of over 6 million. 
Typically, these MPOs are well financed and have a dedicated professional staff of 100 or more. At the other extreme, the MPOs that plan for urban areas with populations just over 50,000 may have no staff or a single county government employee working part time for the MPO. In addition, the MPOs’ planning duties can be complicated by the boundaries of jurisdictions in metropolitan areas. As growth occurs, urbanized areas sometimes overrun the MPOs’ boundaries or become so large that state and local officials establish more than one MPO to serve the area. Currently, 14 contiguous urbanized areas within a single state have two or more MPOs. In these locations, such as Florida’s Tampa Bay area, cooperation and coordination among the MPOs are essential. Other urban areas cross state lines. For example, the Philadelphia MPO plans for the Pennsylvania and New Jersey portions of the Philadelphia urban area, and the St. Louis MPO plans for the Missouri and Illinois portions of the urban area. The task of these MPOs is complicated by their having to deal with two or more state governments and more than one Federal Highway Administration (FHWA) or Federal Transit Administration (FTA) region. The ACIR report noted that ISTEA brought three new, far-reaching philosophies to the administration of the federal surface transportation programs: (1) the decentralization of decision-making to the state and local governments, and particularly to the MPOs in the larger metropolitan areas with populations of 200,000 or more; (2) stronger environmental connections, especially to the CAA; and (3) the elevation of nontraditional goals and stakeholders to new prominence in the planning and decision-making processes. ACIR noted that the decentralization of decisions gave many MPOs a larger area to plan for, more miles of road to make decisions about, more flexibility to consider alternatives to the automobile, a lead role in allocating certain federal transportation funds, a longer horizon to consider for the planning process, and a responsibility to consider many transportation-related public policies. In the 129 urban areas with populations greater than 200,000—the TMAs—ISTEA gives the MPOs the authority to select projects from the TIP, in consultation with the state. In other areas, the selection of projects is to be carried out by the state in cooperation with the MPO. Environmental considerations have become more of a driving force in the MPOs’ work as well. The MPOs in nonattainment areas must develop transportation plans that ensure that the CAA’s requirements are met. In constraining the transportation plans to meet the CAA’s goals, the MPOs cannot, with limited exceptions, spend any federal funds on any highway projects that will exacerbate existing air quality problems or lead to new violations of federal air quality standards. The MPO-developed transportation plans must contribute to reducing motor vehicle emissions. The elevation of nontraditional goals and stakeholders in the MPO planning process is specified in the ISTEA section that requires the MPOs to consider 16 factors when developing their metropolitan plans. Some of the planning factors require planners to consider the effects of transportation policies on land-use development and the social, economic, energy, and environmental impacts of transportation decisions; to provide for the efficient movement of freight; and to ensure connections with international borders, ports, airports, and intermodal facilities. 
These planning factors address many of the ways that transportation relates to other values and the unintended impact of transportation and transportation facilities. ISTEA stated that these factors must be considered as part of the planning process. In addition, ISTEA and subsequent planning regulations emphasized an early and continuous effort to involve citizens that actively seeks input from direct stakeholders and other members of the public, including those traditionally underserved by the existing transportation systems. The public’s involvement is to be sought at various points in the planning process, including the development of the plan, the TIP, and individual projects. Taking into consideration all of the relevant requirements of ISTEA and the CAA, the MPOs must develop two basic planning documents—the transportation plan and the transportation improvement program. The first document—the transportation plan—is a long-term document that specifies a 20-year vision for a metropolitan area’s transportation system. The plan is to include short- and long-range strategies leading to the development of an integrated and efficient intermodal transportation system. The plan is to be revised and updated at least every 3 years in those areas not meeting the federal air quality standards and at least once every 5 years in other areas. An acceptable plan must be a realistic, implementable document describing how the transportation system will serve metropolitan development objectives, address congestion and air quality concerns, and address other issues. The TIP is a much more detailed document that specifies a list of priority projects to be implemented in each year covered. It must include all transportation projects that will receive federal transportation funding and be clearly based on the objectives laid out in the plan. The TIP covers a period of at least 3 years and must be updated every 2 years. After approval by the governor, the metropolitan TIP must be included in the state TIP, which is then subject to review and approval by the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA). ISTEA specifies that the plans and TIPs include a financial component that demonstrates how the plans will be funded and implemented. The TIP must be financially constrained each year and must include only those projects for which funding has been identified using current or reasonably available revenue sources. The state and the transit operators must provide information early in the process of developing the TIP about the amount of federal, state, and other funds likely to be available. This financial constraint requirement was a major change in federal policy. Before ISTEA, long-range plans and TIPs were often lengthy “wish lists” of projects proposed by local governments, transit operators, and others. Because such plans and programs bore no relation to the available financial resources, many projects were never implemented. Hence, the real implementation decisions took place outside of the formal planning process. Thus, under ISTEA the financial constraint requirement ensures that the implementation decisions come directly from a systematic planning process. 
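To make the financial constraint requirement concrete, the following sketch applies the rule described above: a TIP may program only those projects that can be paid for with current or reasonably available revenue. This is an illustration only, not DOT or MPO software; the project names, costs, and revenue figure are hypothetical, and the simple rule of funding each ranked proposal that still fits within the remaining revenue stands in for what is in practice a negotiated process among the MPO, the state, and transit operators.

```python
# Illustrative sketch of a financially constrained TIP (hypothetical data).
# Each proposal is funded in priority order if it fits within the remaining
# revenue; proposals that do not fit are deferred to a later program update.

def constrain_tip(ranked_projects, available_revenue):
    """Split ranked (name, cost) proposals into funded and deferred lists."""
    funded, deferred = [], []
    remaining = available_revenue
    for name, cost in ranked_projects:
        if cost <= remaining:
            funded.append((name, cost))
            remaining -= cost
        else:
            deferred.append((name, cost))
    return funded, deferred, remaining

# Hypothetical program: $14 million in proposals against $10 million in
# reasonably available federal, state, and local funds.
proposals = [
    ("Arterial widening", 6_000_000),
    ("Bus fleet replacement", 3_000_000),
    ("Interchange reconstruction", 4_000_000),
    ("Bicycle and pedestrian network", 1_000_000),
]

funded, deferred, balance = constrain_tip(proposals, 10_000_000)
print("Funded:", funded)
print("Deferred:", deferred)
print("Unprogrammed balance:", balance)
```

Run with these hypothetical figures, the sketch programs $10 million of the $14 million proposed and defers the rest, which is the essence of a constrained TIP: the program lists only those projects for which funding has been identified.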
Concerned about the abilities of the MPOs to meet the demands of ISTEA's planning requirements, the Chairman and the Ranking Minority Member, Senate Committee on Environment and Public Works, and the Chairman of that Committee's Subcommittee on Transportation and Infrastructure asked us to determine the challenges that the MPOs face in implementing ISTEA's metropolitan planning requirements. Specifically, this report (1) discusses the MPOs' experiences in implementing ISTEA's planning requirements and (2) examines the extent to which the U.S. Department of Transportation's (DOT) certification review process ensures that the MPOs in larger urban areas comply with ISTEA's requirements. To assess the challenges that the MPOs faced in meeting ISTEA's metropolitan planning requirements, we reviewed numerous surveys, reports, conference summaries, and other literature on urban transportation planning that have been published since 1991. In addition, we spoke to representatives of FHWA and FTA and to other national experts. We also obtained and analyzed the results of a 1994 nationwide survey of all MPOs in the United States conducted by the National Association of Regional Councils (NARC). On the basis of these efforts, we determined that three of ISTEA's planning provisions—(1) involving citizens in developing plans and programs, (2) financially constraining the transportation improvement program, and (3) identifying projects—were particularly challenging for the MPOs. To further explore these key issues, we conducted in-depth telephone interviews with officials of 13 MPOs and 11 state transportation planning agencies. These organizations are listed in appendix II. The MPOs we selected included those that had great or little difficulty with planning requirements (on the basis of their responses to the NARC survey) and represented different regions in the United States. All but 1 of the 13 MPOs we interviewed represent urban areas with populations of 200,000 or greater—the transportation management areas. With each MPO, we discussed why it did or did not have difficulty with selected planning requirements, the reasons for the difficulty or lack of it, the benefits and drawbacks of the planning requirement, and whether the Congress should reconsider these or any other of ISTEA's planning requirements. To determine whether DOT's certification review process was ensuring that MPOs comply with planning requirements, we obtained and reviewed DOT's guidance for field staff conducting the reviews and discussed with FHWA and FTA officials the rationale behind DOT's approach to the reviews. We also obtained copies of the 55 certification reports published through January 5, 1996, and reviewed and analyzed their contents. Finally, we spoke to selected MPOs and states about their views on the advantages and drawbacks of the certification process. We performed our work from August 1995 through July 1996 in accordance with generally accepted government auditing standards. After providing a draft of this report to DOT for review and comment, we met with DOT officials, including the Chief, Metropolitan Planning Division, Federal Highway Administration, and the Chief, Statewide Planning Division, Federal Transit Administration. Where necessary, we modified the report to address their comments and suggestions. 
Three of ISTEA’s key planning requirements—for extensive public involvement in planning and programming, for the financial constraint of TIPs, and for the MPOs’ authority to select projects—posed significant challenges. Despite these challenges, the MPOs we interviewed believe that their efforts to meet these requirements have been beneficial. Furthermore, both the MPOs we interviewed and the national organization representing MPOs support continuing these three provisions. The state transportation planning officials we interviewed were less unanimously supportive of these provisions, and the American Association of State Highway and Transportation Officials (AASHTO) advocates eliminating the requirement to financially constrain the long-term transportation plans. ISTEA’s requirements for extensive involvement by members of the general public in the transportation planning process required considerable changes at many of the nation’s MPOs. The public participation requirement has challenged the MPOs to expand the resources devoted to involving citizens and apply more effective techniques for soliciting public input. Despite the initial challenges, all 13 MPOs we interviewed believed that ISTEA’s requirements were desirable and beneficial to the planning process. According to the MPOs we spoke to, effective public outreach serves to inform the public of key regional transportation issues, helps ensure that programs contain projects truly needed by the public, and identifies “problem” projects early in the planning process. According to the MPOs and states we interviewed, changes to this requirement, if any, should ensure that the MPOs have sufficient flexibility to develop those programs best suited to their local areas. According to DOT’s guidance, ISTEA intended that the MPOs’ efforts to involve citizens would lead to transportation plans and programs that are more reflective of a community’s mobility and accessibility needs and more cognizant of the broader issues, such as the effects of transportation investments on the environment, urban neighborhoods, and the general quality of life. The efforts to involve citizens were to include an open exchange of information and ideas between transportation decision makers and the public, including all individuals and groups potentially affected by transportation decisions. Such efforts were to occur at various stages of the transportation planning process, including the development of the long-term plan, the TIP, and individual projects. At the outset of ISTEA, the MPOs’ ability to meet the act’s public involvement requirements was in doubt. A 1992 study commissioned by DOT noted that public participation in transportation planning had been relatively narrow and of low visibility, except for critical episodes when contentious issues arose. The urban areas that did have extensive public participation efforts before ISTEA were those that had active civic cultures. The 1995 ACIR report found that participation by the public is one of the areas emphasized by ISTEA in which the MPOs need the most assistance. DOT’s regulations also note that an effective effort to involve citizens requires the MPOs to provide the public with timely and relevant information on transportation planning, full public access and input to key decisions, and opportunities for the public’s early and continuing involvement. These requirements have been challenging to the MPOs for a number of reasons. 
Specifically, we found that ISTEA’s requirement for involving the public challenges the MPOs to (1) significantly expand the resources devoted to that involvement, (2) develop new methods for soliciting public input, and (3) effectively use the results of their efforts to involve the public. First, the efforts to involve citizens required greater resources than the MPOs may have been devoting. A 1994 planners manual found that effective involvement by the public would require not only greater commitment from MPO managers and public officials, but also significant postage and publication budgets and more staff time than most MPOs would likely expect. Our interviews with the MPOs and the states clearly bore this out. Eleven of the 13 MPOs we interviewed told us they had expanded their efforts to involve citizens since ISTEA, and 7 of them said that the need for additional resources was a challenge. Typically, the MPOs told us that while they had made some limited efforts to involve the public before ISTEA, these were often cursory. For example, the St. Louis MPO’s effort grew from a standing citizens committee into a multifaceted program to involve more people. This MPO’s efforts to inform and educate the public now include transportation issue papers distributed to target audiences, public speaking engagements before community groups via a speakers bureau, press releases on topical transportation-related issues, and articles in MPO periodicals. The efforts to obtain input from the public include public meetings, smaller focus groups, surveys, and project solicitations. Similarly, an official of the Philadelphia MPO told us that the MPO has tripled its spending on involvement by the public—from $90,000 to about $300,000 annually—and now has two full-time staffers exclusively devoted to the effort. Second, the development and implementation of programs to involve the public may call for knowledge and skills that may not have been readily available to MPOs at the outset of ISTEA. The 1995 ACIR report also found that the MPOs needed research on the techniques that will encourage citizens’ participation, especially those techniques that have been successful in highly populated areas, and the services of experts trained in such techniques. The report found that the MPOs needed to be more sophisticated in using the media to build support from the public. These issues also arose in our interviews with the MPOs and the states. In open-ended discussions, 4 of the 13 MPOs noted the difficulty presented by selecting and implementing the appropriate techniques for involving the public. For example, an official of the St. Louis MPO told us that identifying the best method is the biggest problem the MPO faces in its attempts to involve the public. The official added that the problem is an ongoing one, as the public response to individual techniques seems to diminish over time. The Springfield, Massachusetts, MPO noted that in developing transportation newsletters, simply translating the planners’ technical jargon into readable language for the general public is a large task. The MPO has hired a specialist to assist with this effort. Such technical assistance may be key for many MPOs—the Milwaukee MPO, which did not have much difficulty with ISTEA’s requirements for involving citizens, credited technical assistance from the University of Wisconsin’s extension service as a significant factor in the program’s success. 
Finally, the MPOs must determine how input from the process of involving the public will influence plans and programs. Nearly all of the MPOs we interviewed found it difficult to get the general public interested and involved in transportation planning issues. These MPOs noted that, typically, "John Q. Public" will become interested in transportation planning only if a specific project will affect his well-being. He may get very involved, for example, if he believes that a road-widening project will increase the traffic near his home and hence harm the value of his property. As a result, the public's input generally may not reflect the views of a cross-section of the general public. Several MPOs said that getting input from lower-income and minority communities is particularly challenging. On the other hand, certain interest groups, often with a narrowly defined agenda, may be very active in commenting on the transportation planning process. As a result, the interests of activists with specific agendas may dominate the process of involving the public. One MPO official noted that citizens' involvement has given professional groups a vehicle for expressing their views and dominating the public discussion. In putting together plans and programs, the MPOs must balance the input of activists with the transportation needs of the broader public. Despite the difficulties and imperfections inherent in the efforts to involve the public, all of the MPOs we interviewed believe that effective involvement by the public is critical to good planning. All 13 MPOs noted that their efforts to meet ISTEA's requirements for involving the public have resulted in plans and programs that are more reflective of the public's transportation needs and hence enjoy broader and stronger public support. Also, citizens' latent opposition to projects is uncovered much earlier in the planning process. For example, the Durham, North Carolina, MPO told us of a project that would widen a four-lane road to eight lanes. All of the technical analyses supported the need for this project, but the MPO ran into significant public opposition as the construction phase neared. The project was delayed for over a year, which, according to the MPO official, might well have been avoided if the public's input had been sought earlier in the planning process. For the reasons outlined above, the 13 MPO officials we spoke to unanimously supported the continuation of the requirement for involving the public in transportation planning. However, MPO and state planning officials emphasized the importance of flexibility in selecting the appropriate techniques for inviting citizens' input and the concomitant importance of avoiding overly prescriptive federal regulations. For example, a Florida state department of transportation official stated that techniques that work well for communities in Florida's panhandle may be ineffective in the Hispanic and Caribbean communities of south Florida. An official at the St. Louis MPO stated that any one technique for involving the public has a relatively short shelf life, with diminishing returns over time. Hence, it is important to vary techniques—such as surveys, public meetings, focus groups, and so on—over time. Financially constraining the TIP (the 3-year program of projects) was a new requirement for many MPOs. A 1994 planner's guide noted that prior to ISTEA, many TIPs were laden with more projects than could be afforded and that bringing these TIPs into balance was politically painful. 
Also, successfully constraining a TIP requires reliable projections of revenue—projections that were not always available. Despite these difficulties, all but two of the MPOs we spoke to had developed financially constrained TIPs, and all MPOs believed that the practice was critical to meaningful short-term planning. As the requirement has forced planners to confront the limits of available resources, it has encouraged them to explore other options for local and regional financing. The MPOs we interviewed all supported continuing the TIP constraint in ISTEA. ISTEA requires MPOs to ensure that their TIPs include a ranked list of projects and a financial plan that demonstrates how the program can be implemented with reasonably available resources. For example, a TIP featuring $10 million in highway and transit improvements would have to show that these projects could be paid for with federal, state, local, or other funds that were demonstrably available. This requirement was a significant change to federal planning requirements. According to NARC, before ISTEA, there were pressures to include as many projects as possible in the TIP, regardless of the cost. Consequently, proposed transportation spending was sometimes more an outcome of political influence than of a rational planning process. NARC noted that by ensuring that planners develop and limit investment programs on the basis of realistic budgets, transportation spending would be a rational outcome of the planning process. The MPOs and states we interviewed stated that the requirement to financially constrain TIPs is one of the most challenging of ISTEA's planning requirements. Because many MPOs had not financially constrained TIPs before ISTEA, both their technical ability to develop financial plans and their institutional wherewithal to exclude projects not falling within the budget were in doubt at the outset of ISTEA. A nationwide survey of MPOs conducted by NARC found that financially constraining the TIP was the most difficult of eight selected ISTEA planning requirements. Our interviews with the MPOs and the states, as well as other studies of MPOs under ISTEA, reveal that the financial constraint requirement presented the MPOs with two main challenges. First, the MPOs had to develop a regional consensus as to which programs would be on the TIP. Second, the MPOs had to obtain reliable estimates of the funds available from the state departments of transportation. Because a financially constrained TIP is a defined and realistic program of transportation spending, it must be based on a regional consensus about which projects are best suited to meet a region's transportation needs. Highways, mass transit, and other projects can be proposed by many entities, including the state, cities, counties, transit agencies, and community groups. The financial constraint requirement forces policy-makers to consider trade-offs and make choices among these alternative transportation investments. In open-ended discussions, 6 of the 13 MPOs that we interviewed noted the difficulties involved in reaching such a consensus. For example, the Atlanta MPO noted that its 1992 TIP contained about four times as many projects as could be paid for with reasonably available resources. To bring the TIP into balance, it had deleted about $400 million worth of planned projects by 1993. 
This action did not please the sponsors of deleted projects, although many projects had scant chance of implementation. Similarly, the MPO for Dallas/Ft. Worth noted that the MPO and the state department of transportation had a significant dispute because a freeway improvement advocated by the state was not included in the financially constrained TIP. A reliable estimate of available revenues is indispensable in financially constraining the TIP. Because much of the funding for urban transportation—both state and federal—comes from the state departments of transportation, the MPOs depend on their states to provide guidance on the financial resources that can reasonably be expected to be available during the TIP period. Most MPOs either did not raise this issue or told us that the state departments of transportation have been cooperative and have provided financial estimates with reasonable timeliness. However, 3 of the 13 said that the states' unwillingness to provide reliable estimates of the available revenues has been a hurdle in developing financially constrained TIPs. At two MPOs, the inability to obtain reliable financial information was the center of disputes between the MPO and the state department of transportation about the ability of the MPO to select projects. For example, officials of one MPO told us that the state department of transportation did not provide estimates of the available funds, except in the form of draft state TIPs. In essence, the MPO said that the state had refused to provide any estimates of the future revenues that the MPO could use to develop a local TIP. Another MPO told us that it had submitted a TIP that was financially constrained on the basis of the revenue estimates provided by the state. The TIP was included in the state's transportation improvement program, which was subsequently rejected by FHWA/FTA because the state's revenue assumptions included a drawdown of its unobligated balances, which is not possible without congressional action. As a result, the MPO had to develop a revised TIP with about one-third the resources of the original TIP. The state's action and the subsequent rejection of the TIP created considerable resentment among the local officials and project sponsors in the region. Twelve of the 13 MPOs we interviewed told us they had developed financially constrained TIPs under ISTEA. Furthermore, the MPOs we spoke to unanimously supported the continuation of the requirement to financially constrain the TIP, as did 7 of the 11 state transportation offices we interviewed. All of the MPOs we spoke to noted that the financial constraint requirement forces the development of TIPs that include the projects that will be implemented. Officials of the New Orleans MPO, for example, told us that before ISTEA, the system of selecting and implementing transportation projects had broken down. There was little sense of real priority in the TIP. Because the TIP is now financially constrained, its credibility and "implementability" are significantly enhanced, and the priorities spelled out in the TIP now drive investments. Similarly, an Atlanta MPO official told us that the commitment to the projects on the TIP is much greater because the TIP is now a firm program of transportation investment priorities. In addition to establishing a meaningful program of projects, the financial constraint requirement has led to tangential benefits. 
Many MPOs said that the financial constraint requirement has forced regional elected officials to realize the gap between transportation needs and reasonably available revenues. As a result, regional policy-makers are examining other revenue- raising measures, including innovative financing mechanisms. For example, the staff of the Pensacola, Florida, MPO told us that the regional policy-makers were considering establishing a toll authority for that fast-growing region. Also, several MPOs noted that the financial constraint requirement is indispensable in giving the MPOs real authority to select projects. By financially constraining TIPs, the MPO produces a ranked list of projects that will drive transportation investments. The comments we received from MPOs about the financial constraint requirement for the long-term plan to some extent paralleled those we received about the TIP requirement. However, some MPOs and states noted that financially constraining long-range planning is particularly difficult because obtaining reliable estimates of the available resources for a 20-year period is impossible. As a result, some states and MPOs said that they have had to apply the constraint on the basis of current resources, which limits the vision of the long-term plan. As several MPO and state representatives explained, new revenue sources that the MPOs could use over a 20-year period are not easily identified at the time the plan is developed. As a result, the long-term plan may be much more conservative than it needs to be. Several MPOs have found a way around this dilemma. Three MPOs that we interviewed said that they developed two long-term plans—a constrained plan for the federal requirement and an unconstrained, or “visionary,” plan to outline a more extensive transportation agenda for the region. ISTEA required that the MPOs—and by extension, the regional interests—in the larger urban areas have a greater influence on transportation investment decisions than other transportation planners. Key wording in ISTEA gives the MPOs in the larger urban areas substantial influence on identifying projects to be included in transportation programs as well as on the projects selected from the programs. These MPOs are responsible for identifying all projects for implementation, except projects under the National Highway System and the Bridge and Interstate Maintenance programs. While there was uncertainty about the MPOs’ ability to take on this decision-making authority at the outset of ISTEA, the MPOs and states we interviewed believe that ISTEA has enhanced the MPOs’ authority to select projects. While this enhanced authority was attributed to various provisions of ISTEA, a cooperative and constructive working relationship with the state was essential. ISTEA requires that the MPOs in the larger urban areas—those with populations of 200,000 or more—take on a significantly larger role in identifying transportation projects to meet the regions’ mobility needs. Before ISTEA, the MPOs were generally seen as entities that were outside of the decision-making process; they developed lists of projects but deferred real decision-making authority to the state transportation agencies. According to the 1995 NARC study, ISTEA stressed that the MPOs be transformed from weak advisory bodies into strong decision-making partners working more closely and on an equal footing with the state transportation agencies and other key stakeholders. 
The MPOs were to play a pivotal role in planning as leaders, managers, and builders of consensus among other agencies that may have different perspectives and priorities. As a result, transportation decisions—that is, project identification—would be an outgrowth of a regionally based process and hence better meet the regions' mobility needs. At the outset of the ISTEA era, the capacity of the MPOs to assume this leadership/decision-making role was in question. The MPOs were not traditionally strong decision-making bodies, and federal policy had de-emphasized urban transportation planning during the 1980s. As a result, the planning capacity of many MPOs deteriorated during this time. As the Institute of Public Administration noted in 1992, the MPOs' budgets, functions, staffs, and technical capacities dwindled during the 1980s. Perhaps as a result, DOT analysts conducting comprehensive planning reviews between 1991 and 1993 found that important metropolitan planning and programming decisions were determined primarily by the states or by transit operators. The MPOs were generally not assuming a decision-making role. At the start of the ISTEA era, therefore, the MPOs needed to strengthen their ability to forge consensus on both project financing priorities and the development of TIPs. In our interviews, we found that political and institutional factors—that is, an MPO's working relationship with the state department(s) of transportation, regional transit agencies, and local governments—posed the key difficulty in the MPOs' assuming the authority for selecting projects. Six of the 13 MPOs we spoke to noted that forging a consensus among the disparate interests in the metropolitan area was a challenge. For example, the Atlanta MPO said that it was very difficult to get all the relevant parties—the state, the local government, the transit agencies, and so on—working together to develop a unified TIP. While the pre-ISTEA TIP was not really a document that drove investment decisions, the participants perceived that under ISTEA, the development of the TIP would have a real and lasting impact. It was clear from our discussions with MPOs that a cooperative and constructive relationship with the state departments of transportation is essential in expanding the MPOs' authority. Nine of the 12 large MPOs we interviewed said that the states had facilitated the MPOs' project identification, although in some cases several years passed before a constructive working relationship developed. For example, a representative of the St. Louis MPO said that the Missouri department of transportation was not at first cooperative with the MPO's effort to assume more decision-making authority. More recently, however, the MPO and the state have signed a memorandum of agreement spelling out the agencies' respective roles and recognizing the more prominent role the MPO will play in selecting projects. Two MPOs said that the states continue to resist the MPOs' and regional interests' efforts to assume greater authority over project identification. In both cases, the difficulties were rooted in fundamental disagreements between the MPO and local officials on the one hand and the state government on the other about the appropriate level of influence the MPO and the local governments should have on the development of the TIP. One MPO said that the state's TIP process did not allow the MPO to participate fully in the process of selecting projects. 
For example, the state had limited certain federal funds for pedestrian projects in a manner that the MPO believed was inconsistent with ISTEA. An official of the state department of transportation told us that it gets extensive input and advice from the MPO and other regional interests in determining the projects to be included in the state's plans. However, the state agency is opposed to suballocating federal and state transportation funds to the MPOs. At the other MPO, we found that by dominating the voting power on the MPO's decision-making body, the state transportation department was in effect the MPO. As a result, the voices of municipal governments and other regional interests were not effectively represented in developing TIPs. Most MPOs we interviewed—8 of 12—said that ISTEA had a great or very great impact on their authority to select projects. Their comments revealed that no single provision of ISTEA can be credited with this change. As table 2.1 reveals, several of ISTEA's provisions have contributed to this change. For example, ISTEA states that projects in urban areas with populations of 200,000 or greater shall be selected by the MPO in consultation with the state, except projects under the National Highway System and the Bridge and Interstate Maintenance programs. The MPOs typically stated that this provision had some impact but was mainly symbolic. For example, one official told us that the selection of projects from a financially constrained TIP was little more than an administrative sign-off. Of much greater significance was the development of a financially constrained TIP. As an official of the Albany, New York, MPO explained, all of the projects in a financially constrained TIP are intended for implementation; consequently, the development of the TIP is the real decision point for project identification. Four of the 12 large MPOs that we interviewed said that ISTEA had little or only some influence on their authority to select projects. Two of these noted that their influence increased only minimally after ISTEA because they had an acceptable level of influence before ISTEA. For example, the Milwaukee MPO told us that it has long had a constructive working relationship with the Wisconsin Department of Transportation. Although the MPO noted that ISTEA had some impact on its authority, it said that it did not just wrest authority from the state and present its decisions as a fait accompli; a cooperative working relationship with the state was critical. As discussed above, two other MPOs had different experiences. Despite the range of views on ISTEA's impact, the MPOs we interviewed unanimously supported both the ISTEA language that delegates the authority to select projects to larger MPOs and the other provisions that have enhanced the MPOs' authority. MPOs and states to some extent have differing views on continuing ISTEA's planning provisions. While the MPOs we interviewed unanimously endorsed the continuation of the public participation, financial constraint, and project selection requirements, some states opposed the continuation of these requirements. Furthermore, as table 2.2 indicates, AASHTO and the Association of Metropolitan Planning Organizations (AMPO) have taken differing positions on continuing certain planning provisions of ISTEA. AMPO cited ISTEA's requirements for involving the public as model legislative provisions for ensuring broad-based involvement by citizens and local elected officials. 
While noting the benefits of involving the public, AASHTO stated that the regulations on such involvement are too detailed and prescriptive. It emphasized state and local flexibility in developing the process of involving the public. It also noted that the detailed requirements in federal regulations and guidance can lead to substantial delays on projects and to court challenges. Nearly all the state officials we interviewed supported the continuation of the requirements to involve the public that are contained in the legislation. However, as noted earlier, some states also expressed concern about the impacts of overly prescriptive regulations. According to AMPO's policy statement, ISTEA's requirements for financially constrained plans and programs are consistent with sound business practices, and AMPO strongly supports the continuation of the requirements. AASHTO states that in financially constraining TIPs, MPOs should have the flexibility to program at a level that enables them to deal with the uncertainty of project schedules and with fluctuating levels of federal funding. State officials expressed similar concerns. Four of the 11 state planning officials we contacted opposed the retention of this requirement. While they support the principle of financially constraining the TIP, they believe that the regulatory interpretation is too strict. Three of the four stated that the planning regulations should allow some over-programming. As one MPO explained, delays are inevitable on some projects because of environmental permitting or other reasons. Because the process of amending a TIP—for example, adding a new project—is very time consuming and administratively difficult, this delay can be substantial. Several states we interviewed noted that a modest over-programming of the TIP—for example, by 10 percent—would circumvent this problem by including a short list of "ready to go" projects that could be funded in the event that other, higher-priority TIP projects were delayed. AMPO supported the financial constraint requirement for the long-term (20-year) plan. AASHTO, however, stated that the implementing regulations do not take into account the difficulty of predicting the amounts and sources of funding over a 20-year period. AASHTO noted that the requirement was unrealistic and could prevent MPOs from taking advantage of fiscal partnering arrangements. As a result, AASHTO calls for eliminating the ISTEA requirement to financially constrain long-term plans. In addition, 5 of the 11 states we interviewed opposed the continuation of this requirement. Typically, the states said that it is not possible to develop a reliable estimate of revenues over a 20-year period and that financially constraining the long-term plan inhibits a vision for the regional transportation system. AMPO and AASHTO are perhaps in clearest disagreement over the issue of the MPOs' authority to select projects. AMPO favors extending decision-making authority to all of the MPOs that desire to assume it. Potentially, this action would increase from 129 to 339 the number of MPOs with the authority to select projects. AASHTO's proposal to raise the threshold for the transportation management area to 1 million people would take the authority to select projects away from about 94 MPOs that currently have it. AASHTO contends that raising the threshold would restrict the authority to those urbanized areas likely to have the resources to meet the burdens this authority implies. 
AASHTO’s position on this issue was not well reflected in our interviews—only 2 of the 11 state officials we contacted opposed the retention of ISTEA’s current wording. Not surprisingly, these two states are the ones where we encountered a significant disagreement between the state and the MPO on the question of selection authority. The desirability of ensuring adequate involvement by the public and financial constraints on transportation programs was not disputed by the MPOs and states we interviewed, nor by AASHTO and AMPO. Furthermore, the difficulties of financially constraining long-term plans is clearly a challenge that some states and MPOs have met. In view of the benefits of these provisions, the problems faced in meeting these requirements may not require legislative changes. The key dispute we encountered among the three issues we explored—the delegation of the authority to select projects to a greater or lesser number of metropolitan planning organizations—is essentially an issue to be resolved through congressional deliberations. To ensure that urban transportation plans and programs are an outgrowth of the planning process that ISTEA prescribes, ISTEA required the Secretary of Transportation to conduct planning certification reviews at the MPOs in transportation management areas. The MPO and state officials we spoke to generally supported the certification process and described it as helpful and constructive. However, in reviewing 55 certification reports, we found that the reports are of limited usefulness in assessing trends or problem areas in the ISTEA planning process. First, the certification reports vary widely in format and content because the Department did not develop standard formats for assessing or reporting the MPOs’ compliance. Second, three MPOs were certified despite significant deficiencies in the urban transportation planning process. Accordingly, the results of the certification reviews cannot be used to develop a reliable understanding of the MPOs’ progress in meeting ISTEA’s planning requirements. This is an especially critical issue because the certification reviews are by far the most in-depth assessments of the MPOs’ performance in transportation planning. ISTEA requires that the Secretary of Transportation certify that metropolitan transportation planning conforms with ISTEA’s planning provisions. Specifically, at least once every 3 years, FHWA and FTA must jointly review and evaluate the planning processes for each of the nation’s 129 MPOs located in TMAs. If, on the basis of their joint review, FHWA and FTA determine that the planning process meets or substantially meets the planning requirements, they may either jointly certify the planning process or conditionally certify the process subject to specified corrective actions. If FHWA and FTA find that the planning process in a TMA does not meet the requirements, certification is denied, and FHWA and FTA may withhold all or part of the apportioned federal highway and transit funds, or withhold their approval of certain projects. This requirement was a significant change in federal oversight policy. Since 1983, the urban transportation planning regulations have required that the state and the MPO “self-certify” that the urban transportation planning process is in conformance with the continuing, cooperative, and comprehensive (3-C) process called for in the law and the regulations. Self-certification was intended to grant increased responsibility for transportation planning to the states and MPOs. 
Under ISTEA, the MPOs and the states will continue to self-certify annually. The FHWA and FTA certification reviews are comprehensive. First, they cover all 129 TMAs with the results of the reviews reflective of large urban areas. Second, the reviews cover a range of planning topics focusing on six areas: incorporation of the 15 planning factors in the planning process, development of early and continuing involvement by the public, completion of detailed alternative studies when considering major transportation investments in a corridor, development of a congestion management system incorporating measures to reduce travel demand, assurance that plans and programs conform with air quality plans and the Clean Air Act Amendments of 1990, and development of financial constraints on plans and programs. Certification reviews consist of a desk audit, during which FHWA and FTA staff review pertinent files and supporting documentation pertaining to the planning process; a site visit that includes extensive meetings with members of the MPO’s governing board and technical staff, state transportation officials, and other local officials; a public meeting to allow members of the general public to share their impressions of the planning process; and the preparation of a report on the certification review. The on-site reviews can last 5 days and include eight or more representatives of FHWA and FTA staff from headquarters, the regions, and field offices. In commenting on a draft of this report, DOT officials stated that although the certification reviews are the formal mechanism for ensuring compliance, DOT uses a number of other means as well. For example, DOT reviews and approves planning work programs for all metropolitan areas, assesses the TIP and TIP amendments for conformity with that state’s air quality plan in areas not meeting federal air quality standards, and reviews and approves state TIPs. DOT is also conducting a series of enhanced planning reviews (EPR) in a much more limited number of urban areas. According to an official of DOT’s Volpe Transportation Center, the EPRs are intended to be less judgmental and regulatory oriented than the certification reviews. The MPOs and the states have differing views on the certification review process. The MPOs and states we interviewed generally see the process as constructive and helpful and support its continuation. However, some also noted that the reviews could be done more efficiently and the results reported in a more timely manner. AASHTO has called for the elimination of the certification reviews because they are time consuming. Five of the 12 large MPOs we interviewed had been certified as of May 1996. Each of these MPOs told us that the certification review was constructive and helpful and stated that the requirement for certification by DOT should be continued. For example, the representatives of the Milwaukee MPO said that the process was constructive and that it would be unwise for the federal government to dole out money with no accountability for compliance with the federal planning guidelines. Also, the certification review provides local elected officials and MPO staff the opportunity to meet with federal officials and get a better feel for what is expected, as well as useful critiques of how the MPO staff approach their job. The Springfield, Massachusetts, MPO staff told us that FHWA and FTA reviewers helped begin the movement toward greater regional control of the MPO. 
For example, the certification review began a dialogue on the need to give regional officials greater representation on the MPO's board. On the other hand, one MPO noted that the on-site reviews could be completed in less time. For example, the planning staff of the Pensacola MPO said that the on-site visit took almost a full week and could have been done in a day and a half. Attributing the length of the visit to the fact that it was a first-time effort, they said that the visits would likely be briefer in subsequent reviews. Officials from 8 of 11 states we contacted had experience with the process of MPO certification reviews. Four of them supported the continuation of the process, one opposed continuation, and two were neutral or had no opinion. While most of these state officials supported the process, several noted that DOT should emphasize a constructive process rather than a fault-finding audit approach. A Texas official noted that the reviews, in contrast to the practice of self-certification, give the planners an objective assessment of their performance. AASHTO advocates eliminating the certification reviews. It asserted that the reviews are too time consuming and cumbersome for many states and do little to improve the planning process. As of January 12, 1996, DOT had issued certification reports on 55 MPOs. Twenty-three MPOs were certified without qualification, and 31 were certified subject to certain corrective actions being taken. To date, one MPO has not been certified—the MPO for the Boston metropolitan area; its certification was held in abeyance. The overriding issue in this case was the insufficient role that local elected officials had played in the planning process. For example, in meetings between FHWA and FTA staff and 12 local elected officials, the local officials unanimously complained that they had virtually no opportunity to be part of the decision-making process. While Boston was the sole instance in which DOT postponed certification of the planning process, our review of the reports on certification reviews indicates that conditional certifications were issued for some MPOs in serious noncompliance with ISTEA's planning requirements. For example, the reports on other Massachusetts MPOs noted insufficient local representation and state dominance of the planning process. The Worcester, Massachusetts, MPO was certified even though it had no local officials on its policy body, the MPO's technical board had not met publicly since 1976, no public involvement process had been formally adopted, and TIPs and transportation plans were not appropriately financially constrained. In addition, although the Springfield, Massachusetts, MPO's policy body had not met in 14 years and included no local elected officials, the MPO was certified. Numerous instances of noncompliance were also identified in the report for the Louisville, Kentucky, MPO. The over-arching issue was a lack of communication and cooperation among the key regional planning entities. The states of Kentucky and Indiana, as well as the city of Louisville, were carrying out many planning activities outside of the MPO process, prompting the reviewers to state that they found parochialism far more prevalent than regionalism. FHWA's review noted that the entities in the urbanized area were more concerned with getting their "piece of the pie" than with the good of the region. As a result of these concerns, the reviewers recommended that the MPO be conditionally certified for 1 year. 
DOT certified these MPOs because of its flexible approach in the first round of reviews. According to an FHWA headquarters official, the current round of reviews began 3 years after ISTEA's passage but only a year after the final planning regulations were issued. As a result, DOT felt that a phase-in of requirements and a lenient approach in the first round of reviews were appropriate. This was particularly true during the pilot reviews, which included the reviews of Worcester and Louisville. Decertification, the official said, would have occurred only in the case of egregious noncompliance, such as the failure to submit a TIP. Because the certifications must be completed every 3 years, FHWA and FTA regional and divisional offices are devoting considerable resources to the certification reviews. For example, officials in FHWA's Region 4 estimated that FHWA and FTA had spent a total of 1,105 staff days in conducting and reporting the results of 19 certification reviews within their region, averaging 58 staff days per review. In addition, FHWA and FTA personnel in two other regions we contacted spent 420 staff days and 408 staff days, respectively, completing the certification reviews in their own jurisdictions over the same period. This accounting does not include the travel and per diem costs involved in the reviews. A certification review can last 5 days and include 8 or more representatives from FTA and FHWA headquarters and regional and field offices. Despite this large resource commitment, in our review of the 55 certification reports published through January 12, 1996, we found that the reports were not documented in a way that allows comparisons between one MPO and another, or a meaningful assessment of the progress that the MPOs are making in meeting the planning requirements. The reports vary significantly in format, depth, and content. In one FHWA region, for example, all six of the certification reports that we examined were no more than four pages long, were written in a very summary fashion, and contained limited discussions of how the MPOs complied with the six focal areas under review. By contrast, the certification reports from several other FHWA regions were quite lengthy, as long as 29 pages and averaging over 15 pages. As a result, a national overview of the MPOs' progress in meeting the planning requirements would be quite difficult to develop. Variations also exist in the use of the key terms of certification reviews, such as "corrective action required" or "corrective action recommended." For example, one region's reports clearly distinguish corrective actions (areas where steps are needed to correct a regulatory deficiency) from optional recommendations for improvement. In some certification reports from other regions, however, it was not possible to distinguish corrective actions from recommendations. For example, the cover letter of one report stated that the MPO was certified subject to certain corrective actions. However, the body of the report did not name the corrective actions that the MPO was to undertake. Instead, it included a discussion of 11 recommendations, although it was not clear if these recommendations were required for certification or whether they were left to the discretion of the MPO. According to FHWA headquarters officials, the certification reviews were not intended to help assess a trend toward improvements in metropolitan transportation planning efforts. 
Instead, the purpose was to assess whether an individual MPO had substantially complied with the planning requirements. Furthermore, DOT wanted to avoid a defined format, so as to give certifying officials the flexibility to conduct the reviews in a way best suited to the MPO and its unique circumstances. Also, DOT wanted to encourage innovation and experimentation in conducting the reviews. Although DOT provided its certification reviewers with the flexibility to assess the MPOs' compliance with ISTEA planning requirements, the result of this flexibility has been that the certification reports provide limited information on how well MPOs have met these important ISTEA provisions. For example, the certification reports do not allow the Department to determine if the difficulties faced in financially constraining TIPs were similar across most MPOs, or whether these difficulties had similar root causes. Given the resources going into the effort and the resultant depth of the reviews, collecting consistent data for an overall assessment is important and would not preclude the flexibility the Department needs. Collecting these data is further justified since the certification reviews are by far the most comprehensive reviews of the MPOs' performance that are likely to be conducted. We recommend that the Secretary of Transportation direct the Administrators of the Federal Highway Administration and the Federal Transit Administration to develop reporting formats for assessing and reporting on the MPOs' compliance with ISTEA's planning requirements in such a way that the Department can identify any nationwide patterns in planning deficiencies, the underlying causes of these planning deficiencies, and the extent to which the MPOs have made progress in meeting the requirements. DOT officials disagreed with our conclusion that the information gathered during the certification reviews should be used to develop an overview of the MPOs' progress in meeting ISTEA's planning requirements. DOT officials stated that the certification reviews were not intended to assess the MPOs' overall progress; rather, they were intended to review the efforts of individual MPOs and provide those MPOs feedback on what they must do to fully meet ISTEA's planning requirements. In addition, officials stated that the certification process is one of several activities that the Department has taken or plans to take to determine the MPOs' compliance with the planning requirements and thereby assess the MPOs' overall progress in meeting the requirements. These additional activities include the Department's approval of TIPs and their conformity with state air quality plans; the sponsorship of studies, focus groups, and conferences on the MPOs' progress; and the use of enhanced planning reviews. The Department will use this body of information to assess the MPOs' compliance with the planning requirements and thereby provide the Congress with information on whether the MPO planning provisions should be continued in ISTEA's successor legislation. As a result of these concerns, DOT officials disagreed with the recommendation in our draft report that it develop standard criteria and reporting formats for its certification reviews so that the Department could assess and report on the MPOs' compliance with ISTEA's planning requirements. DOT officials stated that the recommendation was too prescriptive, particularly in its call for standard criteria, and suggested that we direct our recommendation to the Congress instead. 
We have incorporated information in the report that describes the additional activities that Department officials stated they have undertaken or plan to undertake to assess the MPOs’ progress in meeting ISTEA’s planning requirements. In addition, we have modified our proposed recommendation by deleting our original call for standard criteria to address the Department’s request for more flexibility in responding to our recommendation. However, we disagree with the Department’s characterization of the certification reviews as only one element in a broader effort to assess the MPOs’ compliance and progress. The scope and effort that the Department has placed in the certification reviews clearly suggest that the information obtained through the reviews is critical in assessing how well the MPOs have met the requirements. The certification reviews cover all 129 MPOs in the nation’s largest urban areas, assess the MPOs’ progress in six key planning areas, and require significant FHWA and FTA headquarters and regional staff time to complete. In contrast, the enhanced planning reviews as well as DOT-sponsored studies have reviewed only a small number of MPOs. Given this investment, we believe it is appropriate for the Department to develop standard formats for documenting the results of the certification reviews. A standard reporting format would not limit the Department’s flexibility to tailor the certification reviews to the particular needs of the MPO. Rather, it would provide the Department and the Congress with rich sources of information that they could use to evaluate whether or not the MPO planning provisions should be continued. DOT officials also suggested technical and editorial changes to the report. Where appropriate, we incorporated these changes.
Pursuant to a congressional request, GAO reviewed: (1) metropolitan planning organizations' (MPO) implementation of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) planning requirements; and (2) whether the Department of Transportation's certification review process ensures that MPOs in larger urban areas comply with those planning requirements. GAO found that: (1) the MPOs have found three of ISTEA's planning requirements particularly challenging to meet: (a) requiring greater involvement by citizens; (b) limiting short- and long-term transportation plans to reasonable revenue projections (the financial constraint requirement); and (c) selecting transportation projects; (2) the MPOs found that the requirement to involve citizens had ensured that their transportation plans better reflected their regions' transportation needs; (3) the financial constraint requirement led the MPOs to obtain more reliable revenue projections from the state departments of transportation and transit agencies and to exclude those projects that could not be financed within budget constraints; (4) ISTEA's project selection authority required the MPOs to become consensus builders, effectively working with the states, localities, and transit agencies in identifying projects; (5) in some cases, the efforts of the MPOs and the local officials to assume greater authority have encountered resistance from the states; (6) despite the difficulties encountered, the MPOs that GAO interviewed said that their efforts to meet these three planning requirements had improved their transportation plans; (7) the 13 MPOs that GAO interviewed unanimously endorsed the continuation of the ISTEA planning requirements; (8) in contrast, state department of transportation officials that GAO interviewed did not uniformly support the continuation of ISTEA's planning requirements; (9) as of January 1996, the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA) had reviewed 55 MPOs; (10) 23 were certified without qualification, and 31 were certified subject to certain corrective actions being taken; (11) the certification of one MPO was held in abeyance because of significant areas of noncompliance; (12) in reviewing 55 certification reports, GAO found that the reports are of limited usefulness in assessing trends or problem areas in the ISTEA planning process; (13) the certification reports vary widely in format and content because the Department did not develop standard criteria for assessing or reporting the MPOs' compliance; and (14) three MPOs were conditionally certified despite significant deficiencies in their urban transportation planning processes.
As we reported in our February 2014 report, since CSA was implemented nationwide in 2010, it has been successful in raising the profile of safety in the motor carrier industry and providing FMCSA with more tools to increase interventions with carriers. We found that following the implementation of CSA, FMCSA was potentially able to reach a larger number of carriers, primarily by sending them warning letters. Law enforcement officials and industry stakeholders we interviewed generally supported the structure of the CSA program, in part because CSA provides data about the safety record of individual carriers, such as data on inspections, violations, crashes, and investigations, that help guide the work of state inspectors. However, despite these advantages, our report also uncovered major challenges in reliably assessing safety risk and targeting the riskiest carriers.

First, according to FMCSA, SMS was designed to use all safety-related violations of FMCSA regulations recorded during roadside inspections. For SMS to be effective in identifying carriers at risk of crashing, the violation information that is used to calculate SMS scores should have a relationship with crash risk. However, we found that the relationship between the violation of most of these regulations and crash risk is unclear, potentially limiting the effectiveness of SMS in identifying carriers that are likely to crash. Our analysis found that most of the safety regulations used in SMS were violated too infrequently over a 2-year period to reliably assess whether they were accurate predictors of an individual carrier's likelihood to crash. Specifically, we found that 593 of the 754 regulations we examined were violated by less than one percent of carriers. Of the remaining regulations with sufficient violation data, we found 13 regulations for which violations consistently had some association with crash risk in at least half the tests we performed, and only two regulations had sufficient data to consistently establish a substantial and statistically reliable relationship with crash risk across all of our tests.

Second, most carriers lack sufficient safety performance data, such as information from inspections, to ensure that FMCSA can reliably compare them with other carriers. SMS scores are based on violation rates that are calculated by dividing a carrier's violations by either the number of inspections or vehicles associated with a carrier. The precision and reliability of these rates vary greatly depending on the number of inspections or vehicles a carrier has. Violation rates calculated for carriers with more inspections or vehicles will be more precise than those calculated for carriers with only a few inspections or vehicles. This statistical reality is critical to SMS, because for the majority of the industry, the number of inspections or vehicles for an individual carrier is very low. About two-thirds of carriers we evaluated operated fewer than four vehicles and more than 93 percent operated fewer than 20 vehicles. Moreover, many of these carriers' vehicles were inspected infrequently. Carriers with few inspections or vehicles will potentially have estimated violation rates that are artificially high or low and thus not sufficiently precise for comparison across carriers. This creates the likelihood that many SMS scores do not accurately or precisely assess safety for a specific carrier.
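To make the rate calculation above concrete, the sketch below pairs the per-inspection violation rate with a rough binomial standard error. It is an illustration only, not FMCSA's actual SMS computation, and the two carriers it compares are hypothetical; it simply shows why a rate built on a handful of inspections is far less precise than the same rate built on hundreds.

```python
import math

# Illustrative only: not FMCSA's SMS code. Shows the per-inspection
# violation rate described above, plus a rough binomial standard error
# to indicate how precision depends on the number of inspections.
# Both carriers below are hypothetical.

def violation_rate_and_se(violations: int, inspections: int) -> tuple[float, float]:
    """Return (rate, approximate standard error) for a carrier."""
    rate = violations / inspections
    se = math.sqrt(rate * (1 - rate) / inspections)
    return rate, se

# A carrier with 1 violation in 2 inspections and one with 100 violations
# in 200 inspections both score a 50 percent rate, but the small carrier's
# rate is roughly ten times less precise.
print(violation_rate_and_se(1, 2))      # (0.5, ~0.354)
print(violation_rate_and_se(100, 200))  # (0.5, ~0.035)
```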
FMCSA acknowledged that violation rates for carriers with few inspections or vehicles can be less precise, but the methods FMCSA uses to address this limitation are not effective. For example, FMCSA requires a minimum level of data (i.e., inspections or violations) for a carrier to receive an SMS score. However, we found that this level of data is not sufficient to ensure reliable results. Our analysis of the effectiveness of FMCSA's existing CSA methodology found that the majority of the carriers that SMS identified as having the highest risk for crashing in the future did not actually crash. Moreover, smaller carriers and carriers with few inspections or vehicles tended to be disproportionately targeted for intervention. As a result, FMCSA may devote intervention resources to carriers that do not necessarily pose as great a safety risk as other carriers. In our 2014 report, we illustrated that when SMS considered only carriers with more safety information, such as inspections, it was better able to identify carriers that later crashed and allowed for better targeting of resources. An approach like this would involve trade-offs; fewer carriers would receive SMS scores, but these scores would generally be more reliable for targeting FMCSA's intervention resources. FMCSA could still use the safety information available to oversee the remaining carriers the same way it currently oversees the approximately 72 percent of carriers that do not receive SMS scores under its existing approach.

Given the limitations of safety performance information, we concluded that it is important that FMCSA consider how reliable and precise SMS scores need to be for the purposes for which they are used. FMCSA reports these scores publicly and is considering using a carrier's performance information to determine its fitness to operate. FMCSA includes a disclaimer with the publicly released SMS scores, which states that the data are intended for agency and law enforcement purposes, and that readers should draw conclusions about a carrier's safety condition based on the carrier's official safety rating rather than its SMS score. At the same time, FMCSA has also stated that SMS provides stakeholders with valuable safety information, which can "empower motor carriers and other stakeholders…to make safety-based business decisions." As a result, some stakeholders we spoke to, such as industry and law enforcement groups, said that there is considerable confusion in the industry about what the SMS scores mean and that the public, unlike law enforcement, may not understand the limitations of the system.

Based on the concerns listed above, in our 2014 report we recommended that FMCSA revise the SMS methodology to better account for limitations in available information when drawing comparisons of safety performance across carriers. We further recommended that FMCSA's determination of a carrier's fitness to operate account for the limitations we identified regarding safety performance information. FMCSA did not concur with our recommendation to revise the SMS methodology because, according to FMCSA officials, SMS in its current state sufficiently prioritizes carriers for intervention purposes. However, FMCSA agreed with our recommendation on the determination of a carrier's fitness to operate but has not yet taken any action. As I will discuss later in my statement, we continue to believe that FMCSA should improve its SMS methodology.
As we reported in our March 2012 report, FMCSA also faces significant challenges in determining the prevalence of chameleon carriers, in part because there are approximately 75,000 new applicants each year. As mentioned earlier, chameleon carriers are motor carriers disguising their former identity to evade enforcement actions. FMCSA has established a vetting program to review each new application for operating authority submitted by passenger carriers (intercity and charter or tour bus operators) and household goods carriers (hired by consumers to move personal property). According to FMCSA officials, FMCSA vetted all applicants in these groups for two reasons: (1) these two groups pose higher safety and consumer protection concerns than other carrier groups and (2) it does not have the resources to vet all new carriers. While FMCSA's exclusive focus on passenger and household goods carriers limits the vetting program to a manageable number, it does not account for the risk presented by chameleon carriers in the other groups, such as for-hire freight carriers, which made up 98 percent of new applicants in 2010. We found that using data analysis to target new applicants would allow FMCSA to expand its examinations of newly registered carriers to include new applicants of all types using few or no additional staff resources. Our analysis of FMCSA data found that 1,136 new motor carrier applicants in 2010 had chameleon attributes, of which 1,082 were freight carriers. Even with the large number of new applicant carriers and constraints on its resources, we concluded in 2012 that FMCSA could target the carriers that present the highest risk of becoming chameleon carriers by using a data-driven, risk-based approach.

As a result of these findings, we recommended that FMCSA use a data-driven, risk-based approach to target carriers at high risk for becoming chameleon carriers. This would allow expansion of the vetting program to all carriers with chameleon attributes, including freight carriers. FMCSA agreed with our recommendation. In June 2013, to help better identify chameleon carriers, FMCSA developed and began testing a risk-based methodology that implemented a framework that closely follows the methodology we discussed in our report. FMCSA's preliminary analysis of this methodology indicates that it is generally successful in providing a risk-based screening of new applicants, which it plans to use as a front-end screening methodology for all carrier types seeking operating authority. By developing this risk-based methodology and analyzing the initial results, FMCSA has developed an approach that may help keep unsafe carriers off the road.

To further help Congress with its oversight of FMCSA and motor carrier safety, we also have ongoing work on FMCSA's hours-of-service regulations, DOD's Transportation Protective Services program, and commercial driver's licenses. This work is in various stages, and we expect to issue the final reports later this year. In conclusion, the commercial motor carrier industry is large and dynamic, and FMCSA plays an important role in identifying and removing unsafe carriers from the roadways. With over 500,000 active motor carriers, it is essential to examine ways to better target FMCSA's resources to motor carriers presenting the greatest risk. To effectively do this, FMCSA must use a number of strategies to identify and intervene with high risk carriers.
We continue to believe that a data-driven, risk-based approach for identifying high risk carriers holds promise. FMCSA's preliminary steps to implement a risk-based screening methodology have the potential to identify more high risk chameleon carriers. However, without efforts to revise its SMS methodology, FMCSA will not be able to effectively target its intervention resources toward the highest risk carriers and will be challenged to meet its mission of reducing the overall number of crashes, injuries, and fatalities involving large trucks and buses. Chairwoman Fischer, Ranking Member Booker, and Members of the Subcommittee, this concludes my prepared remarks. I would be pleased to answer any questions you or other Members may have at this time. For further information regarding this statement, please contact Susan Fleming at (202) 512-2834 or Flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Relations can be found on the last page of this statement. Matt Cook, Jen DuBord, Sarah Farkas, Brandon Haller, Matt LaTour, and Amy Rosewarne made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
FMCSA's primary mission of reducing crashes, injuries, and fatalities involving large trucks and buses is critical to the safety of our nation's highways. However, with more than 500,000 active motor carriers operating on U.S. roadways, FMCSA must screen, identify, and target its resources toward those carriers presenting the greatest risk for crashing in the future. FMCSA has recently taken some steps in this direction by, among other actions, establishing its oversight program—the CSA program—based on a data-driven approach for identifying motor carriers at risk of presenting a safety hazard or causing a crash, and establishing a vetting program designed to detect potential "chameleon" carriers—those carriers that have deliberately disguised their identity to evade enforcement actions issued against them. This testimony provides information on both of these programs, based on two recent GAO reports on the oversight challenges FMCSA faces in identifying high risk motor carriers for intervention (GAO-14-114) and in detecting chameleon carriers (GAO-12-364).

The Federal Motor Carrier Safety Administration (FMCSA) has taken steps toward better oversight of motor carriers by establishing the Compliance, Safety, Accountability (CSA) and chameleon carrier vetting programs; however, FMCSA could improve its oversight to better target high risk carriers. The CSA program oversees carriers' safety performance through roadside inspections and crash investigations, and issues violations when instances of noncompliance with safety regulations are found. CSA provides FMCSA, state safety authorities, and the industry with valuable information regarding carriers' performance on the road. A key component of CSA—the Safety Measurement System (SMS)—uses carrier performance data collected from inspections and investigations to calculate safety scores for carriers and identify those at high risk of causing a crash. The program then uses these scores to target high risk carriers for enforcement actions, such as warning letters, additional investigations, or fines. However, GAO's 2014 report identified two major challenges that limit the precision of SMS scores and the confidence that these scores effectively compare safety performance across carriers. First, SMS uses violations of safety-related regulations to calculate a score, but GAO found that most of these regulations were violated too infrequently to determine whether they were accurate predictors of crash risk. Second, most carriers lacked sufficient data from inspections and violations to ensure that a carrier's SMS score could be reliably compared with scores for other carriers. GAO concluded that these challenges raise questions about whether FMCSA is able to identify and target the carriers at highest risk for crashing in the future. To address these challenges, GAO recommended, among other things, that FMCSA revise the SMS methodology to better account for limitations in available information when drawing comparisons of safety performance across carriers. FMCSA did not concur with GAO's recommendation to revise the SMS methodology because it believed that SMS sufficiently prioritized carriers for intervention. Therefore, FMCSA has not taken any action.
GAO continues to believe that a data-driven, risk-based approach holds promise, and efforts to improve FMCSA's oversight could allow it to more effectively target its resources toward the highest risk carriers and better meet its mission of reducing the overall number of crashes, injuries, and fatalities involving motor carriers. GAO's 2012 report found that FMCSA examined only passenger and household goods carriers as part of its chameleon carrier vetting program for new applicants. GAO found that by modifying its vetting program, FMCSA could expand its examinations of newly registered carriers to include all types of carriers, including freight carriers, using few additional staff resources. GAO recommended that FMCSA develop, implement, and evaluate the effectiveness of a data-driven, risk-based vetting methodology to target carriers with chameleon attributes. FMCSA concurred with GAO's recommendation and has taken actions to address it.
Medicare is generally the primary source of health insurance for people age 65 and over. However, traditional Medicare leaves beneficiaries liable for considerable out-of-pocket costs, and most beneficiaries have supplemental coverage. Military retirees can also obtain some care from MTFs and, since October 1, 2001, DOD has provided comprehensive supplemental coverage to its retirees age 65 and over. Civilian federal retirees and dependents age 65 and over can obtain supplemental coverage from FEHBP. The demonstration tested extending this coverage to military retirees age 65 and over, and their dependents.

Medicare, a federally financed health insurance program for persons age 65 and older, some people with disabilities, and people with end-stage kidney disease, is typically the primary source of health insurance for persons age 65 and over. Eligible Medicare beneficiaries are automatically covered by part A, which includes inpatient hospital and hospice care, most skilled nursing facility (SNF) care, and some home health care. They can also pay a monthly premium ($54 in 2002) to join part B, which covers physician and outpatient services as well as those home health services not covered under part A. Outpatient prescription drugs are generally not covered. Under traditional fee-for-service Medicare, beneficiaries choose their own providers and Medicare reimburses those providers on a fee-for-service basis. Beneficiaries who receive care through traditional Medicare are responsible for paying a share of the costs for most services.

The alternative to traditional Medicare, Medicare+Choice, offers beneficiaries the option of enrolling in private managed care plans and other private health plans. In 1999, before the demonstration started, about 16 percent of all Medicare beneficiaries were enrolled in a Medicare+Choice plan; by 2002, the final year of the demonstration, enrollment had fallen to 12 percent. Medicare+Choice plans cover all basic Medicare benefits, and many also offer additional benefits such as prescription drugs, although most plans place a limit on the amount of drug costs they cover. These plans typically do not pay if their members use providers who are not in their plans, and plan members may have to obtain approval from their primary care doctors before they see specialists. Members of Medicare+Choice plans generally pay less out of pocket than they would under traditional Medicare.

Medicare's traditional fee-for-service benefit package and cost-sharing requirements leave beneficiaries liable for significant out-of-pocket costs, and most beneficiaries in traditional fee-for-service Medicare have supplemental coverage. This coverage typically pays part of Medicare's deductibles, coinsurance, and copayments, and may also provide benefits that Medicare does not cover—notably, outpatient prescription drugs. Major sources of supplemental coverage include employer-sponsored insurance, the standard Medigap policies sold by private insurers to individuals, and Medicaid.

Employer-sponsored insurance. About one-third of Medicare's beneficiaries have employer-sponsored supplemental coverage. These plans, which typically have cost-sharing requirements, pay for some costs not covered by Medicare, including part of the cost of prescription drugs.

Medigap. About one-quarter of Medicare's beneficiaries have Medigap, the only supplemental coverage option available to all beneficiaries when they initially enroll in Medicare.
Prior to 1992, insurers were free to establish the benefits for Medigap policies. The Omnibus Budget Reconciliation Act of 1990 (OBRA 1990) required that, beginning in 1992, Medigap policies be standardized, and OBRA authorized 10 different benefit packages, known as plans A through J, that insurers could offer. The most popular Medigap policy is plan F, which covers Medicare coinsurance and deductibles, but not prescription drugs. It had an average annual premium per person of about $1,200 in 1999, although in some cases plan F cost twice that amount. Among the least popular Medigap policies are those offering prescription drug coverage. These policies are the most expensive of the 10 standard policies—they averaged about $1,600 in 1999, and some cost over $5,000. Beneficiaries with these policies pay most of the cost of drugs because the Medigap drug benefit has a deductible and high cost sharing and does not reimburse policyholders for drug expenses above a set limit.

DOD provides health care to active-duty military personnel and retirees, and to eligible dependents and survivors, through its TRICARE program. Prior to 2001, retirees lost most of their military health coverage when they turned age 65, although they could still use MTFs when space was available, and they could obtain prescription drugs without charge from MTF pharmacies. In the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 (NDAA 2001), Congress established two new benefits to supplement military retirees' Medicare coverage:

Pharmacy benefit. Effective April 1, 2001, military retirees age 65 and over were given access to prescription drugs through TRICARE's National Mail Order Pharmacy (NMOP) and civilian pharmacies. Retirees make lower copayments for prescription drugs purchased through NMOP than at civilian pharmacies. Retirees continue to have access to free prescription drugs at MTF pharmacies.

TFL. Effective October 1, 2001, military retirees age 65 and over who were enrolled in Medicare part B became eligible for TFL. As a result, DOD is now a secondary payer for these retirees' Medicare-covered services, paying all of their required cost sharing. TFL also offers certain benefits not covered by Medicare, including catastrophic coverage. Retirees can continue to use MTFs without charge on a "space available" basis.

In fiscal year 1999, before TFL was established, DOD's annual appropriations for health care were about $16 billion, of which over $1 billion funded the care of military retirees age 65 and over. In fiscal year 2002, DOD's annual health care appropriations totaled about $24 billion, of which over $5 billion funded the care of retirees age 65 and over who used TFL, the pharmacy benefit, and MTF care. In addition to their DOD coverage, military retirees—but generally not their dependents—can use Department of Veterans Affairs (VA) facilities. VA operates 163 medical centers throughout the country, which provide inpatient and outpatient care, as well as over 850 outpatient clinics. VA care is free to veterans with certain service-connected disabilities or low incomes; other veterans are eligible for care but have lower priority and are required to make copayments.

FEHBP, the health insurance program administered by OPM for federal civilian employees and retirees, covered about 8.3 million people in 2002. Civilian employees become eligible for FEHBP when hired by the federal government.
Employees and retirees can purchase health insurance from a variety of private plans, including both managed care and fee-for-service plans, that offer a broad range of benefits, including prescription drugs. Insurers offer both self-only plans and family plans, which also cover the policyholders' dependents. Some plans also offer two levels of benefits: a standard option and a high option, which has more benefits, less cost sharing, or both. For retirees age 65 and over, FEHBP supplements Medicare, paying beneficiaries' Medicare deductibles and coinsurance in addition to paying some costs not covered by Medicare, such as part of the cost of prescription drugs.

Over two-thirds of FEHBP policyholders are in national plans; the remainder are in local plans. National plans include plans that are available to all civilian employees and retirees as well as plans that are available only to particular groups, for example, foreign service employees. In the FEHBP, the largest national plan is Blue Cross Blue Shield, accounting for about 45 percent of those insured by an FEHBP plan. Other national plans account for about 24 percent of insured individuals. The national plans are all preferred provider organizations (PPO) in which enrollees use doctors, hospitals, and other providers that belong to the plan's network, but are allowed to use providers outside of the network for an additional cost. Local plans, which operate in selected geographic areas and are mostly managed care, cover the remaining 32 percent of people insured by the FEHBP.

Civilian employees who enroll in FEHBP can change plans during an annual enrollment period. During this period, which runs from mid-November to mid-December, beneficiaries eligible for FEHBP can select new plans for the forthcoming calendar year. To assist these beneficiaries in selecting plans, OPM provides general information on FEHBP through brochures and its Web site. Also, as part of this information campaign, plans' representatives may visit government agencies to participate in health fairs, where they provide detailed information about their specific health plans to government employees.

The premiums charged by these plans, which are negotiated annually between OPM and the plans, depend on the benefits offered by the plan, the type of plan—fee-for-service or managed care—and the plan's out-of-pocket costs for the enrollee. Plans may propose changes to benefits as well as changes in out-of-pocket payments by enrollees. OPM and the plans negotiate these changes and take them into account when negotiating premiums. Fee-for-service plans must base their rates on the claims experience of their FEHBP enrollees, while adjusting for changes in benefits and out-of-pocket payments, and must provide OPM with data to justify their proposed rates. Managed care plans must give FEHBP the best rate that they offer to groups of similar size in the private sector under similar conditions, with adjustments to account for differences in the demographic characteristics of FEHBP enrollees and the benefits provided. The government pays a maximum of 72 percent of the weighted average premium of all plans and no more than 75 percent of any plan's premium. Unlike most other plans, including employer-sponsored insurance and Medigap, FEHBP plans charge the same premium to all enrollees, regardless of age. As a result, persons over age 65, for whom the FEHBP plan supplements Medicare, pay the same rate as those under age 65, for whom the FEHBP plan is the primary insurer.
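As a rough illustration of the contribution rule just described, the government share can be viewed as the lesser of two caps. The sketch below is illustrative only; the helper function and the dollar figures are hypothetical, not actual FEHBP premiums or OPM calculations.

```python
# Illustrative only: a sketch of the contribution rule described above,
# in which the government pays the lesser of 72 percent of the weighted
# average premium across all plans and 75 percent of the chosen plan's
# premium. The dollar amounts are hypothetical, not actual FEHBP premiums.

def government_share(plan_premium: float, weighted_avg_premium: float) -> float:
    return min(0.72 * weighted_avg_premium, 0.75 * plan_premium)

weighted_avg = 400.0  # hypothetical weighted average monthly premium
for premium in (300.0, 500.0):
    gov = government_share(premium, weighted_avg)
    print(f"plan premium ${premium:.0f}: government pays ${gov:.2f}, "
          f"enrollee pays ${premium - gov:.2f}")
# The $300 plan is limited by the 75 percent cap ($225); the $500 plan is
# limited by 72 percent of the weighted average ($288).
```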
The FEHBP demonstration allowed eligible beneficiaries in the demonstration sites to enroll in an FEHBP plan. The demonstration ran for 3 years, from January 1, 2000, through December 31, 2002. The law that established the demonstration capped enrollment at 66,000 beneficiaries and specified that DOD and OPM should jointly select from 6 to 10 sites. Initially, the agencies selected 8 sites that had about 69,000 eligible beneficiaries according to DOD's calculation for 2000. (See table 1.) Four sites had MTFs, and 1 site—Dover—also participated in the subvention demonstration. Two other sites, which had about 57,000 eligible beneficiaries, were added in 2001. Demonstration enrollees received the same benefits as civilian FEHBP enrollees, but could no longer use MTFs or MTF pharmacies.

Military retirees age 65 and over and their dependents age 65 and over were permitted to enroll in either self-only or family FEHBP plans. Dependents who were under age 65 could be covered only if the eligible retiree chose a family plan. Several other groups were also permitted to enroll: unremarried former spouses of a member or former member of the armed forces entitled to military retiree health care; dependents of a deceased member or former member of the armed forces entitled to military retiree health care; and dependents of a member of the armed services who died while on active duty for more than 30 days. About 13 percent of those eligible for the demonstration were under age 65. DOD, with assistance from OPM, was responsible for providing eligible beneficiaries information on the demonstration. A description of this information campaign is in appendix IV.

The demonstration guaranteed enrollees who dropped their Medigap policies the right to resume their coverage under 4 of the 10 standard Medigap policies—plans A, B, C, and F—at the end of the demonstration. However, demonstration enrollees who held any other standard Medigap policies, or Medigap policies obtained before the standard plans were established, were not given the right to regain the policies. Enrollees who dropped their employer-sponsored retiree health coverage had no guarantee that they could regain it.

Each plan was required by OPM to offer the same package of benefits to demonstration enrollees that it offered in the civilian FEHBP, and plans operating in the demonstration sites were generally required to participate in the demonstration. Fee-for-service plans that limit enrollment to specific groups, such as foreign service employees, did not participate. In addition, health maintenance organizations (HMO) and point-of-service (POS) plans were not required to participate if their civilian FEHBP enrollment was less than 300 or their service area overlapped only a small part of the demonstration site. Thirty-one local plans participated in the demonstration in 2000; for another 14 local plans participation was optional, and none of these participated. The law established a separate risk pool for the demonstration, so that any losses from the demonstration would not be borne by persons insured under the civilian FEHBP. As a result, plans had to establish separate reserves for the demonstration and were allowed to charge different premiums in the demonstration than they charged in the civilian program. Enrollment in the demonstration was low, although enrollment in Puerto Rico was substantially higher than on the U.S. mainland.
Among eligible beneficiaries who knew about the demonstration yet chose not to enroll, most were satisfied with their existing health care coverage and preferred it to the demonstration's benefits. Lack of knowledge about the demonstration accounted for only a small part of the low enrollment. Although most eligible retirees did not enroll in a demonstration plan, several factors encouraged enrollment. Some retirees took the view that the demonstration plans' benefits, notably prescription drug coverage, were better than available alternatives. Other retirees mentioned lack of satisfactory alternative coverage. In particular, retirees who were not covered by an existing Medicare+Choice or employer-sponsored health plan were much more likely to enroll. The higher enrollment in Puerto Rico reflected a higher proportion of retirees there who considered the demonstration's benefits—ranging from drug coverage to choice of doctors—better than what they had. It also reflected in part Puerto Rico's greater share of retirees without existing coverage, such as an employer-sponsored plan.

While some military retiree organizations as well as a large FEHBP plan predicted at the start of the demonstration that enrollment would reach 25 percent or more of eligible beneficiaries, demonstration-wide enrollment was 3.6 percent in 2000 and 5.5 percent in 2001. In 2002, following the introduction of the senior pharmacy benefit and TFL the previous year, demonstration-wide enrollment fell to 3.2 percent. (See fig. 1.) The demonstration's enrollment peaked at 7,521 beneficiaries, and by 2002 had declined to 4,367 of the 137,230 eligible beneficiaries. These low demonstration-wide enrollment rates masked a sizeable difference in enrollment between the mainland sites and Puerto Rico. In 2000, enrollment in Puerto Rico was 13.2 percent of eligible beneficiaries—about five times the rate on the mainland. By 2001, Puerto Rico's enrollment had climbed to 28.6 percent. Unlike 2002 enrollment on the mainland, which declined, enrollment in Puerto Rico that year rose slightly, to 30 percent. (See fig. 2.) Among the mainland sites, there were also sizeable differences in enrollment, ranging from 1.3 percent in Dover, Delaware, in 2001, to 8.8 percent in Humboldt County, California, that year. Enrollment at all mainland sites declined in 2002.

Retirees who knew about the demonstration and did not enroll cited many reasons for their decision, notably that their existing coverage's benefits—in particular its prescription drug benefit—and costs were more attractive than those of the demonstration. In addition, nonenrollees expressed several concerns, including uncertainty about whether they could regain their Medicare supplemental coverage after the demonstration ended.

Benefits of existing coverage. Almost two-thirds of nonenrollees who knew about the demonstration reported that they were satisfied with their existing employer-sponsored or other health coverage. For the majority of nonenrollees with private employer-sponsored coverage, the demonstration's benefits were no better than those offered by their current plan.

Costs of existing coverage. Nearly 30 percent of nonenrollees who knew about the demonstration stated that its plans were too costly. This was likely a significant concern for retirees interested in a managed care plan, such as a Medicare+Choice plan, whose premiums were generally lower than those of demonstration plans.

Prescription drugs and availability of doctors.
In explaining their decision not to enroll, many eligible beneficiaries who knew about the demonstration focused on limitations of specific features of the benefits package that they said were less attractive than similar features of their existing coverage. More than one-quarter of nonenrollees cited not being able to continue getting prescriptions filled without charge at MTF pharmacies if they enrolled. More than one-quarter also said their decision at least partly reflected not being able to keep their current doctors if they enrolled. These nonenrollees may have been considering joining one of the demonstration's managed care plans, which generally limit the number of doctors included in their provider networks. Otherwise, they would have been able to keep their doctors, because PPOs, while encouraging the use of network doctors, permit individuals to select their own doctors at an additional cost.

Uncertainty. About one-fourth of nonenrollees said they were uncertain about the viability of the demonstration and wanted to wait to see how it worked out. In addition, more than 20 percent of nonenrollees were concerned that the demonstration was temporary and would end in 3 years. Furthermore, some nonenrollees who looked beyond the demonstration period expressed uncertainty about what their coverage would be after the demonstration ended: Roughly one-quarter expressed concern that joining a demonstration plan meant risking the future loss of other coverage—either Medigap or employer-sponsored insurance. Finally, about one-quarter of nonenrollees were uncertain about how the demonstration would mesh with Medicare.

Lack of knowledge—although common among eligible retirees—was only a small factor in explaining low enrollment. If everyone eligible for the demonstration had known about it, enrollment might have doubled, but would still have been low. DOD undertook an extensive information campaign, intended to inform all eligible beneficiaries about the demonstration, but nearly 54 percent of those eligible for the demonstration did not know about it at the time of our survey (May through August 2000). Of those who knew about the demonstration, only 7.4 percent enrolled. Those who did not know about the demonstration were different in several respects from those who did: They were more likely to be single, female, African American, or older than age 75; to have annual income of $40,000 or less; to live an hour or more from an MTF; to lack employer-sponsored health insurance; not to be officers; not to belong to military retiree organizations; and to live in the demonstration areas of Camp Pendleton, California; Dallas, Texas; and Fort Knox, Kentucky. Accounting for the different characteristics of those retirees who knew about the demonstration and those who did not, we found that roughly 7 percent of those who did not know about the demonstration would have enrolled in 2000 if they had known about it. As a result, we estimate that demonstration-wide enrollment would have been about 7 percent if all eligible retirees had known about the demonstration. (See app. II.) Comparison of enrollment in Puerto Rico and the mainland sites also suggests that, among the factors that led to low enrollment, knowledge about the demonstration was not decisive. In 2000, fewer people in Puerto Rico reported knowing about the demonstration than on the mainland (35 percent versus 47 percent). Nonetheless, enrollment in Puerto Rico was much higher.
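The estimate that enrollment might have roughly doubled follows from simple weighted-average arithmetic. The sketch below reproduces that arithmetic using rounded versions of the survey figures reported in this section; it is illustrative only and is not the estimation model described in appendix II.

```python
# Back-of-the-envelope version of the enrollment estimate described above,
# using rounded survey figures from this section (not the appendix II model).

knew = 0.46              # approximate share of eligible beneficiaries who knew of the demonstration
did_not_know = 0.54      # share who did not know
rate_if_knew = 0.074     # enrollment rate among those who knew
rate_if_informed = 0.07  # estimated rate for those who did not know, had they known

actual = knew * rate_if_knew
counterfactual = actual + did_not_know * rate_if_informed

# Roughly 3.4 percent actual versus about 7.2 percent if everyone had known.
print(f"actual: {actual:.3f}, if all had known: {counterfactual:.3f}")
```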
In making the decision to enroll, retirees were attracted to an FEHBP plan if it had better benefits—particularly prescription drug coverage—or lower costs than their current coverage or other available coverage. Among those who knew about the demonstration, retirees who enrolled were typically positive about one or both of the following:

Better FEHBP benefits. Two-thirds of enrollees cited their demonstration plan's benefits package as a reason to enroll, with just over half saying the benefits package was better than other coverage available to them. Nearly two-thirds of enrollees mentioned the better coverage of prescription drugs offered by their demonstration plan. Furthermore, the inclusiveness of FEHBP plans' networks of providers mattered to a majority of enrollees: More than three-fifths mentioned as a reason for enrolling that they could keep their current doctors under the demonstration.

Lower demonstration plan costs. Among enrollees, about 62 percent said that their demonstration FEHBP plan was less costly than other coverage they could buy.

Beneficiaries' favorable assessments of FEHBP—and their enrollment in the demonstration—were related to whether they lacked alternative coverage to traditional Medicare and, if they had such coverage, to the type of coverage. In 2000, among those who lacked employer-sponsored coverage or a Medicare+Choice plan, or lived more than an hour's travel time from an MTF, about 15 percent enrolled. By contrast, among those who had such coverage, or had MTF access, 4 percent enrolled. In particular, enrollment in an FEHBP plan was more likely for retirees who lacked either Medicare+Choice or employer-sponsored coverage.

Lack of Medicare+Choice. Controlling for other factors affecting enrollment, those who did not use Medicare+Choice were much more likely to enroll in a demonstration plan than those who did. (See fig. 3.) Several reasons may account for this. First, in contrast to fee-for-service Medicare, Medicare+Choice plans are often less costly out-of-pocket, typically requiring no deductibles and lower cost sharing for physician visits and other outpatient services. Second, unlike fee-for-service Medicare, many Medicare+Choice plans offered a prescription drug benefit. Third, while Medicare+Choice plan benefits were similar to those offered by demonstration FEHBP plans, Medicare+Choice premiums were typically less than those charged by the more popular demonstration plans, including Blue Cross Blue Shield, the most popular demonstration plan on the mainland.

Lack of employer-sponsored coverage. Retirees who did not have employer-sponsored health coverage were also more likely to join a demonstration plan. Of those who did not have employer-sponsored coverage, 8.6 percent enrolled in the demonstration, compared to 4.7 percent of those who had such coverage. Since benefits in employer-sponsored health plans often resemble FEHBP benefits, retirees with employer-sponsored coverage would have been less likely to find FEHBP plans attractive.

Retirees with another type of alternative coverage, Medigap, responded differently to the demonstration. Unlike the pattern with other types of insurance coverage, more of those with a Medigap plan enrolled (9.3 percent) than did those without Medigap (5.6 percent). Medigap plans generally offered fewer benefits than a demonstration FEHBP plan, but at the same or higher cost to the retiree. Seven of the 10 types of Medigap plans available to those eligible for the demonstration do not cover prescription drugs.
As a result of these differences, retirees who were covered by Medigap policies would have had an incentive to enroll instead in a demonstration FEHBP plan, which offered drug coverage and other benefits at a lower premium cost than the most popular Medigap plan. Like the lack of Medicare+Choice or employer-sponsored coverage, lack of nearby MTF care stimulated enrollment. Living more than an hour from an MTF was associated with higher demonstration enrollment; for retirees living closer, MTF care may have served as a satisfactory supplement to Medicare-covered care, making demonstration FEHBP plans less attractive to them. Of eligible retirees who knew of the demonstration and lived within 1 hour of an MTF, 3.7 percent enrolled, compared to 11.1 percent of those who lived more than 1 hour away.

Higher enrollment in Puerto Rico reflected in part a more widespread lack of satisfactory alternative health coverage there than on the mainland. In Puerto Rico, of those who knew of the demonstration, the share of eligible retirees with employer-sponsored health coverage (14 percent) was about half that on the mainland (27 percent). In addition, before September 2001, no Medicare+Choice plan was available in Puerto Rico. By contrast, in mainland sites where Medicare+Choice plans were available, their attractive cost sharing and other benefits discouraged retirees from enrolling in demonstration plans. Other factors associated with Puerto Rico's high enrollment and cited by enrollees there included the demonstration plan's better benefits package—especially prescription drug coverage—compared to many retirees' alternatives, the demonstration plan's broader choice of doctors, and the plan's reputation for quality of care.

The premiums charged by the demonstration plans varied widely, reflecting differences in how they dealt with the concern that the demonstration would attract a disproportionate number of sick, high-cost enrollees. To address these concerns, plans generally followed one of two strategies. Most plans charged higher premiums than those they charged to their civilian FEHBP enrollees—a strategy that could have provided a financial cushion and possibly discouraged enrollment. A small number of plans set premiums at or near their premiums for the civilian FEHBP with the aim of attracting a mix of enrollees who would not be disproportionately sick. Plans' underlying concern that they would attract a sicker population was not borne out. In the first year of the demonstration, for example, on average health care for demonstration retirees was 50 percent less expensive per enrollee than the care for their civilian FEHBP counterparts.

Demonstration plans charged widely varying premiums to enrollees, with the most popular plans offering some of the lowest premiums. In 2000, national plans' monthly premiums for individual coverage ranged from $65 for Blue Cross Blue Shield to $208 for the Alliance Health Plans. Among local plans—most of which were managed care—monthly premiums for individual coverage ranged from $43 for NYLCare Health Plans of the Southwest to $280 for Aetna U.S. Healthcare. Not surprisingly, few enrollees selected the more expensive plans. The two most popular plans were Blue Cross Blue Shield and Triple-S; the latter offered a POS plan in Puerto Rico. Both plans had relatively low monthly premiums—the Triple-S premium charged to individuals was $54 in the demonstration's first year.
Average premiums for national plans were about $20 higher than for local plans, which were largely managed care plans. (See table 2.) Some plans in the demonstration were well known in their market areas, while others—especially those open only to government employees—likely had much lower name recognition. Before the demonstration started, OPM officials told us that they expected beneficiaries to be unfamiliar with many of the plans included in the demonstration. These officials said that beneficiaries were likely to have experience with or knowledge of only Blue Cross Blue Shield and, possibly, some local HMOs. The success of Blue Cross Blue Shield relative to other national plans in attracting enrollees appears to support their view, as does Triple-S's success in Puerto Rico, where it is one of the island's largest insurers. In 2000, Blue Cross Blue Shield was the most popular plan in the demonstration, with 42 percent of demonstration-wide enrollment and 68 percent of enrollment on the mainland. Among national plans, the GEHA Benefit Plan (known as GEHA) was a distant second with 4 percent of enrollment. The other five national plans together captured less than 1 percent of all demonstration enrollment. Among local plans, Triple-S was most successful, capturing 96 percent of enrollment in Puerto Rico and 38 percent of enrollment demonstration-wide. The other local plans, taken together, accounted for about 14 percent of demonstration-wide enrollment.

Several factors contributed to plans' concern that they would attract sicker—and therefore more costly—enrollees in the demonstration. Plans did not have the information that they usually use to set premiums—claims history for fee-for-service plans and premiums charged to comparable private sector groups for managed care plans. Moreover, according to officials, some plans were reluctant to assume that demonstration enrollees would be similar to their counterparts in the civilian FEHBP. A representative from one of the large plans noted that the small size of the demonstration was also a concern. The number of people eligible for the demonstration (approaching 140,000 when the demonstration was expanded in 2001) was quite small compared to the number of people in the civilian program (8.5 million in 2001). If only a small number of people enrolled in a plan, one costly case could result in losses, because claims could exceed premiums.

In response to the concern that the demonstration might attract a disproportionate number of sick enrollees, plans developed two different strategies for setting premiums. Plans in one group, including Blue Cross Blue Shield and GEHA, kept their demonstration premiums at or near those they charged in the civilian FEHBP. Representatives of one plan explained that it could have priced high, but they believed that would have resulted in low enrollment and might have attracted a disproportionate number of sick—and therefore costly—enrollees. Instead, by keeping their premium at the same level as in the civilian program, these plan officials hoped to make their plan attractive to those who were in good health as well as to those who were not. Such a balanced mix of enrollees would increase the likelihood that a plan's revenues would exceed its costs. By contrast, some plans charged higher premiums in the demonstration—in some cases, 100 percent higher—than in the civilian FEHBP. Setting higher premiums might provide plans with a financial cushion to deal with potential high-cost enrollees.
While higher premiums might have discouraged enrollment and reduced plans' exposure to high-cost patients, this strategy carried the risk that those beneficiaries willing to pay very high premiums might be sick, high-cost patients. More than four-fifths of plans chose the second strategy, charging higher premiums in the demonstration than in the civilian FEHBP. In 2000, only two plans—both local plans—charged enrollees less in the demonstration than in the civilian program for individual, standard option policies; these represented about 6 percent of all plans. By contrast, three plans—about 9 percent of all plans—set premiums at least twice as high as premiums in the civilian FEHBP. (See fig. 4.)

The demonstration did not attract sicker, more costly enrollees—instead, military retirees who enrolled were less sick on average than eligible nonenrollees. We found that, as scored by a standard method to assess patients' health, older retirees who enrolled in the demonstration were an estimated 13 percent less sick than eligible nonenrollees. At each site enrollees were, on average, less sick than nonenrollees. In the GAO-DOD-OPM survey, fewer enrollees on the U.S. mainland (33 percent) reported that they or their spouses were in fair or poor health compared to nonenrollees (40 percent). Retirees who enrolled in demonstration plans had scores that indicated they were, on average, 19 percent less sick than civilian FEHBP enrollees in these plans. Plans' divergent strategies for setting premiums resulted in similar mixes of enrollees. Blue Cross Blue Shield and GEHA, neither of which increased premiums, attracted about the same proportion of individuals in poor health as plans on the mainland that raised premiums.

During 2000, the first year of the demonstration, enrolled retirees' health care was 28 percent less expensive—as measured by Medicare claims—than that of eligible nonenrolled retirees and one-third less expensive than that of their FEHBP counterparts. (See table 3.) The demonstration enrollees' average age (71.8 years) was lower than eligible nonenrollees' average age (73.1 years), which in turn was lower than the average age of civilian FEHBP retirees (75.2 years) in the demonstration areas. OPM has obtained from the three largest plans claims information that includes the cost of drugs and other services not covered by Medicare. These claims show a similar pattern: Demonstration enrollees were considerably less expensive than enrollees in the civilian FEHBP.

Although demonstration enrollees' costs were lower than those of their FEHBP counterparts in the first year, demonstration premiums generally remained higher than premiums for the civilian FEHBP. In 2001, the second year of the demonstration, only a limited portion of the first year's claims was available when OPM and the plans negotiated the premiums, so the lower demonstration costs had no effect on setting 2001 premiums. Demonstration premiums in 2001 increased more rapidly than the civilian premiums charged by the same plans: a 30 percent average increase in the demonstration for individual policies compared to a 9 percent increase for civilians in the same plans. In 2002, the third year, when both the plans and OPM were able to examine a complete set of claims for the first year before setting premiums, the pattern was reversed: On average, the demonstration premiums for individual policies fell more than 2 percent while the civilian premiums rose by 13 percent.
However, on average, 2002 premiums remained higher in the demonstration than in the civilian FEHBP. Blue Cross Blue Shield was an exception, charging a higher monthly premium for an individual policy to civilian enrollees ($89) in 2000 than to demonstration enrollees ($74).

Because the demonstration was open to only a small number of military retirees—and only a small fraction of those enrolled—the demonstration had little impact on DOD, nonenrollees, and MTFs. However, the impact on enrolled retirees was greater. If the FEHBP option were made permanent, the impact on DOD, nonenrollees, and MTFs would depend on the number of enrollees. Because of its small size, the demonstration had little impact on DOD's budget. About 140,000 of the more than 8 million people served by the DOD health system were eligible for the demonstration in its last 2 years. Enrollment at its highest was 7,521—about 5.5 percent of eligible beneficiaries. DOD's expenditures on enrollees' premiums that year totaled about $17 million—roughly 0.1 percent of its total health care budget. Under the demonstration, DOD was responsible for about 71 percent of each individual's premium, whereas under TFL it is responsible for the entire cost of roughly similar Medicare supplemental coverage. Probably because of its small size, the demonstration had no observable impact on either the ability of MTFs to assist in the training and readiness of military health care personnel or nonenrollees' access to MTF care. Officials at the four MTFs in demonstration sites told us that they had seen no impact from the demonstration on either MTFs or nonenrollees' access to care.

Since enrollees were typically attracted to the demonstration by both its benefits and its relatively low costs, the impact on those who enrolled was necessarily substantial. In the first 2 years, the demonstration provided enrollees with better supplemental coverage, which was less costly or had better benefits, or both. In the third year of the demonstration, after TFL and the retirees' pharmacy benefit were introduced and enrollment declined, the number of beneficiaries affected by the demonstration decreased. TFL entitled military retirees to low-cost, comprehensive coverage, making the more expensive FEHBP unattractive. The average enrollee premium for an individual policy in the demonstration's third year was $109 per month. In comparison, to obtain similar coverage under the combined TFL-pharmacy benefit, the only requirement was to pay the monthly Medicare part B premium of $54. Further, pharmacy out-of-pocket costs under TFL are less than those in the most popular FEHBP plan.

The impact on DOD of a permanent FEHBP option for military retirees nationwide would depend on the number of retirees who enrolled. For example, if the same percentage of eligible retirees enrolled in FEHBP as enrolled in 2002—after TFL and the retirees' pharmacy benefit were introduced—enrollment would be roughly 20,000 of the more than 1.5 million military retirees. As retirees' experience with TFL grows, their interest in an FEHBP alternative may decline further. As long as enrollment in a permanent FEHBP option remains small, the impact on DOD's ability to provide care at MTFs and on MTF readiness would also likely be small.

We provided DOD and OPM with the opportunity to comment on a draft of this report. In its written comments DOD stated that, overall, it concurred with our findings.
However, DOD differed with our description of the demonstration’s impact on DOD’s budget as small. In contrast, DOD described these costs of the 3-year demonstration–$28 million for FEHBP premiums and $11 million for administration—as substantial. While we do not disagree with these dollar-cost figures and have included them in this report, we consider them to be small when compared to DOD’s health care budget, which ranged from about $18 billion in fiscal year 2000 to about $24 billion in fiscal year 2002. For example, as we report, DOD’s premium costs for the demonstration during 2001, when enrollment peaked, were about $17 million—less than 0.1 percent of DOD’s health care budget. Although DOD’s cost per enrollee in the demonstration was substantial, the number of enrollees was small, resulting in the demonstration’s total cost to DOD being small. DOD’s comments appear in appendix VI. DOD also provided technical comments, which we incorporated as appropriate. OPM declined to comment. We are sending copies of this report to the Secretary of Defense and the Director of the Office of Personnel Management. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7101. Other GAO contacts and staff acknowledgments are listed in appendix VII. To determine why those eligible for the Federal Employees Health Benefits Program (FEHBP) demonstration enrolled or did not enroll in an FEHBP plan, we co-sponsored with the Department of Defense (DOD) and the Office of Personnel Management (OPM) a mail survey of eligible beneficiaries—military retirees and others eligible to participate in the demonstration. The survey was fielded during the first year of the demonstration, from May to August 2000, and was sent to a sample of eligible beneficiaries, both those who enrolled and those who did not enroll, at each of the eight demonstration sites operating at that time. The survey was designed to be statistically representative of eligible beneficiaries, enrollees, nonenrollees, and sites, and to facilitate valid comparisons between enrollees and nonenrollees. In constructing the questionnaire, we developed questions pertaining to individuals’ previous use of health care services, access to and satisfaction with care, health status, knowledge of the demonstration, reasons for enrolling or not enrolling in the demonstration, and other topics. Because eligible beneficiaries could choose FEHBP plans that also covered their family members, we included questions about spouses and dependent children. DOD and OPM officials and staff members from Westat, the DOD subcontractor with responsibility for administering the survey, provided input on the questionnaire’s content and format. After pretesting the questionnaire with a group of military retirees and their family members, the final questionnaire included the topic areas shown in table 4. We also produced a Spanish version of the questionnaire that was mailed to beneficiaries living in Puerto Rico. Working with DOD, OPM, and Westat, we defined the survey population as all persons living in the initial eight demonstration sites who were eligible to enroll in the demonstration. The population included military retirees, their spouses and dependents, and other eligible beneficiaries, such as unremarried former spouses, designated by law. 
We drew the survey sample from a database provided by DOD that listed all persons eligible for the demonstration as of April 1999. We stratified the sample by the eight demonstration sites and by enrollment status—enrollees and nonenrollees. Specifically, we used a stratified two-stage design in which households were selected within each of the 16 strata and one eligible person was selected from each household. For the enrollee sample, we selected all enrollees who were the sole enrollee in their households. In households with multiple enrollees, we randomly selected one enrollee to participate. For the nonenrollee sample, first we randomly selected a sample of households from all nonenrollee households and then randomly selected a single person from each of those households. We used a modified equal allocation approach, increasing the size of the nonenrollee sample in steps, bringing it successively closer to the sample size that would be obtained through proportional allocation. This modified approach produced the best balance in statistical terms between the gain from the equal allocation approach and the gain from the proportional allocation approach. If both an enrollee and a nonenrollee were selected from the same household, the nonenrollee was dropped from the sample and a different nonenrollee was selected. We adjusted the nonenrollee sample size to take account of expected nonresponse. Our final sample included 1,676 out of 2,507 enrollees and 3,971 out of 66,335 nonenrollees. Starting with an overall sample of 5,647 beneficiaries, we obtained usable questionnaires from 4,787 people—an overall response rate of 85 percent. (See table 5.) Response rates varied across sites, from 76 percent to 85 percent among nonenrollees, and from 92 percent to 98 percent among enrollees. (See table 6.) At each site, enrollees responded at higher rates than nonenrollees. Each of the 16 strata was weighted separately to reflect its population. The enrollee strata were given smaller sampling weights, reflecting enrollees’ higher response rates and the fact that they were sampled at a higher rate than nonenrollees. The weights were also adjusted to reflect the variation in response rates across sites. Finally, the sampling weights were further adjusted to reflect differences in response rates between male and female participants in 8 strata. In this appendix, we describe the data, methods, and models used to (1) analyze the factors explaining how beneficiaries knew about the demonstration and why they enrolled in it, (2) assess the health of beneficiaries and civilian FEHBP enrollees, and (3) obtain the premiums of Medigap insurance in the demonstration areas. Our approach to analyzing eligible beneficiaries’ behavior involved two steps: first, analyzing the factors related to whether eligible beneficiaries knew about the demonstration, and second, analyzing the factors related to whether those who knew about the demonstration decided to enroll. Knowledge about the demonstration. To account for differences in beneficiaries’ knowledge about the demonstration, we used individual-level variables as well as variables corresponding to individual sites. These individual-level variables fell into four categories: demographic and economic variables, such as age and income; health status; other sources of health coverage, such as having employer-sponsored health insurance; and military-related factors. 
The inclusion of site variables allowed the model to take account of differences across the different sites in beneficiaries’ knowledge about the demonstration. We analyzed the extent to which these variables influenced beneficiaries’ knowledge about the demonstration using a logistic regression—a standard statistical method of analyzing an either/or (binary) variable. This method yields an estimate of each factor’s effect, controlling for the effects of all other factors in the regression. In our analysis, either a retiree knew about the demonstration or did not. The logistic regression predicts the probability that a beneficiary knew about the demonstration, given information about the person’s traits—for example, over age 75, had employer-sponsored health insurance, and so on. The coefficient on each variable measures its effect on beneficiaries’ knowledge. These coefficients pertain to the entire demonstration population, not just those beneficiaries in our survey sample. To make the estimates generalizable to the entire eligible population, we applied sample weights to all observations. In view of the large difference in enrollment between the mainland sites and Puerto Rico, we tested whether the same set of coefficient estimates was appropriate for the mainland sites and the Puerto Rico site. Our results showed that the coefficient estimates for the mainland and for Puerto Rico were not significantly different (at the 5 percent level), so it was appropriate to estimate a single logistic regression model for all sites. Table 7 shows for each variable its estimated effect on knowledge, as measured by the variable’s coefficient and odds ratio. The odds ratio expresses how much more likely—or less likely—it is that a person with a particular characteristic knows about the demonstration, compared to a person without that characteristic. The odds ratio is based on the coefficient, which indicates each explanatory variable’s estimated effect on the dependent variable, holding other variables constant. For the mainland sites, retirees were more likely to know about the demonstration if they were male, were married, were officers, were covered by employer-sponsored health insurance, lived less than an hour from a military treatment facility (MTF), or belonged to military retiree organizations. Retirees were less likely to know about the demonstration if they were African American; were older than age 75; or lived in Camp Pendleton, California, Dallas, Texas, or Fort Knox, Kentucky. Decision to enroll in the demonstration. To account for a retiree’s decision to enroll or not to enroll, we considered four categories of individual-level variables similar to those in the “knowledge of the demonstration” regressions, and a site-level variable for Puerto Rico. We also introduced a set of health insurance factors pertaining to the area in which the retiree lived—the premium for a Medigap policy and the proportion of Medicare beneficiaries in a retiree’s county of residence enrolled in a Medicare+Choice plan. In our logistic regression analysis of enrollment, we included only those people who knew about the demonstration. Despite the large enrollment differences between the mainland sites and Puerto Rico, our statistical tests determined that the mainland sites and the Puerto Rico site could be combined into a single logistic regression of enrollment. We included a variable for persons in the Puerto Rico site. (See table 8.) 
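To illustrate the weighted logistic regression approach described above, the following sketch shows how such a model could be estimated and how coefficients translate into odds ratios. It is illustrative only: the data file (survey_respondents.csv), variable names (knows_demo, male, officer, and so on), and model specification are hypothetical stand-ins rather than the actual GAO-DOD-OPM survey data or model; the example uses the Python statsmodels library.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file: one row per survey respondent, with a binary
# outcome, explanatory variables, and a sampling weight.
df = pd.read_csv("survey_respondents.csv")

outcome = df["knows_demo"]  # 1 = respondent knew about the demonstration
predictors = sm.add_constant(
    df[["male", "married", "officer", "over_75",
        "employer_insurance", "near_mtf", "retiree_org"]])

# Weighted logistic regression: applying the sampling weights (here treated
# as frequency weights for illustration) makes the estimates generalizable
# to the eligible population rather than just the sample.
model = sm.GLM(outcome, predictors,
               family=sm.families.Binomial(),
               freq_weights=df["sample_weight"])
results = model.fit()

# Exponentiating each coefficient gives its odds ratio: how much more (or
# less) likely a person with that characteristic is to know about the
# demonstration, holding the other variables constant.
odds_ratios = np.exp(results.params)
print(pd.DataFrame({"coefficient": results.params, "odds ratio": odds_ratios}))
```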
We found that retirees were less likely to enroll in the demonstration if they were African American, enrolled in Medicare+Choice plans, had employer-sponsored health insurance, lived in areas with a high proportion of Medicare beneficiaries enrolled in a Medicare+Choice plan, lived in areas where Medigap was more expensive, or lived less than an hour from an MTF. Retirees who had higher incomes, were officers, were members of a military retiree organization, were enrolled in Medicare part B, lived in Puerto Rico, or were covered by a Medigap policy were more likely to enroll. We estimated what the demonstration’s enrollment rate would have been in 2000 if everyone eligible for the demonstration had known about it. For the 54 percent of retirees who did not know about the demonstration, we calculated their individual probabilities of enrollment, using their characteristics (such as age) and the coefficient estimates from the enrollment regression. Aggregating these individual estimated enrollment probabilities, we found that if all eligible retirees had known about the demonstration, enrollment in 2000 would have been 7.2 percent of eligible beneficiaries, compared with actual enrollment of 3.6 percent. To measure the health status of retired enrollees and nonenrollees, as well as of civilian FEHBP enrollees, we calculated scores for individuals using the Principal Inpatient Diagnostic Cost Group (PIP-DCG) method. This method—used by the Centers for Medicare & Medicaid Services (CMS) in adjusting Medicare+Choice payment rates—yielded a proxy for the healthiness of military and civilian retirees as of 1999, the year before the demonstration. The method relates individuals’ diagnoses to their annual Medicare expenditures. For example, a PIP-DCG score of 1.20 indicates that the individual is 20 percent more costly than the average Medicare beneficiary. In our analysis, we used Medicare claims and other administrative data from 1999 to calculate PIP-DCG scores for eligible military retirees and their counterparts in the civilian FEHBP in the demonstration sites. Using Medicare part A claims for 1999, we calculated PIP-DCG scores for Medicare beneficiaries who were eligible for the demonstration. We used a DOD database to identify enrollees as well as those who were eligible for the demonstration but did not enroll. We also calculated PIP-DCG scores based on 1999 Medicare claims for each Medicare-eligible person enrolled in the civilian FEHBP. We obtained from OPM data on enrollees in the civilian FEHBP and on the plans in which they were enrolled. We restricted our analysis to those Medicare-eligible civilian FEHBP enrollees who lived in a demonstration site. Results of PIP-DCG calculations. We compared the PIP-DCG scores of demonstration enrollees with those of eligible retirees who did not enroll. In every site, the average PIP-DCG score was significantly lower for demonstration enrollees than for those who did not enroll. We also compared the PIP-DCG scores of those enrolled in the demonstration with those enrolled in the civilian FEHBP: For every site, these scores were significantly lower for demonstration enrollees than for their counterparts in the civilian FEHBP. (See table 9.) We compiled data from Quotesmith Inc. to obtain a premium price for Medigap plan F in each of the counties in the eight demonstration sites. We collected the lowest premium quote for a Medigap plan F policy for each sex at 5-year intervals: ages 65, 70, 75, 80, 85, and over 89. 
A person age 65 to 69 was assigned the 65-year-old’s premium, a person age 70 to 74 was assigned the 70-year-old’s premium, and so on. Using these data, we assigned a Medigap plan F premium to each survey respondent age 65 and over, according to the person’s age, sex, and location. Tables 10, 11, and 12 show enrollment rates by site and for the U.S. mainland sites as a whole for each year of the demonstration, 2000 through 2002. The program for informing and educating eligible beneficiaries about the demonstration was modeled on OPM’s approach to informing eligible civilian beneficiaries about FEHBP. Elements of OPM’s approach include making available a comparison of FEHBP plans and holding health fairs sponsored by individual federal agencies. DOD expanded upon the OPM approach—for example, by sending postcards to inform eligible beneficiaries about the demonstration because they, unlike civilian federal employees and retirees, were unlikely to have any prior knowledge of FEHBP. In addition, DOD established a bilingual toll-free number. During the first year’s enrollment period, DOD adjusted its information and education effort, for example, by changing the education format from health fairs to town meetings designed specifically for demonstration beneficiaries. In the second year of the demonstration, DOD continued with its revised approach. In the third year, after TRICARE For Life (TFL) began, DOD significantly reduced its information program but continued to mail information to all eligible beneficiaries. It limited town meetings to Puerto Rico, the only site where enrollment remained significant during the third year. DOD sent a series of mailings to all eligible beneficiaries. These included a postcard announcing the demonstration, mailed in August 1999 (the returned postcards allowed DOD to identify incorrect mailing addresses and to target follow-up mailings to beneficiaries with correct addresses); an OPM-produced booklet, The 2000 Guide to Federal Employees Health Benefits Plans Participating in the DOD/FEHBP Demonstration Project, received by all eligible retirees from November 3 through 5, 1999, that contained information on participating FEHBP plans, including coverage and consumer satisfaction; a trifold brochure describing the demonstration, which was mailed on September 1 and 4, 1999; and a list of Frequently Asked Questions (FAQ) explaining how Medicare and FEHBP work together. At the time of our survey, after the first year’s information campaign, over half of eligible beneficiaries were unaware of the demonstration. Among those who knew about it, more recalled receiving the postcard than recalled receiving any of the later materials—although the FAQ was cited more often as being useful. (See table 13.) Initially, the health fairs that DOD sponsored for military bases’ civilian employees were its main effort—other than the mailings—to provide information about the demonstration to eligible beneficiaries. At these health fairs, plans set up tables at which their representatives distributed brochures and answered questions. At one site, the military base refused to allow the demonstration representatives to participate in its health fair because of concern about an influx of large numbers of demonstration beneficiaries. 
At another site, the turnout exceeded the capacity of the plan representatives to deal with questions, and DOD officials told us that they accommodated more people by giving another presentation at a different facility or at the same facility 1 month later. A DOD official discovered, however, that it was difficult to convey information about the demonstration to large numbers of individuals at the health fairs. DOD officials determined that the health fairs were not working well, so by January 2000, DOD replaced them with 2-hour briefings, which officials called town meetings. In these meetings, a DOD representative explained the demonstration during the first hour and then answered questions from the audience. A DOD official told us that these town meetings were more effective than the health fairs. For the first year of the demonstration, just under 6 percent of those eligible attended either a health fair or a town meeting. The number of eligible beneficiaries who reported attending these meetings varied considerably by site—from about 3 percent in New Orleans and Camp Pendleton to 4 percent in Fort Knox and 18 percent in Humboldt County. Roughly 11 percent of beneficiaries reported attending in Puerto Rico, the site with the highest enrollment. DOD also established a call center and a Web site to inform eligible beneficiaries about the demonstration. The call center, which was staffed by Spanish and English speakers, answered questions and sent out printed materials on request. In the GAO-DOD-OPM survey, about 18 percent of those who knew about the demonstration reported calling the center’s toll-free number. The proportion that called the toll-free number was much higher among subsequent enrollees (77 percent) than among nonenrollees who knew about the demonstration (13 percent). The Web site was another source of information about the demonstration. Although less than half of eligible beneficiaries knew about the demonstration, most of those who did know said they obtained their information from DOD’s mailings. Other important sources of information included military retiree and military family organizations and FEHBP plans. (See table 14.) Nearly all enrollees (93 percent) and more than half of nonenrollees who said they considered enrolling in an FEHBP health plan (55 percent) reported that they had enough information about specific plans to make an informed decision about enrolling in one of them. More than three-fifths of these beneficiaries who enrolled or considered enrolling in an FEHBP plan said they used The 2000 Guide to FEHBP Plans Participating in the DOD/FEHBP Demonstration Project as a source of information. Other major sources of information were the plans’ brochures and DOD’s health fairs and town meetings. More than 18 percent of those who considered joining did not obtain information about any specific plan. (See table 15.) Table 16 shows reasons cited by enrollees for enrolling in a DOD-FEHBP health plan in 2000, and table 17 shows reasons cited by nonenrollees for not enrolling. Major contributors to this work were Michael Kendix, Robin Burke, Jessica Farb, Martha Kelly, Dae Park, and Michael Rose. Defense Health Care: Oversight of the Adequacy of TRICARE’s Civilian Provider Network Has Weaknesses. GAO-03-592T. Washington, D.C.: March 27, 2003. Federal Employees’ Health Benefits: Effects of Using Pharmacy Benefit Managers on Health Plans, Enrollees, and Pharmacies. GAO-03-196. Washington, D.C.: January 10, 2003. 
Federal Employees’ Health Plans: Premium Growth and OPM’s Role in Negotiating Benefits. GAO-03-236. Washington, D.C.: December 31, 2002. Medicare+Choice: Selected Program Requirements and Other Entities’ Standards for HMOs. GAO-03-180. Washington, D.C.: October 31, 2002. Medigap: Current Policies Contain Coverage Gaps, Undermine Cost Control Incentives. GAO-02-533T. Washington, D.C.: March 14, 2002. Medicare Subvention Demonstration: Pilot Satisfies Enrollees, Raises Cost and Management Issues for DOD Health Care. GAO-02-284. Washington, D.C.: February 11, 2002. Retiree Health Insurance: Gaps in Coverage and Availability. GAO-02-178T. Washington, D.C.: November 1, 2001. Medigap Insurance: Plans Are Widely Available but Have Limited Benefits and May Have High Costs. GAO-01-941. Washington, D.C.: July 31, 2001. Health Insurance: Proposals for Expanding Private and Public Coverage. GAO-01-481T. Washington, D.C.: March 15, 2001. Defense Health Care: Pharmacy Copayments. GAO/HEHS-99-134R. Washington, D.C.: June 8, 1999. Federal Health Programs: Comparison of Medicare, the Federal Employees Health Benefits Program, Medicaid, Veterans’ Health Services, Department of Defense Health Services, and Indian Health Services. GAO/HEHS-98-231R. Washington, D.C.: August 7, 1998. Defense Health Care: Offering Federal Employees Health Benefits Program to DOD Beneficiaries. GAO/HEHS-98-68. Washington, D.C.: March 23, 1998.
Prior to 2001, military retirees who turned age 65 and became eligible for Medicare lost most of their Department of Defense (DOD) health benefits. The DOD-Federal Employees Health Benefits Program (FEHBP) demonstration was one of several demonstrations established to examine alternatives for addressing retirees' lack of Medicare supplemental coverage. The demonstration was mandated by the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 (NDAA 1999), which also required GAO to evaluate the demonstration. GAO assessed enrollment in the demonstration and the premiums set by demonstration plans. To do this, GAO, in collaboration with the Office of Personnel Management (OPM) and DOD, conducted a survey of enrollees and eligible nonenrollees. GAO also examined DOD enrollment data, Medicare and OPM claims data, and OPM premiums data. Enrollment in the DOD-FEHBP demonstration was low, peaking at 5.5 percent of eligible beneficiaries in 2001 (7,521 enrollees) and then falling to 3.2 percent in 2002, after the introduction of comprehensive health coverage for all Medicare-eligible military retirees. Enrollment was considerably greater in Puerto Rico, where it reached 30 percent in 2002. Most retirees who knew about the demonstration and did not enroll said they were satisfied with their current coverage, which had better benefits and lower costs than the coverage they could obtain from FEHBP. Some of these retirees cited, for example, not being able to continue getting prescriptions filled at military treatment facilities if they enrolled in the demonstration. For those who enrolled, the factors that encouraged them to do so included the view that FEHBP offered retirees better benefits, particularly prescription drugs, than were available from their current coverage, as well as the lack of any existing coverage. Monthly premiums charged to enrollees for individual policies in the demonstration varied widely--from $65 to $208 in 2000--with those plans that had lower premiums and were better known to eligible beneficiaries, capturing the most enrollees. In setting premiums initially, plans had little information about the health and probable cost of care for eligible beneficiaries. Demonstration enrollees proved to have lower average health care costs than either their counterparts in the civilian FEHBP or those eligible for the demonstration who did not enroll. Plans enrolled similar proportions of beneficiaries in poor health, regardless of whether they charged higher, lower, or the same premiums for the demonstration as for the civilian FEHBP. In commenting on a draft of the report, DOD concurred with the overall findings but disagreed with the description of the demonstration's impact on DOD's budget as small. As noted in the draft report, DOD's costs for the demonstration relative to its total health care budget were less than 0.1 percent of that budget. OPM declined to comment.
Congress established the MHPI in 1996 to provide an alternative funding mechanism to ensure adequate military family housing was available when needed by renovating existing inadequate housing and constructing new homes on and around military bases. The Department of the Army currently has 34 MHPI projects at 44 installations in the United States. Since these projects began, the Army has invested $1.97 billion and the private sector has invested $12.6 billion in the initial development of the military housing projects. In a typical privatized military housing project, the developer is a limited liability company or partnership that has been formed for the purpose of acquiring debt, leasing land, and building and managing a specific project or projects. The limited liability company is typically composed of one or several private-sector members, such as construction firms, real-estate managers, or other entities with expertise in housing construction and renovation. In those cases where a military department has made an investment in the limited liability company, the department may also be a member of the limited liability company. In a typical privatized military housing project, a military department leases land to a developer for a term of 50 years. The military department generally conveys existing homes on the leased land to the developer for the duration of the lease. The developer is responsible for constructing new homes or renovating existing houses and then leasing this housing, giving preference to service members and their families. Although the developers enter into these agreements to construct or renovate military housing, the developer normally enters into various contracts with design builders and subcontractors to carry out the actual construction and renovation. The developer also typically hires a property-management firm to oversee the day-to-day operations of the MHPI project, such as ensuring that maintenance is provided to houses in accordance with the approved budget. According to Army officials, the only litigation to date that has caused the expenditure of funds not accounted for during the MHPI’s annual budget process for operating costs is litigation involving Clark Realty Capital (Clark) and Pinnacle Property Management (Pinnacle). Clark Pinnacle Family Communities oversees some of the highest-profile installations in the Army’s MHPI program. The company is a joint venture between Clark, based in the Washington, D.C., area, and Pinnacle, based in Seattle. Starting in 2002, in collaboration with the Army, Clark Pinnacle led the development of four projects in six locations totaling more than 11,000 homes at a value of about $2 billion. The four projects are Presidio of Monterey and Naval Postgraduate School, California; Fort Irwin, Moffett Federal Airfield, and Parks Reserve Forces Training Area, California; Fort Benning, Georgia; and Fort Belvoir, Virginia. Although the agreements at the projects vary, generally Clark is the managing partner of the MHPI entities and handled the construction and development. Pinnacle was the property-management firm actually conducting day-to-day property-management activities (e.g., maintenance) at the projects once they were completed. 
According to Army officials, Clark, through a series of internal audits, determined in 2010 that Pinnacle allegedly was involved in substantial and systemic fraud in the management of the privatized housing at Fort Benning, and ultimately found similar alleged fraud in the management of the privatized housing at Fort Belvoir. As a result, Clark initiated audits of the two California MHPI projects managed by Pinnacle and began to uncover alleged circumstances similar to those at Fort Benning and Fort Belvoir. In 2010, Clark asked the Army for permission to remove Pinnacle as property manager at Fort Benning and Fort Belvoir because of alleged willful misconduct by Pinnacle employees, and for approval to initiate related litigation on behalf of the MHPI entities against Pinnacle; the Army agreed. Subsequent to the initiation of the Fort Belvoir and Fort Benning litigation, Pinnacle attempted to unilaterally amend the terms of the California property-management agreements to make it harder to remove them as property manager. In response, the California MHPI entities then brought suit in California court against Pinnacle seeking a declaratory judgment that the agreements had not been effectively amended by Pinnacle. Pinnacle then filed a cross-suit seeking to uphold the amendments. According to Army officials, the Army wanted Pinnacle removed as the MHPI projects’ property manager due to the alleged fraud and mismanagement. Additionally, Army officials stated that they have been motivated by concerns for resident safety because it has been alleged that Pinnacle engaged in falsifying records regarding maintenance and repairs that Pinnacle employees were responsible for performing at all four project locations. An Army official stated that Pinnacle has attempted to obtain information on the amount of funds the MHPI projects have spent on litigation and other litigation strategy-type information, such as documents provided to the Army by Clark, both through discovery and through a Freedom of Information Act request, which the Army (with Department of Justice assistance) successfully denied. According to Army officials, the relevant property-management agreements include a provision that a party who sues under the agreement and substantially prevails is entitled to recoup their legal fees from the losing party. Further, our review found that the MHPI projects’ property-management agreements include a provision allowing the substantially prevailing party in litigation brought to enforce or interpret the agreements to be repaid for all court costs and for the reasonable fees and expenses of attorneys and certified public accountants. Because legal fees are potentially recoverable, they are material both to the litigation and to any potential settlement negotiations. In June 2010, Pinnacle was removed as property manager at Fort Benning, and in December 2012 Pinnacle was removed from management at Fort Belvoir. Pinnacle remains property manager at the two California projects pending resolution of the litigation described above. The Army has a standard process to manage MHPI projects’ funds for the costs of litigation not accounted for in the MHPI projects’ annual budget process, but instead used an alternative process designed to limit access to information about the Pinnacle litigation. The alternative process is consistent with the relevant MHPI projects’ operating agreements. 
The standard process has thresholds governing potential withdrawals or expenditures for Army MHPI project litigation expenses. In the standard process, Army officials generally make major decisions related to MHPI projects, including litigation costs, by following guidance in the Residential Communities Initiative Portfolio and Asset Management Handbook. For example, the Army treats litigation not accounted for in the budget process as a major decision requiring higher-level approval within the Army when costs exceed either 5 to 10 percent of the annual budget or $250,000 over budget. In the standard process, Army officials generally seek approval of such major decisions from either the Office of the Assistant Chief of Staff for Installation Management or the Office of the Assistant Secretary of the Army (Installations, Energy and Environment). The process involves sharing litigation information and estimated costs between the developer and four offices within the Army (MHPI Project Office, Garrison Commander, Office of the Assistant Chief of Staff for Installation Management, and the Office of the Assistant Secretary of the Army). However, according to Army officials, the standard process has not yet been used to approve any major decisions regarding litigation expenses: the Pinnacle cases are the only cases whose litigation expenses have met the major-decision threshold criteria and been approved, and those cases would have gone through the standard process had the Army not decided to restrict access to information pertaining to the litigation. Although the Pinnacle cases met the major-decision threshold criteria, Army officials decided to use an alternative management process to review and approve litigation costs so they could restrict information and confine decision making to a higher organizational level. This process is consistent with the MHPI projects’ operating agreements for managing these projects and allows Clark and only one Army office to review the associated cost information. Specifically, these agreements do not specify any internal deliberative process within the Army, but rather only require that Army agreement be obtained for certain major decisions. As a result, the Deputy Assistant Secretary of the Army (Installations, Housing & Partnerships), acting on behalf of the Army, can directly approve specific actions proposed by Clark senior leadership on behalf of the MHPI project, such as approving the litigation and audit budget and expenses. Additionally, Army officials stated that while the standard process was not followed, the alternative process did allow for information regarding the Pinnacle litigation to be periodically coordinated with high-level officials within the Office of the Assistant Chief of Staff for Installation Management. Army officials stated that they wanted to restrict access to the litigation and audit cost estimates because legal fees are potentially recoverable and as a result are material both to the litigation and to any potential settlement negotiations. According to Army officials, throughout the litigation process, Army and Clark officials have regularly shared litigation documents and met to discuss the Pinnacle litigation. After the approval of the MHPI projects’ annual operating budgets, the Deputy Assistant Secretary of the Army (Installations, Housing & Partnerships) and counsel in the Office of the Army General Counsel reviewed Clark’s proposed budget for Pinnacle litigation and audit expenses for that year. 
Further, the Army and Clark met approximately quarterly with counsel representing the four MHPI projects in the Pinnacle litigation to discuss any significant developments in the cases, specific plans for the next quarter, and general plans for the rest of the year—including any anticipated changes in the legal and audit expenses previously budgeted for. Army officials stated that they also plan to conduct a full review of the costs at the end of the litigation to ensure that all charges by outside counsel were fair and reasonable. According to Army officials and our analysis of the project-management accounts for the four locations involved in the Pinnacle litigation, the expenditure of funds to pay litigation and audit expenses has not prevented the projects from meeting normal operating requirements, such as conducting maintenance or paying for utilities, from the time the litigation began in 2010. Within each MHPI project, the Army receives revenue and distributes the cash flow in a specified order to accounts, such as the revenue account; operating-expenses account; capital, repair, and replacement account; debt-service account; and construction and reinvestment accounts. Figure 1 shows the flow of funding within the Army MHPI projects. Revenue account: The revenue account is funded by servicemember rent, which is typically based on the Basic Allowance for Housing allotments received. This funding is typically disbursed on a monthly basis to pay the budgeted amounts for the operating expense account; capital, repair, and replacement account; and debt-service account. According to Army officials, Pinnacle litigation and audit expenses were also paid from revenues that flowed into the MHPI projects. Operating-expenses account: Each Army MHPI project has an account to pay for all operating expenses, including maintenance, utilities, and other administrative costs. According to Army officials, they assist in the development of and approve each MHPI project’s annual budget for operating expenses. MHPI project asset managers for the four projects connected to the Pinnacle litigation stated that their projects have not had to reduce their operating expenses during the Pinnacle litigation. Furthermore, MHPI project asset managers stated that any increases or decreases in budgeted operating expenses from year to year were due to fluctuations in housing occupancy and changes in utility and maintenance costs and not litigation expenses. Table 1 provides a summary of the four MHPI projects’ budgeted operating expenses from calendar years 2009 through 2013. Although Pinnacle litigation and audit expenses were not incurred until 2010, this table shows budgeted operating expenses for 2009 to provide a comparison of expenses prior to the start of litigation. Capital, repair, and replacement account: This account includes funds for repair and replacement of older components of homes and community facilities. The Army requested an audit of the projects’ financial data from January 2009 through June 2012, and the audit results showed that no maintenance was deferred during this period. Debt-service account: This account is used to pay the outstanding debt for the MHPI project. Based on our review of MHPI project account data, we found that all four MHPI projects have little or no balance in this account because debt is paid off throughout the year. 
Construction account and Reinvestment account: Construction Account—Before the start of an MHPI project, a plan is developed for construction, and needed funding levels are determined. This plan is reviewed annually based on actual and estimated costs to determine if any changes are needed to the development scope of the project. This account is used to pay for the initial development and construction of the MHPI project, which according to Army officials generally lasts for the first 7 to 10 years of project operations. As discussed earlier, the revenue account funds the budgeted amounts for the operating expense account; the capital, repair, and replacement account; and the debt-service account, and any funding not needed for these purposes flows to the construction account. However, because litigation expenses were also paid from the revenue account, officials stated that additional funding has not been transferred into the construction account as would otherwise likely have occurred. Nevertheless, Army officials said that the Pinnacle litigation and audit costs have had no effects on the projects’ ability to move forward with construction as planned because these projects were developed within anticipated funding levels. According to Army officials, currently all four of the projects are nearing the end or have recently completed the initial development period, and after the development and approval of a 5-year future plan, the construction account will be closed. Reinvestment Account—According to Army officials, any funds remaining in the construction accounts when the projects reach the end of their initial development and construction phase are moved to the reinvestment account. This account is also used to hold the MHPI projects’ excess cash flow that is not required after payment of the operating expenses, debt service, and other payments. Funds start to accumulate in the reinvestment account for future use in renovation or replacement of homes after the initial development and construction of the project ends. Since all four of the projects are still in or have recently completed this initial development phase—and based on our review of MHPI project account data—no funds have accumulated in the reinvestment accounts as of February 2014. Because the MHPI projects have incurred litigation and audit expenses, less funding will ultimately be available to transfer from the MHPI projects’ construction accounts to the reinvestment accounts unless the litigation concludes prior to the transition and, assuming the projects prevail in the litigation with Pinnacle, the funds are recouped. The MHPI projects’ property-management agreements provide that the party that substantially prevails in a legal action may recoup its legal expenses. Army officials stated that they expect the MHPI projects to prevail in the litigation and recoup most, or even all, of the costs of conducting the litigation. This report does not include any recommendations. We provided a draft of this report to DOD for comment. DOD did not provide written comments but did provide technical comments, which we incorporated in our report as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-4523 or LeporeB@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Laura Durland (Assistant Director), Chaneé Gaskin, Stephanie Moriarty, Carol Petersen, Richard Powelson, Amie Steele, John Van Schaik, and Michael Willems made key contributions to this report.
In 1996, Congress enacted the MHPI, which provided the Department of Defense with a variety of authorities that may be used to obtain private-sector financing and management to repair, renovate, construct, and operate military family housing. The Army has invested $1.97 billion and the private sector has invested $12.6 billion in the initial development of MHPI projects at 44 installations. The Senate report accompanying a proposed version of the National Defense Authorization Act for Fiscal Year 2014 mandated GAO to examine the Army’s litigation costs related to MHPI, specifically any litigation costs not accounted for during the MHPI’s annual budget process. This report examines the extent to which the Army has implemented its process to manage funds for litigation not accounted for in the budget and identifies any effects that the litigation and audit costs have had on managing the MHPI projects. To conduct its work, GAO examined the Army’s process for managing litigation, interviewed Army officials, and analyzed documents to determine whether litigation and audit costs have had any effects on managing the MHPI projects. GAO is not making recommendations in this report. DOD provided technical comments on a draft of this report, which were incorporated as appropriate. The Army has a standard process to manage litigation costs of its Military Housing Privatization Initiative (MHPI) projects that are not accounted for in the annual budget process. Army officials indicated that there is one case between four Army MHPI projects and Pinnacle Property Management (Pinnacle) that met the dollar threshold criteria and that would have been approved through this process. However, Army officials did not use the standard process because the Army determined that it needed to limit access to Pinnacle litigation information to avoid disclosing any information material to the litigation strategy. As a result, the Army used an alternative process to review and approve litigation costs for Pinnacle that is consistent with MHPI operating agreements. Had the standard process been followed, litigation and litigation cost information would have been shared with the MHPI projects construction company, Clark Realty Capital (Clark), and four different offices within the Army. Army and Clark officials decided to use the alternative process allowed by the MHPI’s operating agreements so that fewer personnel would be aware of ongoing litigation information involving Pinnacle. The alternative process allows the Army and Clark to directly approve specific actions on behalf of the MHPI project, such as approving litigation and audit expenses, and allows sharing information with only Clark and one Army office. According to Army officials and our analysis of these four MHPI projects’ accounts, Pinnacle litigation expenses have not prevented the projects from meeting their normal operating requirements, such as conducting maintenance or paying for utilities. Rents collected from these four MHPI projects funded the normal operating requirements for these projects as well as the Pinnacle litigation and audit expenses. Rents collected in excess of operating expenses normally are available for other purposes such as construction; capital, repair, and replacement of buildings; and future reinvestment. However, because litigation expenses were also paid from the rents collected at the four MHPI projects involved in the litigation, some funds have not been available for these purposes. 
Nevertheless, Army officials said that the Pinnacle litigation and audit costs have had no effects on the four projects’ ability to move forward with construction as planned so far or to meet any scheduled capital repair projects because these projects were developed within anticipated funding levels. The Army property-management agreements provide that the party that substantially prevails in a legal action may recoup their legal expenses. Army officials stated that they expect the MHPI projects to prevail in the litigation and recoup most, or even all, the costs of conducting the litigation.
The Communications Act of 1934 sets forth the nation’s telecommunications policy, including making communication services available “so far as possible, to all the people of the United States.” Early efforts by FCC, state regulators, and industry to promote universal service generally began in the 1950s. At that time, increasing amounts of the costs associated with providing local telephone service were recovered from rates for long-distance services. This had the effect of lowering local telephone rates and raising long-distance rates, which was intended to make local telephone service more affordable. Because American Telephone and Telegraph Company (AT&T) provided both nationwide long-distance service and local telephone service to approximately 80 percent of the nation’s telephone subscribers, universal service was largely promoted by shifting costs between different customers and services. Following the divestiture of AT&T’s local telephone companies in 1984, FCC made several changes to universal service policy. First, the costs associated with local telephone service could no longer be shifted internally within AT&T. FCC therefore implemented access charges—fees that long-distance companies pay to originate and terminate long-distance telephone calls over the local telephone network. Access charges were intended not only to recover the cost of originating and terminating long-distance telephone calls over the local telephone networks, but also to subsidize local telephone service. Second, FCC initiated several federal efforts that targeted support to low-income customers to bring the rates for basic telephone service within their reach. At this time, federal universal service was for the most part funded through charges imposed on long-distance companies. The Congress made significant changes to universal service policy through the 1996 Act. First, the 1996 Act provided explicit statutory support for federal universal service policy. Second, the 1996 Act extended the scope of federal universal service—beyond the traditional focus on low-income consumers and consumers in rural and high-cost areas—to include eligible schools, libraries, and rural health care providers. Third, the 1996 Act altered the federal mechanism for funding universal service. Every telecommunications carrier providing interstate telecommunications services, as well as certain other entities, was required to contribute to federal universal service unless exempted by FCC, and these contributions were to be equitable, nondiscriminatory, and explicit. Contributions are deposited into the federal Universal Service Fund (USF), from which disbursements are made for the various federal universal service programs. Fourth, the 1996 Act established a Federal-State Joint Board on Universal Service (Joint Board). This Joint Board, which is composed of three FCC commissioners, four state regulatory commissioners, and a consumer advocate, makes recommendations to FCC on implementing the universal service-related provisions of the 1996 Act. The USF provides support through four different programs, each targeting a particular group of telecommunications users (see table 1). In 2007, support for the four USF programs totaled $7 billion. Among the four programs, the high-cost program accounted for the largest amount of support—$4.3 billion or 62 percent of USF support. 
The high-cost program provides financial support to carriers operating in high-cost—generally rural—areas in order to offset their costs, thereby allowing these carriers to provide rates and services that are comparable to the rates and services that customers in low-cost—generally urban—areas receive. Both federal and state governments play a role in implementing the federal high-cost program. FCC has overall responsibility for the federal high-cost program, including making and interpreting policy, overseeing the operations of the program, and ensuring compliance with its rules. However, FCC delegated to USAC responsibility to administer the day-to-day operations of the high-cost program. USAC is a not-for-profit corporation and a subsidiary of NECA, although NECA does not participate in the management of USAC. NECA, a not-for-profit association of local telephone carriers and the primary administrator of FCC’s access charge plan, collects cost and line count data from its members and validates this information. At the state level, state regulatory commissions hold the primary responsibility to determine carrier eligibility for participation in the program and to annually certify that carriers will appropriately use high-cost program support. Table 2 summarizes the general roles and responsibilities of the agencies and organizations involved in high-cost program administration. To be eligible to receive high-cost program support, a carrier must be designated as an eligible telecommunications carrier (ETC). Section 214(e)(1) of the 1996 Act requires that to be designated an ETC, the carrier must (1) offer the services that FCC identified as eligible for universal service support throughout the service area for which the designation is received, (2) advertise the availability of those services, and (3) use at least some of its own facilities to deliver those services. There are two types of carriers. Incumbents. When the Congress passed the 1996 Act, existing telephone carriers that were members of NECA were designated as incumbent carriers for their service areas. These incumbents subsequently received ETC status. These incumbents are further classified as either “rural”—generally small carriers serving primarily rural areas—or “nonrural”—generally large carriers serving both rural and urban areas. Competitors. Carriers competing against incumbents—both wireline and wireless—also are eligible to receive high-cost program support. Just like incumbents, these companies must apply for eligibility and receive ETC status before they can receive support; these carriers are referred to as competitive eligible telecommunications carriers (CETC). Competitors can provide service without receiving CETC status or high-cost program support. Carriers that receive high-cost program support may use this support only for the “provision, maintenance, and upgrading of facilities and services for which the support is intended.” The high-cost program consists of five components, each with different eligibility criteria and different methods to determine the level of support. Four of the components provide support to carriers to offset the costs of the network, including local loops (primarily, the equipment that runs from the carrier’s facilities to the customer’s premises). The four components are high-cost loop, high-cost model, interstate access, and interstate common line. 
The fifth component, local switching support, provides support for very small carriers to offset the cost of their switching equipment. For incumbent carriers, eligibility for the components depends on the carrier’s size (as reflected in its classification as rural or nonrural) and the type of regulation the carrier is subject to—either rate-of-return or price-cap regulation. In 2007, USAC reported that there were 1,250 rural carriers subject to rate-of-return regulation, 105 rural carriers subject to price-cap regulation, 5 nonrural carriers subject to rate-of-return regulation, and 81 nonrural carriers subject to price-cap regulation. For the components for which they qualify, rural carriers receive support based on the costs they have incurred, whereas nonrural carriers receive part of their support based on costs projected using an FCC model. Table 3 summarizes each of the five high-cost program components, which carriers are eligible for each, the qualification criteria for each component, and the amount of support provided in 2007. For example, a rural carrier with fewer than 50,000 customers and subject to rate-of-return regulation could receive support through the high-cost loop, local switching, and interstate common line components. Unlike incumbents, competitors do not directly receive funds based on their costs or FCC’s model. Rather, once a competitor receives CETC status, it qualifies for the identical per-line level of support that the incumbent receives for the area it serves; this is known as the identical support rule. Since its inception in 1998, high-cost program support has increased nearly 153 percent, from $1.7 billion in 1998 to about $4.3 billion in 2007. This significant growth has raised concerns about the program’s long-term sustainability, efficiency, and effectiveness, as well as the adequacy of the oversight of carriers’ need for and use of support. Figure 1 illustrates the growth in the high-cost program, including both incumbents and competitors. Several factors have contributed to the growth in the high-cost program. In the early years of the program, support grew as FCC reduced access charges (a form of implicit support for carriers) to incumbents and offset those reductions with greater high-cost program support (a form of explicit subsidy). However, in recent years, the high-cost program has grown because of support provided to competitors, especially wireless companies. In response to concerns about the long-term sustainability of the high-cost program, FCC issued several Notices of Proposed Rulemaking (NPRM) in January 2008, seeking comment on proposals for comprehensive reform of the program. These NPRMs represent the culmination of efforts by FCC and the Joint Board to reform the program. FCC released an NPRM seeking comment on a Joint Board recommendation that the high-cost program be divided into three separate funds: broadband service, mobility (or wireless) service, and traditional provider of last resort service. FCC also released an NPRM seeking comment on changing the current funding mechanism for CETCs, namely eliminating the identical support rule and requiring CETCs to submit cost data. Finally, FCC released an NPRM that sought comment on implementing reverse auctions to determine the amount of high-cost program support to be given to an ETC; with reverse auctions, support generally would be determined by the lowest bid to serve the auctioned area. 
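As a simplified illustration of how the identical support rule and the proposed reverse auctions would determine support differently, consider the following sketch. The carrier names, line counts, support amounts, and bids are hypothetical and are not drawn from program data; the calculation is a simplification of the actual support mechanisms.

```python
# Hypothetical figures for one study area.
incumbent_annual_support = 1_200_000   # total high-cost support to the incumbent
incumbent_lines = 10_000               # lines the incumbent serves in the area
cetc_lines = 3_000                     # lines served by a competitive ETC (CETC)

# Identical support rule: the CETC receives the same per-line support that
# the incumbent receives in the area, regardless of the CETC's own costs.
per_line_support = incumbent_annual_support / incumbent_lines
cetc_support = per_line_support * cetc_lines
print(f"CETC support under the identical support rule: ${cetc_support:,.0f}")

# Reverse auction, as proposed in the NPRM: support generally would be set
# by the lowest bid to serve the auctioned area.
bids = {"Carrier A": 950_000, "Carrier B": 1_100_000, "Carrier C": 880_000}
winner = min(bids, key=bids.get)
print(f"Reverse-auction winner: {winner}, support: ${bids[winner]:,.0f}")
```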
In this report, however, we will not assess or discuss the merits of the reform proposals; instead, we will review the high-cost program in its current state and discuss best practices that are critical for the future of the fund, regardless of which reform efforts are adopted, if any. In addition to these three proposals, on May 1, 2008, FCC released an order adopting an interim cap on high-cost program support for CETCs. FCC adopted the interim cap to stem the growth of the program while it considers these comprehensive reform proposals. Under this order, total annual support for CETCs will be capped at the level of support that they were eligible to receive in each state during March 2008. The high-cost program provides support to eligible carriers in all states, with higher levels of support going to more rural states. However, the high-cost program does not provide support consistently to carriers operating in similar locations, which can lead to different levels of telecommunications service across rural areas. In general, rural carriers receive more support than nonrural carriers. The high-cost program provides support for the provision of basic telephone service and, to a great extent, access to this service is available and widely subscribed to throughout much of the country. But the high-cost program also indirectly supports broadband service in some rural areas, particularly those areas served by rural carriers. Finally, the high-cost program supports competitive carriers, and support for these carriers has increased greatly in recent years. In 2007, carriers in all states received some form of high-cost program support, with higher levels of support going to more rural states. Generally, carriers operating in states with below-average population densities received more support than those in more densely populated states. For example, the five states in which carriers received the greatest amount of support in 2007 were Mississippi ($283 million), Texas ($246 million), Kansas ($222 million), Louisiana ($163 million), and Alaska ($161 million). While the average national population density is 190.1 people per square mile, these states have lower-than-average population densities, ranging from 1.2 people per square mile in Alaska to 98.4 people per square mile in Louisiana. Conversely, the five states that received the least support in 2007 tend to have higher-than-average population densities, including Rhode Island ($31,000), Delaware ($245,000), Connecticut ($1.3 million), New Jersey ($1.7 million), and Massachusetts ($2.3 million); these states have population densities ranging from 436.9 people per square mile in Delaware to 1,176.2 people per square mile in New Jersey. Thus, at a broad, national level, high-cost program support flows to more rural states. However, the high-cost program does not provide support consistently to carriers operating in similar rural locations. To a large extent, this situation arises because of the program’s structure: the five different high-cost program components, each with different eligibility criteria and methods to determine support. As mentioned earlier, rural carriers typically receive high-cost program support through components that base support on the carrier’s incurred costs. Thus, in the case of a rural carrier, the higher its actual costs, the more funding it receives. Because of this, the funding that a rural carrier receives depends on how much money it chooses to spend on its network. 
Additionally, the disparity between rural and nonrural carriers is even greater, as support to one carrier can be significantly more generous than support provided to another carrier for serving comparable areas. As mentioned earlier, nonrural carriers typically receive part of their high-cost program support (high-cost model) through a component that utilizes an FCC cost model; this model assumes the most efficient carrier is providing service to existing customers. Since this model is not based on an individual carrier’s actual costs, investment in new network infrastructure will not lead to greater high-cost program support. Further, the threshold to receive support is higher for nonrural carriers; a rural carrier’s costs must exceed 115 percent of the national average, whereas for nonrural carriers the statewide average cost must exceed approximately 131 percent of the national average. As such, nonrural carriers in 10 states currently are eligible for this funding, yet nonrural carriers in other states serve high-cost locations as well. Overall, rural carriers receive more funding than nonrural carriers; in 2007, rural carriers received $1.7 billion more in high-cost program support than nonrural carriers did. In November 2007, the Joint Board itself recognized this situation, noting that “support for customers served by one kind of carrier can be significantly more generous than for comparably situated customers served by the other kind of carrier.” We found similar results in our site-visit states. In 9 of the 11 states we visited, rural carriers received more funding than nonrural carriers. While rural carriers generally serve only rural areas, nonrural carriers also can serve large swaths of rural areas. In fact, carriers providing service in similar, even adjacent, areas can receive vastly different levels of high-cost program support. For example, as shown in figure 2, in Wisconsin, there are two nonrural carriers (areas in dark shading) that provide service to rural areas, yet these carriers do not qualify to receive the same types of support as rural carriers serving comparable adjacent areas. Similarly, in Oregon, 25 percent of the lines served by a large, national carrier are located in rural areas of the state; this carrier does not qualify to receive the same type of support for these lines that rural carriers in the same area do. The high-cost program directly and indirectly supports several types of service, including (1) basic telephone service, (2) broadband service, and (3) wireless telephone service. Currently, the high-cost program provides support for the provision of basic telephone service. In the 1996 Act, the Congress stated as one of the principles underlying universal service that people in rural, insular, and high-cost areas should have access to telecommunications and information services that are reasonably comparable to those provided in urban areas and at comparable rates. However, the Congress did not define universal service or specify a list of services to be supported by the program. Instead, the 1996 Act recognized universal service as an evolving level of telecommunications services and directed FCC, after recommendations from the Joint Board, to establish a definition of the services to be supported by the program. In 1997, FCC adopted a set of communications services and “functionalities” for rural, insular, and high-cost areas that were to be supported by the high-cost program. 
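The cost thresholds described earlier in this section are one source of the disparity between rural and nonrural carriers serving comparable areas: a rural carrier qualifies once its own loop costs exceed 115 percent of the national average, while a nonrural carrier qualifies only if its statewide average cost exceeds roughly 131 percent of the national average. The sketch below, in Python, illustrates this comparison; the national average and per-line cost figures are hypothetical.

    # Minimal sketch of the high-cost eligibility thresholds described above.
    # The national average and carrier cost figures are hypothetical.

    NATIONAL_AVG_COST_PER_LINE = 240.0   # hypothetical national average loop cost

    RURAL_THRESHOLD = 1.15      # rural carrier: own costs above 115% of the national average
    NONRURAL_THRESHOLD = 1.31   # nonrural carrier: statewide average above roughly 131%

    def qualifies(cost_per_line, threshold):
        """Return True if the relevant cost exceeds the benchmark multiple."""
        return cost_per_line > threshold * NATIONAL_AVG_COST_PER_LINE

    # A hypothetical rural carrier's own cost and a nonrural carrier's statewide
    # average cost, both 125 percent of the national average.
    rural_carrier_cost = 300.0
    nonrural_statewide_avg_cost = 300.0

    print("Rural carrier qualifies:   ", qualifies(rural_carrier_cost, RURAL_THRESHOLD))
    print("Nonrural carrier qualifies:", qualifies(nonrural_statewide_avg_cost, NONRURAL_THRESHOLD))

In this hypothetical case, two carriers with identical per-line costs face different outcomes solely because different benchmarks apply to them.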
To a great extent, access to basic telephone service is available and widely subscribed to throughout much of the country. One widely used measure of telephone subscribership is the penetration rate, which is based on survey data collected by the U.S. Census Bureau to estimate the percentage of U.S. households with telephone service. In 2007, the overall penetration rate in the United States was 95 percent, representing an increase of 0.9 percentage points since the inception of the high-cost program in 1998. While the penetration rate has increased marginally, there are several factors that could contribute to this result in addition to the high-cost program, including changes in income levels, greater diffusion of communications technology, and state-level programs. Appendix II shows penetration rates by state and changes in the percentage of households with telephone service from November 1983 through July 2007. Most of the rural carriers with whom we spoke agreed that support from the high-cost program has been an important source of their operating revenue. One recent study estimated that 30 percent of rural carriers’ annual operating revenues are derived from federal and state universal service programs, including the high-cost program. Our site visit interviews similarly suggest that rural carriers depend on high-cost program support to provide customers with access to affordable telephone rates. For example, one carrier with whom we spoke received nearly 80 percent of its annual operating revenues from the high-cost program. Moreover, many of the rural carriers we met with told us that they would be unable to provide the same range or quality of service to their customers without support from the high-cost program. Most of these carriers, which serve very remote and sparsely populated rural areas where it is very costly to provide telecommunications services, stated that without the support, they would likely need to increase the rates they charge their customers for basic services. Although there have been a number of proposals to revise the list of services supported by the high-cost program over the past decade, FCC has not taken action to change the original definition. To be eligible to receive high-cost program support, a carrier must offer each of the services and functionalities supported by the program. Additionally, carriers that receive high-cost program support may only use this support for the “provision, maintenance, and upgrading of facilities and services for which the support is intended.” While one of the universal service principles adopted in the 1996 Act is that all regions of the country should have access to advanced telecommunications and information services, the high-cost program does not explicitly support access to broadband services. While access to advanced services, such as broadband, is not included among the designated list of services supported by the high-cost program, the program has indirectly facilitated broadband deployment in many rural areas. In recent years, some carriers have been using high-cost program support to upgrade their telephone networks, including upgrading to fiber optic cable and extending it closer to their customers. Because of advances in telecommunications technology, these upgrades increase the capacity of the network, thereby facilitating the provision of advanced services, such as broadband. For example, many rural carriers with whom we spoke have replaced or are replacing their copper wire with fiber optic cable. 
One carrier was in the early phases of a 7-year, $1.8 million expansion to install fiber-to-the-home for each customer in one of its exchanges that served about 700 lines. In addition to transitioning from copper to fiber, carriers are investing in modern switching equipment and remote terminals to improve the connection speeds available to their customers. For example, one carrier told us that it is currently installing more remote terminals to provide higher-speed broadband service, and that it currently has 96 remote terminals to serve customers spread out across a 1,300-square-mile area. The availability of high-cost program support can, in part, determine whether deployment of broadband service is feasible in a rural area. In rural areas served by rural carriers, the high-cost program allows the carrier to recoup a large portion of the investment that facilitates broadband service since, as we mentioned earlier, these carriers receive high-cost program support based on their costs. In contrast, in rural areas served by nonrural carriers, which generally do not receive as much funding as rural carriers and do not receive funding based on their costs, the network upgrades necessary for broadband service are less likely. As a result, the availability of broadband services to rural customers is largely determined by the type of carrier that serves them, not by where they are located. Rural carriers. Most rural carriers with whom we spoke had deployed or were deploying advanced network features, such as fiber optic cable. For example, of the rural carriers we spoke with during our site visits, many stated they were able to provide broadband Internet service to 100 percent of their customers or service areas. In addition, FCC estimates that in 2007, 82 percent of households served by rural incumbent carriers had access to high-speed broadband connections. Nonrural carriers. Nonrural carriers with whom we spoke reported that they have broadband-enabled equipment in most or all facilities, but these carriers are generally unable to provide all rural customers with broadband service. Rather, only those customers residing relatively close to the carrier’s facility can receive broadband service. These carriers indicated that deploying broadband service to a wider service territory was not economical given the diffuse population in rural areas. Another impact of the high-cost program has been the increase in competitive carriers, especially wireless carriers, in rural areas. Beginning in 1997, FCC adopted a series of measures intended to encourage competition between carriers in rural areas to promote the principles of universal service. Among the actions taken by FCC was adopting the principle of “competitive neutrality” as part of the high-cost program. Under this principle, one carrier should not be favored over another carrier, and support should be available to any carrier that meets the requirements for operating as an ETC, regardless of the type of technology the carrier employs (such as wireline, wireless, or satellite). FCC put this principle into practice by making high-cost program support available to CETCs. While incumbents—both rural and nonrural—receive support based on their costs of providing service in an area or FCC’s cost model, CETCs receive support based on the number of lines they serve through a mechanism known as the identical support rule. Under this rule, CETCs in an area receive the same level of high-cost program support, on a per-line basis, as the incumbent carrier in that area. 
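The identical support rule can be expressed as simple per-line arithmetic, sketched below in Python. The support amounts and line counts are hypothetical; they are chosen so that the per-line figure matches the $20 example in the next paragraph.

    # Minimal sketch of the identical support rule: a CETC receives the incumbent's
    # per-line support amount for every line the CETC serves in the same area.
    # All dollar amounts and line counts are hypothetical.

    def per_line_support(incumbent_total_support, incumbent_lines):
        """The incumbent's total support divided by its lines gives the per-line amount."""
        return incumbent_total_support / incumbent_lines

    def cetc_support(incumbent_total_support, incumbent_lines, cetc_lines):
        """A CETC receives the incumbent's per-line amount times the CETC's own lines."""
        return per_line_support(incumbent_total_support, incumbent_lines) * cetc_lines

    incumbent_support = 100_000.0   # hypothetical annual support to the incumbent
    incumbent_lines = 5_000         # incumbent's lines, giving $20 per line
    cetc_lines = 2_000              # lines served by a competitive ETC in the same area

    print(f"Per-line support: ${per_line_support(incumbent_support, incumbent_lines):.2f}")
    print(f"CETC support:     ${cetc_support(incumbent_support, incumbent_lines, cetc_lines):,.2f}")

Note that nothing in this calculation depends on the CETC’s own costs, which is the feature of the rule that the NPRM discussed earlier would revisit.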
For example, if the incumbent carrier receives support that, based on the number of lines it serves, results in $20 of support per line, every competitor designated as an ETC in that area also will receive $20 in support for each line it serves in the same area regardless of its costs. ETC status is not required for a competitive carrier to operate in an area, but it is required if the carrier wants to receive high-cost program funding. As a result, the number of carriers seeking CETC status has increased dramatically, and the majority of newly designated carriers are wireless. The number of CETCs receiving support through the high-cost program has risen from a total of 2 carriers in 1998 to 362 in 2007. Of these 362 CETCs, 260 carriers—over 70 percent—are wireless carriers. Along with an increase in the number of carriers, the amount of funding provided to CETCs has increased over this time period, growing from $535,104 in 1999, with 100 percent going to wireless carriers, to $1.2 billion by 2007, with 98 percent of all CETC funding going to wireless carriers. A recent report by FCC estimated that at the end of 2006, wireless carriers had achieved an 80 percent penetration rate across the country. According to the wireless carriers with whom we spoke, high-cost program support has allowed them to invest in improving and expanding their networks in rural areas where they would otherwise be unable to economically justify the investment. For example, one carrier told us that it can cost from $350,000 to $500,000 to install a cell tower in rugged or mountainous terrain, in addition to other expenses such as land rent and maintenance costs, but that in most cases, low population density in the area would not yield enough customers to recover the investment. Additionally, wireless companies and regulators with whom we spoke stated that the availability of wireless communication is a public safety concern; travelers along rural highways expect to be able to use cell phones in the event of an emergency. However, wireless carriers often lack the economic incentive to install cell phone towers in rural areas where they are unlikely to recover the installation and maintenance costs, but high-cost program support allows them to make these investments. In the 1996 Act, the Congress established the principles underlying universal service, which provide a clear purpose for the high-cost program. However, since 1998, FCC has distributed over $30 billion in high-cost funding without developing specific performance goals for the program. Additionally, FCC has not developed outcome-based performance measures for the program. While FCC has begun preliminary efforts to address these shortcomings, its efforts do not align with practices GAO and OMB have identified as useful in developing successful performance goals and measures. In the absence of program goals and data pertaining to the program’s performance, the Congress and FCC may be limited in their ability to make informed decisions about the future of the program. In the 1996 Act, the Congress clearly established the principles underlying universal service. 
In particular, the Congress said that “quality services should be available at just, reasonable, and affordable rates.” Additionally, the Congress said that consumers in all regions of the country, including “those in rural, insular, and high-cost areas,” should have access to telecommunications and information services that are “reasonably comparable to those services provided in urban areas and that are available at rates that are reasonably comparable to rates charged for similar services in urban areas.” These guiding principles provide a clear purpose for the high-cost program. However, 12 years after the passage of the 1996 Act and after distributing over $30 billion in high-cost program support, FCC has yet to develop specific performance goals and measures for the program. We were unable to identify any performance goals or measures for the high-cost program. In its 2005 program assessment of the high-cost program, OMB also concluded that the program did not have performance goals or measures. OMB reported that the program neither measures the impact of funds on telephone subscribership in rural areas or on other potential indicators of program success, nor bases funding decisions on measurable benefits. OMB also reported that the high-cost program does not have specific, long-term performance measures that focus on outcomes and meaningfully reflect the purpose of the program. Additionally, in February 2005, we reported that FCC had not established performance goals and measures for the E-Rate program, the second-largest universal service program. At that time, we observed that under the Government Performance and Results Act, FCC was responsible for establishing goals for the universal service programs, despite the fact that the 1996 Act did not specifically require them. Further, FCC has not adequately defined the key terms of the high-cost program’s purpose. For example, the Congress directed FCC to ensure consumers in rural areas received access to services and rates that are “reasonably comparable” to those in urban America. To address this, in a report and order issued in 1999, FCC defined “reasonably comparable” as “a fair range of urban/rural rates both within a state’s borders, and among states nationwide.” This definition only focused on rates and did not address what FCC considered “reasonably comparable” services; 2 years after FCC issued this definition, its adequacy was challenged in federal court. In July 2001, the Tenth Circuit Court rejected FCC’s use of this definition, and required that FCC more precisely define “reasonably comparable” in reference to rates charged in rural and urban areas. Subsequently, in October 2003, FCC again attempted to define “reasonably comparable,” this time stating that rates are considered reasonably comparable if they fall within two standard deviations of the national urban average. Again, this definition did not address what services should be supported by the high-cost program, and, in February 2005, the Tenth Circuit Court again rejected the adequacy of this definition, stating that it was not clear how this new definition preserved and advanced universal service. To date, FCC has not adopted any other definitions for “reasonably comparable” rates or services. In June 2005, FCC issued a Notice of Proposed Rulemaking, in which it sought comment on establishing useful outcome, output, and efficiency measures for each of the universal service programs, including the high-cost program. 
In this notice, FCC recognized that clearly articulated goals and reliable performance data would allow for assessment of the effectiveness of the high-cost program and would allow FCC to determine whether changes to the program are needed. As of August 2007, FCC had not established performance goals and measures for the high-cost program, and FCC stated it did not have sufficient data available to establish high-cost program performance goals. To begin addressing this shortcoming, FCC started collecting performance data from USAC on a quarterly basis, including: the number of program beneficiaries (i.e., ETCs) per study area and per wire center; the number of lines, per study area and per wire center, for each ETC; the number of requests for support payments; the average (mean) and median dollar amount of support for each line for high-cost ETCs; the total amount disbursed—aggregate and for each ETC; the time to process 50 percent, 75 percent, and 100 percent of the high-cost support requests and authorize disbursements; and the rate of telephone subscribership in urban vs. rural areas. However, these efforts generally do not align with known practices for developing performance goals. We have reported that in developing performance goals, an agency’s efforts should focus on the results it expects its programs to achieve, that is, the differences the program will make in people’s lives. In doing this, an agency’s efforts should work to strike difficult balances among program priorities that reflect competing demands and provide congressional and other decision makers with an indication of the incremental progress the agency expects to make in achieving results. Additionally, we have identified many useful practices for developing program goals and measures; these practices include developing goals and measures that address important dimensions of program performance, developing intermediate goals and measures, and developing goals to address mission-critical management problems. As seen in table 4, we found that FCC’s efforts do not align with useful practices we have identified for developing successful goals. Additionally, FCC’s efforts do not align with guidance set forth by OMB. According to OMB, output measures describe the level of a program’s activity, whereas outcome measures describe the intended results of carrying out a program or activity, and efficiency measures capture a program’s ability to perform its function and achieve its intended results. In its Program Assessment Rating Tool Guidance, OMB noted that measures should reflect desired outcomes. Yet, FCC’s data collection efforts focus on program outputs, and not program outcomes or efficiency. Therefore, FCC’s efforts will be of limited use in illustrating the impacts of the high-cost program or how efficiently the program is operating. Clearly articulated performance goals and measures are important to help ensure the high-cost program meets the guiding principles set forth by the Congress. These guiding principles include comparable rates and services for consumers in all regions of the country. Yet, as mentioned earlier, the program’s structure has contributed to inconsistent distribution of support and availability of telecommunications services across rural America, which is not consistent with these guiding principles. Outcome-based performance goals and measures will help illustrate to what extent, if any, the program’s structure is fulfilling the guiding principles set forth by the Congress. 
Finally, FCC is reviewing several recommendations and proposals to restructure the high-cost program. Yet, because there is limited information available on what the program in its current form is intended to accomplish, what it is accomplishing, and how well it is doing so, it remains unclear how FCC will be able to make informed decisions about which option is best for the future. Internal control mechanisms for the high-cost program focus on three areas. Yet, each area has weaknesses. The carrier certification process exhibits inconsistency across states and carriers, the carrier audits have been limited in number and in the types of findings reported, and carrier data validation focuses primarily on completeness and not accuracy. Collectively, these weaknesses hinder FCC’s ability to understand the risks associated with noncompliance with program rules. Further, these weaknesses could contribute to excessive program expenditures. In particular, the high-cost program could incur excessive expenditures because of carrier inefficiencies, excessive payments to carriers, and provision of funding for nonsupported services. Internal control mechanisms for the high-cost program focus on three areas: (1) carrier certification, (2) carrier audits, and (3) carrier data validation processes. In each of these three areas, we found weaknesses in the internal control mechanisms. Annual certification is the primary tool used to enforce carrier accountability for use of high-cost program support, yet the certification process does not have standardized requirements. FCC requires that all states annually certify that all federal high-cost program support provided to eligible carriers in their state will be used only for the provision, maintenance, and upgrading of facilities and services for which the support is intended. It is up to the states to determine if carriers are operating in accordance with these guidelines. Generally, states do so by collecting information from carriers regarding their use of high-cost program funds. However, states have different requirements for what information carriers must submit. Additionally, if a state does not have jurisdiction over a carrier, then the carrier provides annual certification data directly to FCC. As a result, carriers are subject to different levels of oversight and documentation requirements to demonstrate that high-cost program support was used appropriately. FCC established requirements for information that must be submitted by the carriers it designates as ETCs. Incumbent carriers designated as ETCs by FCC must provide a sworn affidavit stating they are using high-cost program funding only for the provision, maintenance, and upgrading of facilities and services for which the support is intended. 
Additionally, all carriers—incumbent and competitive—designated as ETCs by FCC must provide: progress reports on the ETC’s 5-year service quality improvement plan; the number of unfulfilled service requests from potential customers; the number of complaints per 1,000 handsets or lines; certification that the ETC is complying with applicable service quality standards and consumer protection rules; certification that the ETC is able to function in emergency situations; certification that the ETC is offering a local usage plan comparable to that offered by the incumbent in the relevant service areas; and certification that the carrier acknowledges that FCC may require it to provide equal access to long-distance carriers in the event that no other eligible telecommunications carrier is providing equal access within the service area. While FCC encourages state regulatory commissions to adopt these requirements, doing so is not mandatory. Nevertheless, in our survey of state regulatory commissions, we found that many states require carriers to provide information similar to some of the information collected by FCC, particularly with respect to quality-of-service data. For example, FCC requires carriers to submit annual information on the number of unfulfilled service requests from potential customers, the number of complaints per 1,000 handsets or lines, and detailed information on outages the carrier experienced. Our survey showed that many states require similar information from carriers. Additionally, in our survey, we asked the state regulatory commissions to specify the types of quality-of-service measures they require for incumbent, competitive wireline, and wireless carriers. We found that state requirements vary somewhat across the three types of carriers. For example, of the 45 states that indicated they measure consumer complaints, 22 states indicated they require this information for at least one type of carrier but not all types of carriers. (See table 5.) In addition to the quality-of-service information, we found that state regulatory commissions collect a variety of information pertaining directly to the annual certification process. States most frequently require carriers to submit affidavits that future support will be used for its intended purpose; plans for quality, coverage, or capacity improvements; and evidence that past support was used for its intended purposes. However, according to our survey, 10 state regulatory commissions require incumbent carriers to submit only an affidavit, with no additional information. Additionally, in some instances, these requirements vary based on the type of carrier. (See table 6.) Carrier audits are the primary tool used in monitoring and overseeing carrier activities, but these audits have been limited in number and in the types of findings reported. While the 1996 Act does not require audits, FCC has authorized USAC to conduct audits. FCC, USAC, and some state regulatory commissions conduct carrier audits for the high-cost program. USAC audits. USAC operates an audit program to determine if carriers are complying with the program’s rules. This audit program has been limited; according to USAC officials, since 2002, USAC has conducted about 17 audits of the more than 1,400 carriers participating in the high-cost program (approximately 1.2 percent coverage). 
USAC officials told us these audits are time-consuming and have yielded limited findings because participants did not maintain adequate documentation to validate their information. This occurred for two reasons: (1) the high-cost program had no requirement that carriers retain documents, and (2) rural carriers receive funding as a reimbursement for costs incurred 2 years prior to the receipt of support, and carriers therefore did not keep records going back far enough for the audit. To address these problems, in August 2007, FCC imposed and USAC implemented document retention rules for high-cost program participants; participants are now required to maintain, for 5 years, records that can be used to demonstrate to auditors that support received was used consistent with the 1996 Act and FCC’s rules. FCC audits. In 2006 and 2007, FCC’s Office of Inspector General (OIG) instructed USAC to begin conducting audits and assessments to determine the extent to which high-cost program beneficiaries were in compliance with program rules. These audits and assessments have two objectives: (1) validate the accuracy of carrier self-certifications—audits—and (2) provide a basis for identifying and estimating improper payments under the Improper Payments Information Act of 2002 (IPIA)—assessments. To conduct these audits and assessments, a random sample of 65 out of about 1,400 carriers was selected by OIG. For the first objective, the findings were similar to USAC’s audit findings, in that it could not be determined whether the information that carriers attested to in their annual certifications was accurate because carriers did not have proper documentation to validate their information. For the second objective, OIG reported that the high-cost program had an estimated 16.6 percent rate of improper payments. In response to these findings, USAC maintained that this error rate was primarily indicative of carrier noncompliance with program rules, and not a result of payments made to carriers for inaccurate amounts. For example, USAC stated that these payments were categorized as erroneous because the carrier failed to comply with high-cost program rules, such as meeting filing deadlines or completing required documentation. In November 2007, a second round of beneficiary audits and assessments began to further review program compliance and is expected to be completed by the end of calendar year 2008. State audits. In addition to the USAC and FCC OIG audits, 7 of the 50 state regulatory commissions that responded to our survey reported that they audit incumbent carriers. These audits focus on the appropriate use of high-cost program funding, the accuracy of carrier-reported costs, and compliance with quality-of-service standards. Two of the 7 states reported that they audit all incumbent carriers, while the remaining 5 states reported that audits are based on a risk assessment of the carrier or are triggered by unusual behavior on the part of the carrier. While these 7 states conduct audits, states generally do not revoke carriers’ ETC status. According to our survey, since 2002 only one state reported it had revoked a carrier’s ETC status (for a competitive carrier). Additionally, during our site visits, several state officials told us they did not conduct audits because they did not feel it was the state’s jurisdiction or they lacked the resources to perform in-depth reviews of carriers’ use of high-cost program funds. 
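The IPIA assessment approach described above, which estimates an improper payment rate from a random sample of beneficiaries, can be sketched as follows in Python. The sample size of 65 carriers mirrors the OIG’s sample; the per-carrier payment and error figures are randomly generated and hypothetical, and the simple dollar-weighted estimate shown is only one way such a rate might be computed.

    # Minimal sketch of estimating an improper payment rate from a random sample
    # of carriers. The sample size mirrors the OIG's 65-carrier sample; the payment
    # and error dollar figures below are hypothetical.
    import random

    random.seed(0)

    # Hypothetical sampled carriers: (total support paid, support found improper).
    sampled_carriers = []
    for _ in range(65):
        paid = random.uniform(50_000, 2_000_000)
        # Assume some carriers are found out of compliance for part of their support.
        improper = paid * random.choice([0.0, 0.0, 0.0, 0.1, 0.25])
        sampled_carriers.append((paid, improper))

    total_paid = sum(paid for paid, _ in sampled_carriers)
    total_improper = sum(improper for _, improper in sampled_carriers)

    # Dollar-weighted improper payment rate for the sample, which would then be
    # used as the estimate for the program as a whole.
    rate = total_improper / total_paid
    print(f"Estimated improper payment rate: {rate:.1%}")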
Data validation processes to ensure the reliability of financial data primarily focus on the completeness of the data provided by carriers, but not the accuracy of the data. Incumbents submit cost and line count data directly to NECA and USAC; these cost and line count data are used to qualify carriers for and to calculate the amount of carriers’ high-cost program support. NECA is responsible only for collecting carrier cost and line count data for the high-cost loop support component of the high-cost program. All cost data NECA collects for this component are subject to several electronic validations, which primarily focus on ensuring that all required data are reported and that the data ranges are consistent with information reported in previous years. In addition, NECA compares reported cost data with information provided in carriers’ audited financial statements to identify any discrepancies. According to NECA officials, these statements are available for about 90 percent of its member carriers. If inaccuracies are found, member carriers are required to provide NECA with an explanation to resolve the situation, but no action is taken against the carrier. NECA officials do not conduct any additional oversight of the line count data they receive. USAC collects cost and line count data for the remaining components of the high-cost program, and, as with NECA, these data are subject to several electronic validations for completeness. While these validations and reviews provide NECA and USAC with opportunities to identify input errors, they do not address whether the data provided by participants are accurate or whether the money spent serves the intended purposes of the high-cost program. While some internal control mechanisms are in place, the weaknesses we identified hinder FCC’s ability to assess the risk of noncompliance with program rules. In particular, the internal control mechanisms may not fully address the following concerns, which could contribute to excessive program expenditures. Cost-effectiveness. In some instances, carriers receive high-cost program support based on their costs. Historically, carriers often were subject to rate-of-return regulation, wherein a state regulatory commission would assess the carrier’s costs and investments to ensure these were appropriate and necessary. Of the 50 respondents to our survey of state regulatory commissions, 33 apply rate-of-return regulation to rural incumbent carriers, 13 apply it to nonrural incumbent carriers, and 4 apply it to competitive wireline carriers. Further, during our site visits, several state commissions told us that rate cases, in which a state regulatory commission evaluates a carrier’s costs and investments, are very infrequent, if they take place at all. As such, there is limited assessment of the cost-effectiveness of carriers and their investments. Further, as OMB noted, there is no evidence that the program explicitly encourages carriers to achieve efficient and cost-effective delivery of service; rather, by guaranteeing “reasonable” rates of return, the program simply makes rural incumbent carriers whole, regardless of their investment decisions, their business model, or the presence of competition in the market. A funding mechanism that does not encourage cost-effectiveness, combined with a lack of detailed oversight, may not yield the most cost-effective program expenditures. Accuracy of cost and line count data. 
ETCs and CETCs receive high-cost program support based on their costs and line counts. However, as mentioned above, FCC, USAC, and NECA data collection efforts generally focus on completeness and consistency of carriers’ data submissions, but not the accuracy of the data. Further, USAC, FCC, and state regulatory commissions audit a small fraction of program participants, and in the case of FCC’s IPIA audits, these audits do not assess the accuracy of the cost and line count data that form the basis for carrier support. Inaccuracies in cost and line count data that are not uncovered through review could facilitate excessive program expenditures. Appropriate use of high-cost program support. The high-cost program rules delineate the appropriate uses of the program’s support. As we discussed, carriers must annually certify that their use of high-cost program support complies with the program rules. However, the self-certification process varies based on who oversees the carrier; further, there is little follow-up to assess whether carriers’ actions are consistent with the certifications. As such, program administrators cannot fully assess whether carriers are appropriately using high-cost program support. Thus, program expenditures could prove excessive if high-cost program funding is used to support services not covered by the program (such as broadband). In the 1996 Act, the Congress said that consumers in “rural, insular, and high-cost areas” should have access to services and rates that are “reasonably comparable” to those available to consumers in urban areas. To implement the 1996 Act, FCC modified and expanded the high-cost program. In the intervening 10 years, FCC has distributed over $30 billion to carriers, with much of this support coming from fees charged to consumers. Yet, FCC has not established performance goals or measures for the program. Thus, it is not clear what outcomes the program is intended to produce or what outcomes it has achieved. What we and the Joint Board found were differences in telecommunications services in rural areas across the country. For example, in some rural areas, carriers receive generous support and provide advanced services, such as fiber-to-the-home, while in other rural areas, carriers receive little or no support and provide basic services. In addition to the lack of performance goals and measures, the internal control mechanisms in place have weaknesses, which hinder FCC’s ability to assess the risk of noncompliance with program rules and ensure cost-effective use of program funds. The internal control mechanisms are inconsistent, limited in number, and appear more concerned with data completeness than accuracy. Thus, for example, it is not clear that the program ensures the most cost-effective delivery of services to rural areas. Therefore, program expenditures may be higher than necessary. These problems raise concerns about past and current program expenditures. But, they also raise concerns about the future of the program. In January 2008, FCC issued several notices proposing fundamental, policy-oriented reform of the program. For example, FCC proposed reverse auctions for the program, but it is not clear how FCC can assess this proposal when it does not know what goals the program should achieve or how it will measure program outcomes. 
Additionally, the Joint Board proposed separate funds for broadband, mobility, and provider-of-last-resort services, with a $4 billion funding level that was based on the current level of program expenditures. But, it is not clear that the $4 billion is the correct funding level. Without performance goals, measures, and adequate internal controls, it will be difficult for FCC to assess these proposals. Finally, failure to address these problems may undermine support for the program over time, as program expenditures continue to increase. To strengthen management and oversight of the high-cost program, we recommend that the Chairman, FCC, take the following two actions: 1. To better ensure that the high-cost program fulfills its intended purpose, FCC should first clearly define the specific long-term and short-term goals of the high-cost program and subsequently develop quantifiable measures that can be used by the Congress and FCC in determining the program’s success in meeting its goals. 2. To ensure a robust internal control environment that supports performance-based management, FCC should identify areas of risk in its internal control environment and implement mechanisms that will help ensure compliance with program rules and produce cost-effective use of program funds. We provided a draft of this report to FCC and USAC for their review and comment. FCC noted that it was aware of, and had addressed or planned to address, the shortcomings we identified in the report. However, FCC noted that it would issue a Notice of Inquiry to seek information on ways to further strengthen its management and oversight of the high-cost program. FCC and USAC both provided information to further clarify the actions that are currently underway and provided technical comments that we incorporated where appropriate. The written comments of FCC and USAC appear in appendices III and IV, respectively. In its comments, FCC reiterated the status of its existing efforts to strengthen the management and oversight of the high-cost program, as well as to restrain the growth in program expenditures. In particular, FCC cited the OIG audits, the new document retention requirements, and the Memorandum of Understanding (MOU) between FCC and USAC. We agree that these are important efforts, but they do not resolve the shortcomings we identified in the report. FCC also noted that we did not mention the MOU in the report; while we did not cite the MOU, we did incorporate elements of the MOU in the report, including, for example, the requirement that USAC collect and report performance data on a quarterly basis. In addition, FCC noted that it issued several NPRMs seeking comments on proposals to restrain the growth in program expenditures, including removal of the identical support rule and adoption of reverse auctions, and noted that we did not consider these reform proposals in our report. We did provide background information on these proposals; however, we did not provide an assessment of these proposals since FCC was actively seeking comments on them and their outcome was speculative; further, our findings and recommendations regarding management and oversight are applicable to the program in general, regardless of the specific reform proposal adopted. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
At that time, we will send copies of this report to the appropriate congressional committees, the Chairman of the Federal Communications Commission, and the Chairman of the Universal Service Administrative Company. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix V. This report examines the operation of the high-cost program of the Universal Service Fund. In particular, the report provides information on (1) the effect that the structure of the program has on the levels of support and types of services available in high-cost areas; (2) the extent to which the program has performance goals and measures; and (3) the extent to which the program has mechanisms in place to prevent and detect fraud, waste, and abuse. To respond to the overall objectives of this report, we interviewed officials from the Federal Communications Commission (FCC) and the Universal Service Administrative Company (USAC). In addition, we reviewed FCC and USAC documents, as well as relevant legislation and federal regulations. We also interviewed industry associations, national wireline and wireless companies, the National Exchange Carrier Association (NECA), and other individuals with knowledge of the high-cost program. We reviewed USAC data on the distribution of high-cost funds across states and companies. Finally, we compared FCC, USAC, and state policies to GAO and Office of Management and Budget (OMB) guidance. Table 7 lists the individuals and organizations with whom we spoke. For the first and third objectives, we conducted site visits in six states: Alabama, Iowa, Montana, Oklahoma, Oregon, and Wisconsin. We used a multistep process to select these six states. First, we divided states into Census Bureau regions, excluding states where (1) no competitive eligible telecommunications carriers (CETCs) received support in 2006 and (2) the urban population was equal to or above average, since the high-cost program provides support for rural areas. Second, we selected states within each region based on the number of eligible telecommunications carriers (ETCs) and CETCs present in the state. Within each state, we interviewed the state regulatory commission (that is, the state agency responsible for regulating telephone service within the state), “rural” and “nonrural” ETCs, and CETCs. In some states, we also interviewed cost consultants, state industry associations, and wireless carriers. To test our structured interview and site selection methodology, we also conducted site visits in the following states: Arizona, Maine, Massachusetts, New Hampshire, and New Mexico. We interviewed similar state and industry officials in these five states. For the third objective, we conducted a survey of state regulatory commissions. The survey field period was from December 12, 2007, to February 8, 2008, and sought information pertaining to the state’s regulation of telephone service; the state’s internal control procedures for incumbent, competitive wireline, and competitive wireless carriers receiving high-cost support in the state; and the state’s high-cost program, if any. 
To help ensure that the survey questions were clear and understandable to respondents, and that we gathered the information we desired, we conducted pretests with relevant officials in Mississippi, North Carolina, Pennsylvania, Virginia, and Washington. The survey was available online to officials in the 50 states and the District of Columbia on a secure Web site. We received complete responses from 50 of the 51 commissions we surveyed, for an overall response rate of 98 percent. This report does not contain all the results from the survey. The survey and a more complete tabulation of the results can be viewed at GAO-08-662SP. We conducted this performance audit from July 2007 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Telephone Penetration Rate by State (Percentage of Households with Telephone Service, November 1983 and July 2007). The following are GAO’s comments on the Federal Communications Commission’s letter dated May 16, 2008. 1. While we did not specifically cite the Memorandum of Understanding (MOU), we did discuss elements of this document. With respect to internal controls, the MOU states that USAC “shall implement a comprehensive audit program to ensure that USF monies are used for their intended purpose, to verify that all USF contributors make the appropriate contributions in accordance with the rules, and to detect and deter potential waste, fraud, and abuse [and] shall work under the oversight of the OIG in hiring contractors and auditing contractors . . .” While we acknowledge these recent efforts to conduct audits of USF (Universal Service Fund) program contributors and beneficiaries, as discussed in this report, we found that with respect to the high-cost program, these efforts have been limited and have yielded limited results. For example, USAC has conducted about 17 audits of the more than 1,400 participating carriers, and participants did not maintain adequate information for the auditors to validate their information. The MOU also states that USAC “shall implement effective internal control over its operations, including the administration of the USF and compliance with applicable laws and regulations [and] will implement an internal control structure consistent with the standards and guidance contained in OMB Circular A-123, including the methodology for assessing, documenting, and reporting on internal controls. . .” During our review, USAC officials made us aware of the actions they are taking to develop a comprehensive internal control framework for USAC’s internal operations, such as procedures to ensure that cash disbursements are consistent with funds due to participating carriers. While we encourage these efforts, USAC’s internal operations were not in the scope of our objectives, and these efforts do not address the issues we raised in the report—weak and limited internal control mechanisms specifically aimed at the high-cost program beneficiaries. 2. We are aware that efforts to promote universal service in rural areas pre-date the 1996 Act, and include background information on these efforts on pages 7-8 of the report. 
However, to avoid any confusion, we modified the report text to note that FCC modified and expanded the high-cost program after the Telecommunications Act of 1996. 3. Again, while we did not specifically cite the MOU, on page 28 of the report, we did discuss the performance data that FCC requires USAC to collect and report on a quarterly basis. Interestingly, in a previous meeting with USAC and in USAC’s written comments, USAC noted that it is not authorized to collect all the performance data required by FCC. In particular, of the seven categories of performance data, USAC said it was not authorized to collect some portions of three categories, including the number of program beneficiaries per wire center, the number of lines per wire center (except for the high-cost model component), and the rate of telephone subscribership in urban vs. rural areas. Thus, it is unclear how effective this data collection effort will be in developing performance goals and measures. 4. We changed the text to be consistent with the 1996 Act. 5. We did not assert that FCC had the authority to impose mandatory standards on state regulatory commissions, although, since the issue has not been adjudicated, it is unclear whether the statute prohibits FCC from imposing mandatory standards. Irrespective of FCC’s authority to impose mandatory standards, the inconsistent requirements imposed by the states represent a weakness in the high-cost program’s internal controls. As USAC noted in its written comments, “if states adopted similar requirements there would be more standardized requirements across all ETCs thereby enabling USAC to conduct more comprehensive audits to ensure ETCs are using High Cost funds for the purposes intended.” 6. We changed the text to be consistent with the 1996 Act. 7. On page 37 of this report, we discuss the steps NECA takes to verify the accuracy of the data that carriers submit, including analyzing trends and comparing cost data with information in carriers’ audited financial statements. However, as we also discuss, NECA is responsible only for collecting carrier cost and line count data for the high-cost loop support component, which represents only one of the five high-cost program components. Further, trend analysis does not necessarily ensure the accuracy of the underlying data. In its comments on this report, USAC, which also performs extensive trending validations, noted that “absent a full-scale audit, it is difficult to determine the level of accuracy of the information provided by the carriers.” 8. We acknowledge that FCC’s Office of Inspector General (OIG) Improper Payments Information Act of 2002 (IPIA) audits included a component to determine carrier compliance with high-cost program rules. While FCC’s comments state these audits were designed to render an auditor’s opinion on the accuracy of carrier data, it was unclear to us that these audits specifically addressed the accuracy of carrier cost and line count data. In the OIG’s Initial Statistical Analysis of Data from the 2006/2007 Compliance Audits, the reasons given for carrier noncompliance focused on procedural issues. These reports attributed noncompliance to inadequate document retention; inadequate auditee processes, policies, and procedures; inadequate systems for collecting, reporting, or monitoring data; weak auditee internal controls; and auditee data entry errors. These reports do not discuss the accuracy of carriers’ cost or line count data. 9. 
We acknowledged the Notices of Proposed Rulemaking that FCC issued to address the growth in the high-cost program’s expenditures and provided background information on these proposals. However, we did not provide an assessment of these proposals since FCC was actively seeking comments on them, and their outcome was speculative. Further, our findings and recommendations regarding management and oversight of the high-cost program are applicable to the program in general, regardless of the specific reform proposal adopted. In making our recommendations, we are not implying that FCC should discontinue policy-oriented reform of the high-cost program; rather, these efforts are complementary. 10. On page 14 of the report, we provide a discussion of the factors contributing to the growth of high-cost program expenditures, and figure 1 provides a visual illustration of the growth in both incumbent and competitive support. Further, on page 24, we provide detailed information on the growth in the number of CETCs and the overall financial support provided to CETCs. In addition to the individual named above, Michael Clements, Assistant Director; Tida Barakat; Brandon Haller; Amanda Krause; Carla Lewis; Michael Meleady; Joshua Ormond; Donell Ries; Stan Stenerson; Mindi Weisenbloom; Crystal Wesco; and Elizabeth Wood made key contributions to this report.
In the Telecommunications Act of 1996 (1996 Act), the Congress said that consumers in "rural, insular, and high-cost areas" should have access to services and rates that are "reasonably comparable" to those in urban areas. To implement the 1996 Act, the Federal Communications Commission (FCC) modified and expanded the high-cost program. The program provides funding to some telecommunications carriers, facilitating lower telephone rates in rural areas. GAO was asked to review (1) the effect that the program structure has on the level of support and types of services in rural areas, (2) the extent to which FCC has developed performance goals and measures for the program, and (3) the extent to which FCC has implemented internal control mechanisms. GAO reviewed relevant documents; interviewed federal and state officials, industry participants, and experts; conducted 11 state site visits; and conducted a survey of state regulators, available online at GAO-08-662SP. The high-cost program's structure has resulted in the inconsistent distribution of support and availability of services across rural America. The program provides support to carriers in all states. However, small carriers receive more support than large carriers. As a result, carriers serving similar rural areas can receive different levels of support. Currently, the high-cost program provides support for the provision of basic telephone service, which is widely available and subscribed to in the nation. But, the program also indirectly supports broadband service, including high-speed Internet, in some rural areas, particularly those areas served by small carriers. The program provides support to both incumbents and competitors; as a result, it creates an incentive for competition to exist where it might not otherwise occur. There is a clearly established purpose for the high-cost program, but FCC has not established performance goals or measures. GAO was unable to identify performance goals or measures for the program. While FCC has begun preliminary efforts to address these shortcomings, the efforts do not align with practices that GAO has identified as useful for developing successful performance goals and measures. For example, FCC has not created performance goals and measures for intermediate and multiyear periods. In the absence of performance goals and measures, the Congress and FCC are limited in their ability to make informed decisions about the future of the high-cost program. While some internal control mechanisms exist for the high-cost program, these mechanisms are limited and exhibit weaknesses that hinder FCC's ability to assess the risk of noncompliance with program rules and ensure cost-effective use of program funds. Internal control mechanisms for the program consist of (1) carrier certification that funds will be used consistent with program rules, (2) carrier audits, and (3) carrier data validation. Yet, each mechanism has weaknesses. The carrier certification process exhibits inconsistency across the states that certify carriers, carrier audits have been limited in number and in reported findings, and carrier data validation focuses primarily on completeness and not accuracy. These weaknesses could contribute to excessive program expenditures.
In accordance with scientific custom and/or statutory mandates, several offices within EPA have used peer review for many years to enhance the quality of science within the agency. In May 1991, the EPA Administrator established a panel of outside academicians to, among other things, enhance the stature of science at EPA and determine how the agency can best ensure that sound science is the foundation for the agency’s regulatory and decision-making processes. In March 1992, the expert panel recommended that, among other things, EPA establish a uniform peer review process for all scientific and technical products used to support EPA’s guidance and regulations. In response, EPA issued a policy statement in January 1993 calling for peer review of the major scientific and technical work products used to support the agency’s rulemaking and other decisions. However, the Congress, GAO, and others subsequently raised concerns that the policy was not being consistently implemented throughout EPA. The congressional concern resulted in several proposed pieces of legislation that included prescriptive requirements for peer reviews. Subsequently, in June 1994 the EPA Administrator reaffirmed the central role of peer review in the agency’s efforts to ensure that its decisions rest on sound science and credible data by directing that the agency’s 1993 peer review policy be revised. The new policy retained the essence of the prior policy and was intended to expand and improve the use of peer review throughout EPA. Although the policy continued to emphasize that major scientific and technical products should normally be peer reviewed, it also recognized that statutory and court-ordered deadlines, resource limitations, and other constraints may limit or preclude the use of peer review. According to the Executive Director of the Science Policy Council, one of the most significant new features of the 1994 action was the Administrator’s directive to the agency’s Science Policy Council to organize and guide an agencywide program for implementing the policy. The policy and procedures emphasize that peer review is not the same thing as other mechanisms that EPA often uses to obtain the views of interested and affected parties and/or to build consensus among the regulated community. More specifically, EPA’s policy and procedures state that peer review is not peer input, which is advice or assistance from experts during the development of a product; stakeholders’ involvement, which is comments from those people or organizations (stakeholders) that have significant financial, political, or other interests in the outcome of a rulemaking or other decision by EPA; or public comment, which is comments obtained from the general public on a proposed rulemaking and may or may not include the comments of independent experts. While each of these activities serves a useful purpose, the policy and procedures point out that they are not a substitute for peer review. For example, as noted in EPA’s Standard Operating Procedures, public comments on a rulemaking do not necessarily solicit the same unbiased, expert views as are obtained through peer review. In order to accommodate the differences in EPA’s program and regional offices, the policy assigned responsibility to each program and regional office to develop standard operating procedures and to ensure their use.
To help facilitate agencywide implementation, EPA’s Science Policy Council was assigned the responsibility of assisting the offices and regions in developing their procedures and identifying products that should be considered for peer review. The Council was also given the responsibility for overseeing the agencywide implementation of the policy by promoting consistent interpretation, assessing agencywide progress, and developing revisions to the policy, if warranted. However, EPA’s policy specifies that the Assistant and Regional Administrators for each office are ultimately responsible for implementing the policy, including developing operating procedures, identifying work products subject to peer review, determining the type and timing of such reviews, and documenting the process and outcome of each peer review conducted. Our objectives, scope, and methodology are fully described in appendix I. Two years after EPA established its peer review policy, implementation is still uneven. EPA acknowledges this problem and provided us with a number of examples to illustrate the uneven implementation. At our request, the Science Policy Council obtained information from EPA program and regional offices and provided us with examples in which, in their opinion, peer review was properly conducted; cases in which it was conducted but not fully in accordance with the policy; and cases in which peer review was not conducted at all. The following table briefly summarizes the cases they selected; additional information on these nine cases is provided in appendix II. According to the Executive Director of the Science Policy Council, this unevenness can be attributed to several factors. First, some offices within EPA have historically used peer review, while others’ experience is limited to the 2 years since the policy was issued. For example, in accordance with scientific custom, the Office of Research and Development (ORD) has used peer review for obtaining critical evaluations of certain work products for more than 20 years. Additionally, statutes require that certain work products developed by EPA be peer reviewed by legislatively established bodies. For example, criteria documents developed by ORD for the National Ambient Air Quality Standards must receive peer review from EPA’s Science Advisory Board (SAB), and pesticide documents must receive peer review from the Scientific Advisory Panel. In contrast, some EPA regional offices and areas within some EPA program offices have had little prior experience with peer review. In addition to these offices’ varying levels of experience with peer review, the Science Policy Council’s Executive Director and other EPA officials said that statutory and court-ordered deadlines, budget constraints, and difficulties associated with finding and obtaining the services of qualified, independent peer reviewers have also contributed to peer review not being consistently practiced agencywide. A report by the National Academy of Public Administration confirmed that EPA frequently faces court-ordered deadlines. According to the Academy, since 1993 the courts have issued an additional 131 deadlines that EPA must comply with or face judicial sanctions. Also, as explained to us by officials from EPA’s Office of Air and Radiation (OAR), just about everything EPA does in some program areas, such as Clean Air Act implementation, is to address either legislative or court-ordered mandates. 
Others have attributed EPA’s problems with implementing peer review in the decision-making process to other factors. For example, in its March 1995 interim report on EPA’s research and peer review program within the Office of Research and Development, the National Academy of Sciences’ National Research Council noted that, even in EPA’s research community, knowledge about peer review could be improved. The Council’s interim report pointed out that “although peer review is widely used and highly regarded, it is poorly understood by many, and it has come under serious study only in recent years.” Although we agree that the issues EPA and others have raised may warrant further consideration, we believe that EPA’s uneven implementation is primarily due to (1) confusion among agency staff and management about what peer review is, what its significance and benefits are, and when and how it should be conducted and (2) ineffective accountability and oversight mechanisms to ensure that all products are properly peer reviewed by program and regional offices. Although the policy and procedures provide substantial information about what peer review entails, we found that some EPA staff and managers had misperceptions about what peer review is, what its significance and benefits are, and when and how it should be conducted. For example, officials from EPA’s Office of Mobile Sources (OMS) told the House Commerce Committee in August 1995 that they had not had any version of the mobile model peer reviewed. Subsequently, in April 1996, OMS officials told us they recognize that external peer review is needed and that EPA plans to have the next iteration of the model peer reviewed. However, when asked how the peer review would be conducted, OMS officials said they plan to use the public comments on the revised model they receive as the peer review. As EPA’s policy makes clear, public comments are not the same as nor are they a substitute for peer review. We found a similar misunderstanding about what peer review entails in a regional office we visited. The region prepared a product that assesses the impacts of tributyl tin—a compound used since the 1960s in antifouling paints for boats and large ships. Although regional staff told us that this contractor-prepared product had been peer reviewed, we found that the reviews were not in accordance with EPA’s peer review policy. The draft product received some internal review by EPA staff and external review by contributing authors, stakeholders, and the public; however, it was not reviewed by experts previously uninvolved with the product’s development nor by those unaffected by its potential regulatory ramifications. When we pointed out that—according to EPA’s policy and the region’s own peer review procedures—these reviews are not a substitute for peer review, the project director said that she was not aware of these requirements. In two other cases we reviewed, there was misunderstanding about the components of a product that should be peer reviewed. For example, in the Great Waters study—an assessment of the impact of atmospheric pollutants in significant water bodies—the scientific data were subjected to external peer review, but the study’s conclusions that were based on these data were not. Similarly, in the reassessment of dioxin—a reexamination of the health risks posed by dioxin—the final chapter summarizing and characterizing dioxin’s risks was not as thoroughly peer reviewed. 
More than any other, this chapter indicated EPA’s conclusions based on its reassessment of the dioxin issue. In both cases, the project officers did not have these chapters peer reviewed because they believed that the development of conclusions is an inherently governmental function that should be performed exclusively by EPA staff. However, some EPA officials with expertise in conducting peer reviews disagreed, maintaining that it is important to have peer reviewers comment on whether or not EPA has properly interpreted the results of the underlying scientific and technical data. In addition to the uncertainty surrounding the peer review policy, we also noted problems with EPA’s accountability and oversight mechanisms. EPA’s current oversight mechanism primarily consists of a two-part reporting scheme: Each office and region annually lists (1) the candidate products nominated for peer review during the upcoming year and (2) the status of products previously nominated. If a candidate product is no longer scheduled for peer review, the list must note this and explain why peer review is no longer planned. Agency officials said this was the most extensive level of oversight to which all program and regional offices could agree when the peer review procedures were developed. Although this is an adequate oversight mechanism for tracking the status of previously nominated products, it does not provide upper-level managers with sufficient information to ensure that all products warranting peer review have been identified. This, when taken together with the misperceptions about what peer review is and with the deadlines and budget constraints that project officers often operate under, has meant that the peer review program to date has largely been one of self-identification, allowing some important work products to go unlisted. According to the Science Policy Council’s Executive Director, reviewing officials would be much better positioned to determine if the peer review policy and procedures are being properly and consistently implemented if, instead, EPA’s list contained all major products along with what peer review is planned and, if none, the reasons why not. The need for more comprehensive accountability and oversight mechanisms is especially important given the policy’s wide latitude in allowing peer review to be forgone in cases facing time and/or resource constraints. As explained by EPA’s Science Policy Council’s Executive Director, because so much of the work that EPA performs is in response to either statutory or court-ordered mandates and the agency frequently faces budget uncertainties or limitations, an office under pressure might argue for nearly any given product that peer review is a luxury the office cannot afford in the circumstances. However, as the Executive Director of EPA’s Science Advisory Board told us, not conducting peer review can sometimes be more costly to the agency in terms of time and resources. He told us of a recent rulemaking by the Office of Solid Waste concerning a new methodology for delisting hazardous wastes in which the program office’s failure to have the methodology appropriately peer reviewed resulted in important omissions, errors, and flawed approaches in the methodology, which will now take from 1 to 2 years to correct. The SAB also noted that further peer review of individual elements of the proposed methodology is essential before the scientific basis for this rulemaking can be established. 
EPA has recently taken a number of steps to improve the peer review process. Although these steps should prove helpful, they do not fully address the underlying problems discussed above. In June 1996, EPA’s Deputy Administrator directed the Science Policy Council’s Peer Review Advisory Group and ORD’s National Center for Environmental Research and Quality Assurance to develop an annual peer review self-assessment and verification process to be conducted by each office and region. The self-assessment will include information on each peer review completed during the prior year as well as feedback on the effectiveness of the overall process. The verification will consist of the signature of headquarters, laboratory, or regional directors to certify that the peer reviews were conducted in accordance with the agency’s policy and procedures. If the peer review did not fully conform to the policy, the division director or the line manager will explain significant variances and actions needed to limit future significant departures from the policy. The self-assessments and verifications will be submitted and reviewed by the Peer Review Advisory Group to aid in its oversight responsibilities. According to the Deputy Administrator, this expanded assessment and verification process will help build accountability and demonstrate EPA’s commitment to the independent review of the scientific analyses underlying the agency’s decisions to protect public health and the environment. These new accountability and oversight processes should take full effect in October 1996. ORD’s National Center for Environmental Research and Quality Assurance has also agreed to play an expanded assistance and oversight role in the peer review process. Although the details had not been completed, the Center’s Director told us that his staff will be available to assist others in conducting peer reviews and will try to anticipate and flag the problems that they observe. In addition, the Center recently developed an automated Peer Review Panelist Information System—a registry with information on identifying and contacting potential reviewers according to their areas of expertise. Although the system was designed to identify potential reviewers of applications for EPA grants, cooperative agreements, and fellowships, the Center’s Director stated that the registry (or similarly designed ones) could also be used to identify potential peer reviewers for EPA’s technical and scientific work products. Recognizing that confusion remains about what peer review entails, the Office of Water recently drafted additional guidance that further clarifies the need for, use of, and ways to conduct peer review. The Office has also asked the Water Environment Federation to examine its current peer review process and to provide recommendations on how to improve it. The Federation has identified the following areas of concern, among others, where the program should be improved: (1) the types of, levels of, and methodologies for peer review; (2) the sources and selection of reviewers; (3) the funding/resources for peer review; and (4) the follow-up to, and accountability for, peer review. Similarly, OAR’s Office of Mobile Sources proposed a Peer Review/Scientific Presence Team in March 1996 to help OMS personnel better understand the principles and definitions involved in the peer review process. 
In addition to promoting greater understanding, this team would also help identify products and plan for peer review, as well as facilitate and oversee the conduct of peer reviews for OMS’ scientific and technical work products. The Office of Solid Waste and Emergency Response recently formed a team to support the Administrator’s goal of sound science through peer review. The team was charged with strengthening the program office’s implementation of peer review by identifying ways to facilitate good peer review and addressing barriers to its successful use. In May 1996, the team developed an implementation plan with a series of recommendations that fall into the following broad categories: (1) strengthening early peer review planning; (2) improving the ability of the Assistant Administrator to manage peer review activities; (3) providing guidance and examples to support the staff’s implementation of peer review; and (4) developing mechanisms to facilitate the conduct of peer reviews. EPA’s Region 10 formed a Peer Review Group with the responsibility for overseeing the region’s reviews. In March 1996, the group had a meeting with the region’s senior management, where it was decided to later brief mid-level managers on the importance of peer review and their peer review responsibilities. Agreement was also reached to have each of the region’s offices appoint a peer review contact who will receive training from the Peer Review Group and be responsible for managing some peer reviews and for coordinating other major peer review projects. The above agencywide and office-specific efforts should help address the confusion about peer review and the accountability and oversight problems we identified. However, the efforts aimed at better informing staff about the benefits and use of peer review are not being done fully in all offices and would be more effective if done consistently throughout the agency. Similarly, the efforts aimed at improving the accountability and oversight of peer review fall short in that they do not ensure that each office and region has considered all relevant products for peer review and that the reasons are documented when products are not selected. Despite some progress, EPA’s implementation of its peer review policy remains uneven 2 years after it became effective. Confusion remains about what peer review entails and how it differs from the mechanisms that EPA uses to obtain the views of interested and affected parties. Furthermore, the agency’s accountability and oversight mechanism provides too much leeway for managers to opt out of conducting peer reviews without having to justify or document such decisions. The annual listing of only those products that have been selected for peer review has not enabled upper-level managers to see what products have not been nominated for peer review nor the reasons for their exclusion. A more useful tool would be to have the list contain all planned major products with detailed information about the managers’ decisions about peer review. For example, if peer review is planned, the list would contain—as the current procedures already require—information on the type and timing of it. More significantly, if the managers elect to not conduct peer review on individual products, the list would provide an explanation of why the products are not being nominated. This process would provide upper-level managers with the necessary information to determine whether or not all products have been appropriately considered for peer review. 
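The expanded product list described in the preceding paragraph can be made concrete with a short sketch. The example below, written in Python, is purely illustrative: the record fields and the two sample entries are assumptions made for the example, not a rendering of EPA’s actual procedures.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProductPeerReviewRecord:
    """Illustrative entry on an office's annual list of major work products."""
    product_title: str
    responsible_office: str
    peer_review_planned: bool
    review_type: Optional[str] = None                # e.g., "external panel" or "internal"
    planned_timing: Optional[str] = None             # e.g., "before release of the draft"
    rationale_if_not_reviewed: Optional[str] = None  # expected whenever no review is planned

def unreviewed_products(records: List[ProductPeerReviewRecord]) -> List[ProductPeerReviewRecord]:
    """Return products not nominated for peer review so managers can examine each rationale."""
    return [record for record in records if not record.peer_review_planned]

# Hypothetical entries, for illustration only.
annual_list = [
    ProductPeerReviewRecord(
        product_title="Draft exposure assessment",
        responsible_office="Program Office A",
        peer_review_planned=True,
        review_type="external panel",
        planned_timing="before release of the draft for public comment",
    ),
    ProductPeerReviewRecord(
        product_title="Reference guide for voluntary programs",
        responsible_office="Program Office B",
        peer_review_planned=False,
        rationale_if_not_reviewed="not intended to support regulatory action",
    ),
]

for record in unreviewed_products(annual_list):
    print(f"{record.product_title}: not nominated ({record.rationale_if_not_reviewed})")
```

A list kept in this form would let an upper-level manager see, for every major product, either the planned type and timing of peer review or the stated reason it was not nominated.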
We acknowledge that there are other difficulties in properly conducting peer reviews. However, we believe that as EPA strengthens the implementation of its peer review policy and gains more widespread experience with the process, the agency will be better positioned to address these other issues. To enhance the quality and credibility of its decision-making through the more widespread and consistent implementation of its peer review policy, we recommend that the Administrator, EPA, do the following: Ensure that staff and managers are educated about the need for and benefits of peer review; the difference between peer review and other forms of comments, such as peer input, stakeholders’ involvement, and public comment; and their specific responsibilities in implementing the policy. Expand the current list of products nominated for peer review to include all major products, along with explanations of why individual products are not nominated for peer review. We provided copies of a draft of this report to the Administrator of EPA for review and comment. In responding to the draft, EPA officials stated that the report was clear, instructive, and fair. The officials also provided us with some technical and presentational comments that we have incorporated as appropriate. We conducted our review from February 1996 through August 1996 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology appears in appendix I. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the Administrator of EPA and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The Chairmen of the Senate Small Business Committee; the Subcommittee on Clean Air, Wetlands, Private Property, and Nuclear Safety, Senate Committee on Environment and Public Works; and the Subcommittee on Energy Production and Regulation, Senate Committee on Energy and Natural Resources, asked us to assess the Environmental Protection Agency’s (EPA) (1) progress in implementing its peer review policy and (2) efforts to improve the peer review process. To assess the status of EPA’s implementation of its peer review policy, we reviewed relevant documents and discussed the agency’s use of peer review with officials from EPA’s Science Policy Council; Office of Air and Radiation (Washington, DC, Durham, NC, and Ann Arbor, MI); Office of Water; Office of Program Planning and Evaluation; Office of Solid Waste and Emergency Response; and Office of Prevention, Pesticides, and Toxic Substances (Washington, DC); Office of Research and Development (Washington, DC and Research Triangle Park, NC); and EPA Region 10 (Seattle, WA). We also interviewed and obtained documents from officials with the National Academy of Sciences; the Water Environment Federation; the National Environmental Policy Institute; and the Natural Resources Defense Council. We reviewed a selection of scientific and technical products to obtain examples of how EPA’s program and regional offices were implementing the peer review policy. 
We asked officials from EPA’s Science Policy Council and Science Advisory Board to identify products that, in their opinion, fell into the following categories: (1) those that fully complied with the policy; (2) those that received some level of peer review but did not fully comply with the policy; and (3) those that should have received but did not receive peer review. We then interviewed the officials responsible for the products to determine how decisions were made about the products’ peer review. To assess EPA’s efforts to improve the peer review process, we reviewed relevant documents and discussed the agency’s recent, ongoing, and planned improvements with officials from EPA’s Science Policy Council; Science Advisory Board; and the program and regional offices identified above. We conducted our review from February through August 1996 in accordance with generally accepted government auditing standards. At our request, the Science Policy Council obtained information from EPA program and regional offices and provided us with examples illustrating the current uneven implementation of EPA’s peer review policy. This list was further augmented by the Executive Director of the Science Advisory Board. Although these products are not necessarily a representative sample, the Executive Director of EPA’s Science Policy Council stated that these cases provide good illustrations of how the level of peer review within EPA remains uneven. We have grouped the cases below according to whether (1) EPA’s peer review policy was followed, (2) the policy was not fully followed, or (3) a peer review was not conducted but should have been. In January 1993, EPA Region 10 received a petition from a local environmental group to designate the Eastern Columbia Plateau Aquifer System as a “Sole-Source Aquifer” under the Safe Drinking Water Act. The technical work product was entitled Support Document for Sole Source Aquifer Designation of the Eastern Columbia Plateau Aquifer System. Under the act, EPA may make this designation if it determines that the aquifer is the principal or sole source for the area’s drinking water. Once so designated, EPA would then review federally assisted projects in the area to determine if these activities could contaminate the aquifer. In August 1994, EPA prepared a draft document that presented the technical basis for the designation. Technical questions raised by commenters prompted EPA to convene a panel of experts to review the document. The panel was given a list of specific technical issues to address, the draft document, and the supporting materials. The peer review panel convened July 26-27, 1995, to discuss their views. The peer reviewers were chosen by asking several “stakeholder” organizations, including local governments, an environmental organization, and the United States Geological Survey, to nominate respected scientists with expertise in areas such as hydrogeology. From more than 15 nominees, a selection committee of EPA staff from outside Region 10 chose 6 peer review panel members. Although one stakeholder group expressed dissatisfaction that their candidate was not chosen for the panel, they eventually agreed that the panel fairly and objectively reviewed the support document. In July 1995, EPA received the peer review panel’s report and is still in the process of responding to the panel’s comments and those received from the public.
Waste Technologies Industries (WTI) began limited operation of a hazardous waste incinerator in East Liverpool, Ohio, in April 1993. Although permitted for operation under the Clean Air Act, the Clean Water Act, and the Resource Conservation and Recovery Act, the facility became the focus of national attention and controversy due to several concerns. For example, it was being built near populated areas and an elementary school, and the public was skeptical about industries’ management of commercial incinerators, the ability of government agencies to regulate them, and whether the existing laws and regulations are sufficient to protect public health and the environment. The WTI site was chosen, in part, because of its proximity to steel mills, chemical plants, and other industries generating hazardous waste suitable for incineration. When fully operational, this site will incinerate over 100,000 tons of hazardous wastes annually. The original permit for WTI had been based solely on the modeled effects of direct inhalation exposures and had not included other exposure scenarios, such as indirect exposure through the food chain. Because of such risk assessment omissions and the controversy associated with the facility, EPA decided to conduct an on-site risk assessment of the cumulative human health and ecological risks associated with the operations of this facility, as well as such risks from accidents at the facility, and to publish its findings prior to the full operation of the WTI site. According to the Senior Science Advisor for the Office of Solid Waste and Emergency Response, peer review was envisioned early in the process and occurred at several stages, including peer review of the agency’s approach to addressing these issues and peer review of the entire report, including the conclusions and recommendations. She also said that about $120,000, or nearly 20 percent of all extramural funds that EPA spent on this over 3-year effort, went to cover peer review costs. EPA began to assess the risks of dioxin in the early 1980s, resulting in a 1985 risk assessment that classified the chemical as a probable human carcinogen, primarily on the basis of animal studies available at that time. The implications of additional advances in the early 1990s were uncertain: some maintained that dioxin’s risks were not as great as earlier believed, while others made the opposite argument. Given the growing controversy, in April 1991 EPA decided to work closely with the broader scientific community to reassess the full range of dioxin risks. The draft product, which was released for public comment in September 1994, contained an exposure document and a health effects document. The last chapter of the health effects document characterized the risks posed from dioxin by integrating the findings of the other chapters. “The importance of this . . . demands that the highest standards of peer review extend to the risk characterization itself. Although it can be argued that this is in fact being carried out by this SAB Committee, submitting the risk characterization chapter for external peer review prior to final review by the SAB would serve to strengthen the document, and assure a greater likelihood of its acceptance by the scientific community-at-large. 
It is recommended strongly that: a) the risk characterization chapter undergo major revision; and b) the revised document be peer reviewed by a group of preeminent scientists, including some researchers from outside the dioxin “community” before returning to the SAB.” Members of Congress also criticized EPA’s risk characterization document and its lack of peer review. In the House and Senate reports on the fiscal year 1996 appropriations bill for EPA, concerns were raised that the draft document “does not accurately reflect the science on exposures to dioxins and their potential health effects . . . EPA selected and presented scientific data and interpretations . . . dependent upon assumptions and hypotheses that deserve careful scrutiny . . . and inaccuracies and omissions . . . were the result of the Agency’s failure to consult with and utilize the assistance of the outside scientific community . . .” The committees directed EPA to respond to the SAB’s concerns and consult with scientists in other agencies in rewriting the risk characterization chapter. The House committee also restricted EPA from developing any new rules that raise or lower dioxin limits on the basis of the risk reassessment. As of July 1996, EPA was in the process of responding to the committees’, SAB’s, and the public’s comments. The risk characterization chapter is being subjected to a major revision and will be peer reviewed by external scientific experts prior to referral back to the SAB. The SAB will then be asked to evaluate EPA’s response to their suggestions and the adequacy of the additional peer review conducted on the draft report. Section 112(m) of the Clean Air Act Amendments of 1990 required EPA to determine if atmospheric inputs of pollutants into the Great Waters warrant further reductions of atmospheric releases and to report the agency’s findings to the Congress 3 years after the act’s enactment. The Great Waters program includes the Great Lakes, Lake Champlain, Chesapeake Bay, and the coastal waters. EPA made its first report to the Congress in May 1994. The scientific and technical data in this report, Deposition of Air Pollutants to the Great Waters: First Report to Congress, were peer reviewed by 63 reviewers. The reviewers represented a number of different perspectives, including academia, industry, environmental groups, EPA offices, other federal and state agencies, and Canadian entities. According to the Great Waters Program Coordinator, the reviewers were given copies of all the report chapters, except the conclusions and recommendations chapter, so that they could prepare for a peer review workshop. The reviewers then met to discuss the report and provide EPA with their views. EPA expended a great deal of effort to ensure that the science in the report was peer reviewed; however, the program coordinator said the agency did not have the conclusions and recommendations chapter peer reviewed. The decision not to peer review this chapter was based on the belief by those directing the program that these were the agency’s opinions based on the information presented and thus an inherently governmental function not subject to peer review. However, others within EPA believe that nothing should be withheld from peer review and said that the conclusions should have been peer reviewed to ensure that they were indeed consistent with the scientific content. Residential unit pricing programs involve charging households according to the amount, or number of units, of garbage that they produce.
In accordance with the principle that the polluter pays, unit pricing provides a financial incentive for reducing municipal waste generation and enhancing recycling. EPA’s Office of Policy, Planning and Evaluation (OPPE) used a cooperative agreement to have an assessment prepared of the most significant literature on unit pricing programs to determine the degree to which unit pricing programs meet their stated goals. The paper, which was completed in March 1996, highlights those areas where analysts generally agree on the outcomes associated with unit pricing, as well as those areas where substantial controversy remains. Unit pricing is still voluntary in the United States, according to the project officer; however, he said EPA believes that the more information that municipalities have readily available as they make long-term solid waste landfill decisions, the more likely these local governments are to employ some form of unit pricing as a disincentive to the continued unrestrained filling of landfills. The OPPE project director had the report internally peer reviewed by three EPA staff knowledgeable about unit pricing. The report was not externally peer reviewed, he said, because it is designed to be used only as a reference guide by communities that are considering implementing some type of unit pricing program to reduce waste, and because EPA does not intend to use the report to support any regulatory actions. The Alaska Juneau (AJ) Gold Mine project was a proposal by the Echo Bay, Alaska, company to reopen the former mine near Juneau. The proposal entailed mining approximately 22,500 tons of ore per day and, after crushing and grinding the ore, recovering gold through the froth flotation and carbon-in-leach (also called cyanide leach) processes. After the destruction of residual cyanide, the mine tailings would be discharged in a slurry form to an impoundment that would be created in Sheep Creek Valley, four miles south of downtown Juneau. An environmental impact statement was prepared on the proposal in 1992. Because the project would require permits for fill materials and discharging wastewater into surface waters, EPA’s regional staff developed a model to predict the environmental ramifications of the proposal. According to regional staff, a careful analysis of the proposal was important because the issues in this proposal could potentially set a precedent for similar future proposals. EPA went through three iterations of the model. The first model was presented in a report entitled A Simple Model for Metals in the Proposed AJ Mine Tailings Pond. The report was reviewed by an engineer in EPA’s Environmental Research Laboratory and a firm that worked for the City and Borough of Juneau. The second model was a customized version of one developed by EPA’s Research Laboratory. After receiving comments from the firm representing Echo Bay, ORD laboratories, the Corps of Engineers, and others, EPA decided to also use another model to evaluate the proposal’s potential environmental effects. In 1994, EPA prepared a technical analysis report on the proposal. The report received peer review by several of the same individuals who commented on the models, as well as others. Although the reviewers had expertise in the subject matter, several were not independent of the product’s development or its regulatory and/or financial ramifications. Based partially on the model’s predictions, it became evident that EPA would withhold permit approval for the project. 
Accordingly, Echo Bay developed an alternative design for its project. In May 1995, EPA hired a contractor to prepare a supplemental environmental impact statement that will assess the revised project’s ecological effects. The agency plans to have the impact statement peer reviewed. Under the Resource Conservation and Recovery Act (RCRA), EPA is not only responsible for controlling hazardous wastes but also for establishing procedures for determining when hazardous wastes are no longer a health and/or ecological concern. As such, EPA’s Office of Solid Waste (OSW) developed a new methodology for establishing the conditions under which wastes listed as hazardous may be delisted. This methodology was presented in an OSW report, Development of Human Health Based and Ecologically Based Exit Criteria for the Hazardous Waste Identification Project (March 3, 1995), which was intended to support the Hazardous Waste Identification Rule. The intent of this rule is to establish human health-based and ecologically based waste constituent concentrations—known as exit criteria—for constituents in wastes below which listed hazardous wastes would be reclassified and become delisted as a hazardous waste. Such wastes could then be handled as a nonhazardous solid waste under other provisions of RCRA. OSW’s support document describes a proposed methodology for calculating the exit concentrations of 192 chemicals for humans and about 50 chemicals of ecological concern for five types of hazardous waste sources; numerous release, transport, and exposure pathways; and for biological effects information. “The Subcommittee is seriously concerned about the level of scientific input and the degree of professional judgment that, to date, have been incorporated into the methodology development. It was clear to the Subcommittee that there has been inadequate attention given to the state-of-the-science for human and ecological risk assessment that exists within EPA, let alone in the broader scientific community, in the development of the overall methodology, the identification of individual equations and associated parameters, the selection of models and their applicability, and the continual need for sound scientific judgment.” The SAB also noted that further peer review of individual elements of the proposed methodology is essential before the scientific basis can be established. The SAB concluded that the methodology at present lacks the scientific defensibility for its intended regulatory use. According to SAB’s Executive Director, this is a case where the program office’s decision to not conduct a peer review of the key supporting elements of a larger project resulted in extra cost and time to the agency, as well as missed deadlines. He pointed out that the experience on this one effort had now, he believed, caused a cultural change in the Office of Solid Waste, to the extent that they now plan to have peer consultation with the SAB on several upcoming lines of effort. Mobile 5A, also known as the mobile source emissions factor model, is a computer program that estimates the emissions of hydrocarbons, carbon monoxide, and nitrogen oxide for eight different types of gasoline-fueled and diesel highway motor vehicles. The first mobile model, made available for use in 1978, provided emissions estimates only for tailpipe exhaust emissions from passenger cars. 
Since that time, major updates and improvements to the mobile model have resulted in the addition of emissions estimates for evaporative (nontailpipe exhaust) emissions and for uncorrected in-use deterioration due to tampering or poor maintenance, according to the OMS Emission Inventory Group Manager. Also, other categories of vehicles, such as light-duty trucks and motorcycles, have been added over the years, she said. The development of the next-generation model, Mobile 6, is currently under way. As with other models, the mobile model exists because precise information about the emissions behavior of the approximately 200 million vehicles in use in the United States is not known, according to the Group Manager. The primary use of the mobile model is in calculating the estimated emissions reductions benefits of various actions when applied to the mobile sources in an area. For example, the mobile model can estimate the impact of participating in a reformulated gasoline program, or of using oxygenated fuels in an area, or of requiring periodic inspection and maintenance of selected vehicle categories. In essence, the mobile model is one of the primary tools that EPA, states, and localities use to measure the estimated emissions reduction effectiveness of the pollution control activities called for in State Implementation Plans. None of the previous mobile models has been peer reviewed. However, EPA has obtained external views on the model through stakeholders’ workshops and experts’ meetings; one of the largest of these meetings involved over 200 stakeholders, according to OMS officials. The agency recognizes that these workshops and meetings are not a substitute for peer review and, in a reversal of the agency’s views of 10 months ago, EPA now plans to have Mobile 6 peer reviewed, they said. Several constraints, such as the limited number of unbiased experts available to do peer review in some fields and the resources for compensating reviewers, still have to be overcome, they added. Tributyl tin (TBT) is a compound used since the 1960s as an antifouling ingredient for marine paints. In the 1970s, antifouling paints were found to adversely affect the environment. Although restrictions were placed on TBT by the United States and a number of other countries in the 1980s, elevated levels of TBT continue to be found in marine ecosystems. In light of the uncertain human health and environmental effects of TBT, an interagency group consisting of EPA Region 10 officials, the Washington State Departments of Ecology and Natural Resources, the National Oceanic and Atmospheric Administration, the U.S. Army Corps of Engineers, and others was formed to derive a marine/estuarine sediment effects-based cleanup level (or screening level) for TBT. In April 1996, a contractor-prepared report was issued with recommended screening levels; EPA regional staff served as the project managers and made significant contributions to the revisions to and final production of the report. Although an EPA project manager maintains that the report was peer reviewed, the reviews did not meet the requirements of either EPA’s peer review policy or the region’s standard operating procedures for conducting peer reviews.
While the report was reviewed by members of the interagency group, other experts who provided input to the report, the affected regulated community, and the general public, there was not an independent review by experts not associated with preparing the report or by those without a stake in its conclusions and recommendations. When we explained to the project manager why EPA’s Science Policy Council characterized the report as not having received peer review, the project manager acknowledged that she was not familiar with either EPA’s peer review policy or the region’s standard operating procedures. EPA is currently in the process of responding to the comments it has received. Major contributors to this report: James R. Beusse, Senior Evaluator; Philip L. Bartholomew, Staff Evaluator.
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA): (1) progress in implementing its peer review policy; and (2) efforts to improve the peer review process. GAO found that: (1) although EPA has made progress in implementing its peer review policy, after nearly 2 years, implementation remains uneven; (2) while GAO found cases in which the peer review policy was followed, GAO also found cases in which important aspects of the policy were not followed or peer review was not conducted at all; (3) two primary reasons for this unevenness are: (a) confusion among agency staff and management about what peer review is, what its significance and benefits are, and how and when it should be conducted; and (b) inadequate accountability and oversight mechanisms to ensure that all relevant products are properly peer reviewed; (4) EPA officials readily acknowledged this uneven implementation and identified several of the agency's efforts to improve the peer review process; (5) because of concern about the effectiveness of the existing accountability and oversight mechanisms for ensuring proper peer review, EPA's Deputy Administrator recently established procedures to help build accountability and demonstrate EPA's commitment to the independent review of the scientific analyses underlying the agency's decisions; (6) these efforts are steps in the right direction; however, educating all staff about the merits of and procedures for conducting peer review would increase the likelihood that peer review is properly implemented agencywide; and (7) furthermore, by ensuring that all relevant products have been considered for peer review and that the reasons for those not selected have been documented, EPA's upper-level managers will have the necessary information to ensure that the policy is properly implemented.
Presidential Executive Order 12674, “Principles of Ethical Conduct for Government Officers and Employees” (government code of ethics), provides ethical guidelines to be followed in the executive branch of the federal government. Among the ethical standards prescribed in the order is that “Employees shall satisfy in good faith their obligations as citizens, including all just financial obligations, especially those such as Federal, State, or local taxes that are imposed by law.” The executive order, which was recently emphasized by the current administration in January 2001, continues to stress the ethical importance of federal workers’ complying with their federal tax obligations. Noncompliance by federal workers and annuitants could adversely affect the public’s perception of tax administration, government effectiveness, and the federal workforce. If the general public perceives that federal workers and annuitants can successfully evade their tax obligations, voluntary compliance, the foundation of the U.S. tax system, could be eroded. In 1992, IRS initiated the Federal Employee/Retiree Delinquency Initiative (FERDI), a program to identify the degree of compliance with federal tax laws among federal workers and federal annuitants. IRS began this program as a means to improve information on potential levy sources and in response to the presidential executive order. Since 1992, IRS has periodically matched its records of outstanding taxes and nonfiled tax returns against federal personnel records to identify federal workers and annuitants who either have outstanding taxes or have not filed their tax returns. IRS entered into agreements with the Defense Manpower Data Center, which receives personnel data files on many of the government’s active and retired civilian and military workers, and the U.S. Postal Service, which maintains and processes similar data for postal workers, to match these personnel records against a data file of outstanding taxes and unfiled tax returns monthly. Most agencies, accounting for over 95 percent of the federal workforce, participate in this matching process. For those federal agencies and entities that do not, including the National Security Agency, the Federal Bureau of Investigation, the Central Intelligence Agency, the Board of Governors of the Federal Reserve System, and legislative branch entities, IRS attempts to identify these employees through a separate matching of Wage and Earnings Statements (W-2s). However, this process has certain limitations. Agencies that participate in the matching process and agencies where IRS is able to perform a match using W-2 information annually receive a letter from IRS informing them of the number of employees with outstanding taxes or unfiled tax returns. These letters also contain IRS’ assessment of the agency’s rate of compliance. Because of restrictions imposed by confidentiality laws, these agencies do not receive information on the specific names of individual employees whom IRS has identified as not complying with the nation’s tax laws. The broad objectives of FERDI are to enhance the federal government’s tax administration process by improving the compliance of federal employees and annuitants with their responsibility for filing tax returns and paying taxes, thereby helping to ensure the public’s confidence in the tax system. The program combines outreach to federal agencies to raise their awareness of this issue with prioritization of IRS’ efforts to reduce its unpaid tax cases.
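The matching process described above can be pictured with a minimal sketch, written in Python for illustration only. The field names, the use of a taxpayer identification number as the join key, and the sample records are assumptions made for the example; they are not drawn from IRS’ actual FERDI systems.

```python
# Illustrative sketch only: matching a file of federal personnel records against
# a file of outstanding taxes and unfiled returns. All field names and data are
# hypothetical; this is not a description of IRS' actual FERDI systems.

personnel_records = [
    {"tin": "111-11-1111", "agency": "Agency A", "status": "active"},
    {"tin": "222-22-2222", "agency": "Agency A", "status": "annuitant"},
    {"tin": "333-33-3333", "agency": "Agency B", "status": "active"},
]

delinquency_records = [
    {"tin": "222-22-2222", "issue": "balance due", "amount": 4500.00},
    {"tin": "333-33-3333", "issue": "unfiled return", "amount": 0.00},
    {"tin": "999-99-9999", "issue": "balance due", "amount": 1200.00},  # not a federal worker
]

def delinquency_counts_by_agency(personnel, delinquencies):
    """Count, for each agency, the employees or annuitants who appear in the delinquency file."""
    delinquent_tins = {record["tin"] for record in delinquencies}
    counts = {}
    for person in personnel:
        if person["tin"] in delinquent_tins:
            counts[person["agency"]] = counts.get(person["agency"], 0) + 1
    return counts

# Consistent with the confidentiality restriction described above, each participating
# agency would be told only its count of noncompliant employees, not their names.
print(delinquency_counts_by_agency(personnel_records, delinquency_records))
# {'Agency A': 1, 'Agency B': 1}
```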
Because of the potential ethical concerns and public perceptions related to federal employees and annuitants who do not comply with their tax responsibilities, IRS until recently maintained what it referred to as a “zero tolerance” policy for these cases. Specifically, until January 2001 IRS’ policy was generally to actively pursue all known noncompliance cases involving federal workers and annuitants, without prioritizing by amount involved or potential for collection. In January 2001, IRS changed its prioritization system for FERDI cases, which now receive the same priority as cases involving the general population. According to IRS records, as of October 1999, over 390,000 federal workers and annuitants, or 4.5 percent of the total 8.7 million on-roll federal worker and annuitant population, owed about $2.5 billion in unpaid federal taxes. IRS records indicate that another 65,000 federal workers and annuitants had not filed tax returns and were identified by IRS as potential nonfilers. In total, IRS records indicated that as of October 1999, over 5 percent of the federal population had outstanding taxes, had not met their tax filing responsibilities, or both. This percentage compared favorably with the general population: IRS’ records indicated that as of October 1999, over 8 percent of the general population owed amounts to the government for unpaid taxes, had not filed tax returns, or both. Information recently reported by IRS indicated that, as of October 2000, 340,000 federal workers and annuitants owed cumulative unpaid taxes of about $2.5 billion, and another 85,000 federal workers and annuitants had not filed tax returns. This information indicated that, as of October 2000, slightly less than 5 percent of the federal worker and annuitant population owed taxes or had not filed tax returns as required, compared to a little over 7 percent for the general population. Based on these percentages, federal workers and annuitants appear to be more compliant than the general taxpayer population in meeting their tax obligations. However, these percentages and the amounts reported as owed to the federal government are affected by several factors. Not all taxpayers, including federal workers and annuitants, pay the amounts they owe the federal government. Some do not provide payments on their tax liability when they file their tax returns. Others underreport, either mistakenly or deliberately, the amounts they owe the government. Still others do not report the amounts they owe. To the extent that underreporting or nonreporting by taxpayers is not detected and corrected by IRS, the amount of unpaid taxes IRS identifies is understated. Conversely, not all amounts IRS identifies as unpaid taxes are actually owed by taxpayers; thus, the amount of unpaid taxes IRS identifies could be overstated. This is particularly true for cases in which IRS assesses additional taxes based on third-party-provided information, or when a taxpayer has not filed a tax return for a given period and IRS constructs a return for the individual based on third-party information. Erroneous third-party information can result in IRS’ erroneously assessing a taxpayer for amounts that are not owed. Also, when IRS assesses taxes based on third-party payment information, the assessed tax may be overstated because IRS cannot consider legitimate deductions that may apply and that could reduce or even eliminate the identified tax liability.
In addition, if IRS errs in applying taxpayer payments, its records could reflect a tax liability that has already been paid. In other instances, IRS’ unpaid assessments include amounts being contested by taxpayers. In some cases, the taxpayers may even be due a refund. It is also important to note that, for both the federal and the general populations, the percentages noted above and the reported amounts of unpaid taxes include balances taxpayers owe that are being paid under installment agreements. The amount of unpaid taxes owed by the federal population as of October 1999 and October 2000 included about $660 million and about $650 million, respectively, owed by taxpayers who were in installment agreements with IRS. If these federal workers and annuitants were excluded from the population of federal workers and annuitants who were considered to be noncompliant, the percentages of the federal worker and annuitant population who owed taxes or had not filed tax returns as required as of October 1999 and October 2000 would decline to 3.3 percent and 3 percent, respectively. IRS’ difficulty in better determining noncompliance stems from a number of issues, including significant deficiencies in its systems and processes, which lead to delays in identifying noncompliant taxpayers and to errors in taxpayer accounts, as well as resource allocation decisions and limitations. These issues are discussed later in this report under “Impediments Exist in Collecting Amounts Owed and Promoting Compliance.” According to IRS records, as of October 1999, the taxes owed by the over 390,000 federal workers and annuitants predominantly stemmed from their income. Nearly one-half of the outstanding amounts IRS reported as owed by these federal workers were identified through IRS’ enforcement programs. About one-third of these individuals owed taxes for more than one tax period and owed for extended periods of time, and about 56 percent of the total outstanding amounts dated back to before 1995. Federal annuitants accounted for 54 percent of the total outstanding amounts owed by federal workers and annuitants, while constituting 40 percent of the number of individuals with tax delinquencies. IRS employees were more compliant than the rest of the federal population; however, they are subject to special monitoring by IRS and can face substantial disciplinary actions for willful noncompliance. Our work indicates that a significant portion of the outstanding amounts owed by federal workers and annuitants is potentially uncollectible. The vast majority of federal workers and annuitants owe taxes stemming from the income they earn. According to IRS records, as of October 1999, over 99 percent of the accounts owed by federal workers and annuitants were attributable to individual income taxes. It is important to note that such income taxes are not necessarily solely attributable to federal salaries or pensions. Some income may be attributable to other sources such as secondary nonfederal income, a spouse’s nonfederal income, or gains on sale of property. Among the less than 1 percent of federal workers and annuitants whose outstanding taxes as of October 1999 were not related to their income, approximately 2,300 individuals owed the government penalty assessments totaling $155 million resulting from IRS’ finding them to be willful and responsible for the failure to remit amounts withheld from employee salaries for payroll taxes.
In some instances, these individuals were assessed for multiple periods of withheld but nonremitted payroll taxes—the 2,300 individuals owed outstanding penalties on 3,019 separate tax accounts. In one case we reviewed, we found that IRS had assessed a retired federal employee for withholding and not forwarding to the government payroll taxes he withheld from employees of two businesses he started after retiring. In each of these two businesses, the individual had withheld taxes from his employees’ salaries for 17 separate periods without forwarding the withheld funds to the federal government. IRS subsequently assessed the individual over $1.6 million in trust fund recovery penalty assessments. IRS records indicated that 48 percent of the cumulative amounts all federal workers and annuitants owed as of October 1999 was identified by IRS through its various enforcement programs. These amounts were attributable to nonfilers and underreporters and were not due to mathematical errors identified by IRS that were made by the taxpayers when preparing their tax returns. Our statistical sample of 140 unpaid tax cases involving federal workers and annuitants reinforces these statistics. In 55 of the cases (39 percent), some or all of the taxes owed were identified as a result of IRS’ enforcement programs, rather than through the taxpayers’ own reporting. By comparison, for the general population, IRS identified, through its various tax enforcement programs, 37 percent of the cumulative amounts owed according to IRS records as of October 1999. According to IRS records, 36 percent of federal workers and annuitants with outstanding unpaid tax assessments as of October 1999 owed taxes for multiple periods or years. This proportion was consistent with that of the general population; according to IRS records, about 37 percent of taxpayers in the general population with outstanding taxes as of October 1999 owed for more than one tax period. Over 390,000 federal workers and annuitants owed outstanding taxes on over 690,000 separate accounts, each account representing a tax period. Table 1 provides a breakdown of the federal workers and annuitants by number of tax accounts owed. In addition, most of the amounts owed by federal workers and annuitants had been outstanding for a number of years. As of October 1999, about 200,000 separate accounts (29 percent of the total number of accounts) related to taxes assessed for years before 1995. These accounts totaled about $1.4 billion and represented 56 percent of the nearly $2.5 billion total balance in tax assessments identified by IRS as owed by federal workers and annuitants. About 23 percent, or $576 million, dated back to before 1990. In contrast, as of October 1999, 79 percent of IRS’ total balance of unpaid assessments dated back to before 1995, and 40 percent pertained to amounts owed for tax years before 1990. Table 2 provides a breakdown of the number of accounts and associated outstanding balances by the year in which the tax was due. As our previous work on unpaid assessments shows, the longer a tax liability remains outstanding, the lower the likelihood that IRS will be able to collect the outstanding amount. Further, because IRS continues to accrue significant amounts of interest and penalties on these delinquent taxes as they age, additional amounts having a lower likelihood of being collected are added to IRS’ balance of unpaid assessments. 
IRS records indicated that 55 percent of the outstanding balance of unpaid taxes federal workers and annuitants owed as of October 1999 consisted of interest and penalties. As discussed earlier, according to IRS records, as of October 1999, over 5 percent of federal workers and annuitants had or potentially had outstanding federal taxes, had not filed tax returns and were thus potential nonfilers, or both. This percentage was fairly consistent between federal workers and federal annuitants: 5.5 percent for active federal workers and 5 percent for federal annuitants. However, according to IRS records, federal annuitants owed, on average, 50 percent more per account than active federal workers. While the average account balance for federal annuitants was $4,387, the average account balance for the active federal workers was $2,962. As a result, as indicated in table 3, federal annuitants owed 54 percent of the nearly $2.5 billion in unpaid taxes while accounting for 40 percent of the population. Several factors account for this difference. For one, federal and nonfederal retirees receiving civil service or private-sector retirement pension or annuity payments have the option to waive tax withholdings. This treatment contrasts with that for active employees, both federal and nonfederal, who cannot claim an exemption from withholding unless they meet certain conditions. The treatment of civil service and private-sector retirees also differs from that of U.S. Armed Forces annuitants, since periodic pension or annuity payments for the latter (as well as certain other types of payments) are defined as wages and thus are subject to income tax withholding. If annuitants elect not to have amounts withheld and do not make the appropriate financial adjustments, they increase the risk of finding themselves without the means to pay their tax obligations. Discussions with IRS officials at several field offices we visited, and many of the cases we reviewed in our statistical sample, indicate that one underlying cause of tax delinquencies by federal annuitants is the lack of withholding of amounts from pension payments throughout the year to ensure that the individual is not faced with a substantial tax liability at the end of the year. In 14 (19 percent) of the 73 unpaid tax cases we reviewed involving federal retirees, the lack of adequate tax withholdings or the absence of any withholdings contributed to substantial tax liabilities at the end of the year. Another factor contributing to the difference is that without automatic tax withholdings from pension payments and without the means to pay amounts due, annuitants’ accounts are often older than those of active federal workers. About 4 percent of the accounts and 15 percent of the outstanding balance owed by active federal workers as of October 1999 dated back to before 1990. In contrast, 9 percent of the accounts and 30 percent of the outstanding balance owed by federal annuitants predated 1990. Because penalties and interest continue to accrue on outstanding unpaid taxes, the longer an account remains outstanding, the greater the extent to which the original taxes are increased by the added penalties and interest. Over time, the penalties and interest can grow to the point where they significantly exceed the original balance due. 
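To make concrete how additions to tax can come to rival or exceed the original liability as an account ages, the following Python sketch accrues a monthly late-payment penalty and monthly compounded interest on a single unpaid assessment. The 0.5-percent-per-month penalty capped at 25 percent of the tax, the 8 percent annual interest rate, and the simplification of compounding interest on the tax alone are illustrative assumptions; they are not figures from this report or a precise restatement of the Internal Revenue Code.

# Illustrative only: rough accrual of a late-payment penalty and interest
# on an unpaid tax assessment. Penalty rate, cap, and interest rate are
# assumed parameters; interest on the penalty itself is ignored.
def accrue(original_tax, months, penalty_rate=0.005, penalty_cap=0.25,
           annual_interest=0.08):
    """Return a breakdown of tax, penalty, interest, and the additions' share."""
    penalty = min(penalty_rate * months, penalty_cap) * original_tax
    interest = original_tax * ((1 + annual_interest / 12) ** months - 1)
    total = original_tax + penalty + interest
    return {
        "original_tax": round(original_tax, 2),
        "penalty": round(penalty, 2),
        "interest": round(interest, 2),
        "total_owed": round(total, 2),
        "additions_share_of_total": round((penalty + interest) / total, 2),
    }

if __name__ == "__main__":
    for years in (1, 5, 10):
        print(f"{years:2d} years:", accrue(10_000, years * 12))

Under these assumed rates, additions account for roughly an eighth of the balance after 1 year, about two-fifths after 5 years, and roughly 60 percent after 10 years, which is consistent with the general pattern described above for older accounts.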
IRS records show that penalties and interest charges, both accrued and assessed, accounted for 59 percent of federal annuitants’ average account balance as of October 1999, compared with 50 percent of federal workers’ average account balance. IRS views compliance by its employees as critical to its mission as the nation’s tax collector. In its rules of ethical conduct, IRS expands on the ethical guidelines contained in Executive Order 12674 related to financial obligations. IRS’ rules of conduct specifically stress the requirement that its employees promptly and properly file all tax returns, and that properly filing tax returns includes providing the appropriate payments as reflected on the return. IRS bases this requirement on the fact that, by virtue of IRS’ mission, the public must have confidence in its integrity, efficiency, and fairness. IRS’ rules of ethical conduct do allow the employee the same rights with respect to tax issues as those afforded the general public, such as the ability to file an extension or enter into an installment agreement to pay any outstanding amounts. However, the rules specifically note that failure to adhere to the filing requirements may result in disciplinary action up to and including termination of employment. Also, the Internal Revenue Service Restructuring and Reform Act of 1998 (RRA98) imposed more stringent requirements on IRS employees, with some sanctions as severe as terminating their employment. Specifically, Section 1203 of the act cites two specific instances in which the commission of such violation could result in the employee’s termination: (1) willfully failing to file required tax returns, unless such failure is due to reasonable cause and not willful neglect (Section 1203(8)), and (2) willfully understating a tax liability, unless such understatement is due to reasonable cause and not willful neglect (Section 1203(9)). IRS has an Employee Tax Compliance Program to monitor the compliance of its workers with its filing and tax requirements. The program is designed to identify IRS employees who have filed or paid their taxes late, are delinquent in paying any balance due, or for whom IRS has no record of a tax return having been filed. The program is centralized at IRS’ Cincinnati Service Center, which periodically matches IRS’ automated personnel records against its master files—its detailed database of taxpayer accounts—and downloads any matches into a separate Employee Tax Compliance database. Program personnel review these data to identify the potential compliance issue, and if they determine an infraction has occurred, refer the issue to the employee’s labor relations office for review. Depending on the nature of the issue identified, certain disciplinary action may be warranted. It is important to note that potential non-Section 1203 violations are dealt with in a different manner. Examples of the potential non-Section 1203 issues and disciplinary actions are reflected in table 4. The policies and procedures for non-Section 1203 violations apply to all IRS employees regardless of grade level. The only distinction is that cases involving Senior Executive Service (SES) employees and GS-15 employees are handled at a central labor relations office at IRS headquarters. If IRS personnel responsible for the Employee Tax Compliance program determine that the violation falls within the provisions of Section 1203, the case is brought before a Central Adjudication Unit at IRS headquarters for review. 
If the unit determines that a Section 1203 violation exists, the case is brought before the IRS Commissioner’s 1203 Review Board for final disposition. The board, which is chaired by the IRS Deputy Commissioner for Operations, can either terminate the employee or recommend that the IRS Commissioner mitigate the disciplinary action. After the final determination, the employee has the right to due process and can appeal the final decision. From June 1999 through July 2000, 77 cases involving Section 1203 violations were brought before, and reviewed by, the Commissioner’s 1203 Review Board. Of these cases, 38 resulted in the dismissal of the employee, 29 resulted in disciplinary actions less severe than termination due to a finding of mitigating factors, and 10 were still pending disposition. Through its program, IRS identified 3,255 of its employees who either had outstanding taxes or had not filed tax returns as of October 1999. The 3,255 employees with outstanding taxes or unfiled tax returns represented about 3.3 percent of IRS’ overall population at that time. More recent information reported by IRS showed that as of October 2000, 2,975 of its employees, or 3.1 percent of its overall workforce at that time, either had outstanding taxes or had not filed tax returns. While the agency has employees it believes are not complying with the nation’s tax laws, these percentages reflect a better rate of tax compliance than those for the rest of the federal government and the nation’s taxpayers. As with the general population, not all amounts owed or identified by IRS as being owed by federal workers and annuitants are collectible. A review of IRS’ records and a statistical sample of cases from a subpopulation of the amounts owed by federal workers and annuitants indicate that a significant portion of the outstanding amounts owed by federal workers and annuitants is not likely to be collected. In reviewing cases in which IRS claims amounts are owed, we focused on the collectibility of such amounts and not on the legitimacy of IRS’ claims. IRS’ records indicate that the current status of many accounts makes collection of the outstanding taxes associated with these accounts doubtful. IRS classified about $390 million of the outstanding taxes owed by federal workers and annuitants as currently not collectible (CNC) because of various factors, such as (1) the taxpayer lacks the financial resources to pay the amounts owed, (2) the taxpayer is deceased, or (3) IRS is unable to contact or locate the taxpayer, despite the fact that these individuals are receiving federal salary or benefit payments. Also, about $180 million was owed by individuals who were in bankruptcy or other litigation proceedings as of October 1999. In total, $570 million of the outstanding amounts owed by federal workers and annuitants were classified by IRS as CNC or the taxpayers were in bankruptcy or involved in litigation. We reviewed a statistical sample of 152 unpaid taxes from a subpopulation of $861 million in outstanding taxes owed by federal workers and annuitants as of October 1999. Based on our review, we estimate that 32 percent of the outstanding balance of this subpopulation will likely be collected. In reviewing the cases we selected, we determined that 12 cases (8 percent) were not valid since no tax liability should have been recorded as outstanding as of October 1999. 
We determined that a case was invalid if (1) the tax assessment recorded against the taxpayer as of October 1999 was erroneous or (2) payments received before the October 1999 reporting date fully satisfied the tax liability. Consequently, of the 152 cases we reviewed, 140 represented valid tax liabilities of federal workers and annuitants as of October 1999. We categorized the remaining 140 selected sample cases as either uncollectible, partially collectible, or fully collectible, based on our estimate of collectibility for each case. Figure 1 provides a breakdown of the valid cases we reviewed by category. As figure 1 indicates, in 58 of the 140 valid cases (41 percent) we reviewed, we found evidence that IRS would likely collect some or all of the outstanding amounts. In contrast, for 82 cases (59 percent), we found no evidence to indicate that IRS would collect any of the outstanding amounts. IRS’ effectiveness in collecting the outstanding unpaid taxes federal workers and annuitants owe and in promoting these taxpayers’ compliance with their tax responsibilities is adversely affected by several significant impediments. These include significant systems and process deficiencies, which (1) affect its ability to promptly identify and assess taxes, and (2) affect the accuracy of taxpayer accounts; and resource allocation decisions and limitations, which may hinder IRS’ ability to both assess and collect taxes owed. These impediments, which impact IRS’ effectiveness in enforcing the tax code with respect to federal workers and annuitants, also affect IRS’ efforts to collect taxes owed and promote compliance among the general taxpayer population. IRS’ programs to identify underreporters or nonfilers can generally take years to identify and assess taxes, significantly hampering IRS’ ability to collect these taxes. In addition, we continue to report serious deficiencies in IRS’ financial management and operational systems and processes that affect the accuracy of taxpayer accounts. These conditions continue to result in unnecessary taxpayer burden and lost opportunities to collect amounts owed. We have previously reported on these issues and have provided recommendations for corrective action, including (1) ensuring IRS’ ongoing systems modernization effort includes the development of a subsidiary ledger to accurately and promptly identify, classify, track, and report all IRS unpaid assessments by amount and taxpayer, (2) manually reviewing and eliminating duplicate or other assessments that have already been paid off to assure all accounts related to a single assessment are appropriately credited for payments received, and (3) better monitoring its procedures requiring freeze codes be entered on all accounts of taxpayers IRS determines are potentially liable for unpaid taxes. IRS has acknowledged these issues and is working to address them. IRS uses various enforcement programs to identify individuals who have inaccurately reported or failed to report their tax liabilities. IRS’ underreporter program attempts to identify underreported taxes by verifying tax return data with other third-party-supplied information, such as wage and earnings statements. IRS’ nonfiler program attempts to identify taxpayers who failed to file tax returns. However, these programs can only potentially assess underreported or unreported taxes. The process of then determining whether amounts are, in fact, owed and then trying to collect these outstanding amounts from taxpayers is the other critical element involved. 
IRS’ various enforcement programs can take several years to identify taxes owed and assess them against an individual. Of the 140 valid federal worker and annuitant cases, 55 were cases in which IRS identified taxes owed through its various enforcement programs. Of these 55 cases, 15 cases (27 percent) took over 3 years and 4 cases (7 percent) took over 5 years from the date the taxes were initially due until IRS assessed the taxpayer for the outstanding amounts. In one case we reviewed, a federal employee had not filed tax returns for 4 years during the period from 1988 through 1994. For the unfiled 1988 return, IRS was able to construct a substitute tax return in late 1994, yet IRS then took another 6 months to record the unpaid tax assessment in the taxpayer’s account. During both our fiscal year 1999 and 2000 financial audits, we continued to find significant errors in taxpayer accounts. These errors included (1) failing to record payments received to all related taxpayer accounts, (2) delays in recording payments to related taxpayer accounts, and (3) delays in recording assessments in taxpayer accounts. The omissions and delays in recording activity resulted in numerous errors, such as issuing refunds to taxpayers who owed taxes and erroneously assessing taxpayers who were actually due refunds. These errors resulted in both a burden to taxpayers and lost revenue to the federal government. In our sample of federal worker and annuitant cases, we continued to find deficiencies in IRS’ systems and processes that affected the accuracy of taxpayer accounts. For example, we found a case in which, due to an IRS input error, a federal worker erroneously received a refund of $500,000 from IRS. IRS identified the mistake in June 1999 and assessed the individual for that amount. The individual returned the refund check, and the taxpayer’s account was corrected in October 1999. In another case, a federal employee did not file a tax return in 1994. IRS prepared a substitute tax return for this federal worker and used it as a basis for assessing the individual. However, in preparing the return, IRS used an erroneous W-2 that showed wages of $3,000,000. The taxpayer’s true wages were $17,000. The error was eventually detected when the revenue officer assigned to the case noticed that the wages seemed very high and requested a new W-2. We also found instances in which IRS had not promptly recorded payments received on outstanding tax account balances. In one case, a federal worker had established that he had paid his taxes in 1992, yet as of October 1999, IRS’ records still identified the individual as owing taxes. In total, in 12 of the 152 cases we reviewed, the tax assessment recorded against the taxpayer was either erroneous or the account should have had no outstanding balance because payments had already been received that fully satisfied the tax liabilities. Mistakes such as these erroneously inflate any measure of noncompliance for both the federal worker and annuitant population and the general population, and can result in burden to the taxpayer. As we have reported previously, IRS does not follow up on all cases that involve potential underreported or nonreported tax, nor does it always actively pursue cases with some collection potential. IRS attributes this to the need to allocate its limited resources among competing priorities. Nonetheless, this significantly impedes IRS’ ability to pursue collection of outstanding taxes owed and creates the potential for increased noncompliance. 
IRS does not investigate all tax returns identified as having potential underreported taxes. For example, for tax year 1996, IRS screened 155 million individual income tax returns and found that about 12 million (8 percent) had potential underreported taxes totaling at least $15 billion. However, IRS investigated only about 3.1 million (26 percent) of these returns, accounting for estimated underreported taxes due of about $5.2 billion (35 percent). Consequently, about $10 billion in potential underreported taxes went uninvestigated and thus will likely not be pursued for possible collection. More recent statistics show this is a continuing problem. IRS’ screening of individual tax returns for tax year 1998 identified over 14 million individual tax returns that had potential underreported taxes totaling $15.4 billion, yet IRS investigated only 2.5 million (18 percent) of these cases, accounting for about $6.5 billion (42 percent) of the total underreported taxes. This limited investigation activity results in billions of dollars in potential unpaid taxes annually that are not pursued. This limitation also affects IRS’ ability to accurately assess the level of noncompliance, both for the general population and for the population of federal workers and annuitants. In addition, IRS does not always actively pursue cases in which outstanding taxes have been assessed, resulting in potentially billions of dollars in lost revenue to the government. During both our fiscal year 1999 and 2000 financial audits, we found a number of cases that IRS was not actively pursuing, including some in which we noted that the taxpayer had financial resources to pay at least some of the amounts owed. IRS enforcement data indicate that from fiscal years 1997 through 2000, the number of case dispositions and the number of revenue officers available to work those cases declined. Enforcement activities such as lien filings, levy notices, and seizures all showed substantial declines during this period. IRS attributes its inability to pursue such collections to a decrease in staff, reassignment of collection employees to support customer service activities, and additional staff time needed to implement certain taxpayer protections that were included in RRA98. Despite IRS’ “zero tolerance” policy then in effect for federal workers and annuitants with outstanding taxes, we also found cases in our sample in which IRS was not actively pursuing some federal workers and annuitants who had resources that could have been used to pay some of the amounts owed. Further, among the accounts making up the $390 million in outstanding taxes owed by federal workers and annuitants that IRS classified as CNC, about 580 cases, with a total outstanding balance of over $1.8 million, appeared on IRS’ records as closed due to resource and workload constraints, despite IRS policy that all federal worker and annuitant cases be actively pursued. As we have previously acknowledged, like any large agency, IRS is confronted by the ongoing management challenge of allocating its limited resources among competing priorities. However, IRS does not have the management data necessary to prepare reliable cost-benefit analyses to ensure that its resource allocation decisions are appropriate. We have previously reported on this issue and recommended that, using the best available information, IRS develop reliable cost-benefit data relating to its enforcement and collection programs. 
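The following Python sketch recomputes the investigation-coverage percentages cited above from the report's figures for tax years 1996 and 1998. The potential-tax-per-investigated-return line at the end is an assumed illustration of the kind of aggregate measure that reliable cost-benefit reporting might draw on; it is not a figure from the report.

# Coverage rates recomputed from the rounded figures cited in the text; the
# per-investigated-return measure is an assumed illustration only.
screening = {
    1996: {"flagged": 12_000_000, "investigated": 3_100_000,
           "potential_dollars": 15.0e9, "investigated_dollars": 5.2e9},
    1998: {"flagged": 14_000_000, "investigated": 2_500_000,
           "potential_dollars": 15.4e9, "investigated_dollars": 6.5e9},
}

for year, s in screening.items():
    case_coverage = s["investigated"] / s["flagged"]
    dollar_coverage = s["investigated_dollars"] / s["potential_dollars"]
    not_pursued = s["potential_dollars"] - s["investigated_dollars"]
    per_return = s["investigated_dollars"] / s["investigated"]
    print(f"TY{year}: {case_coverage:.0%} of flagged returns investigated, "
          f"{dollar_coverage:.0%} of potential dollars covered, "
          f"${not_pursued / 1e9:.1f} billion not pursued, "
          f"about ${per_return:,.0f} in potential tax per investigated return")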
Cost-based performance information on enforcement and collection activities, combined with an assessment of the benefits to be derived from such actions, could enable IRS to better judge whether it is optimizing its allocation of available resources among competing management priorities. IRS must consider the legal environment in which it operates in attempting to both collect from, and improve compliance by, federal workers. Specifically, IRS must adhere to laws governing the disclosure of taxpayer information. These laws have been established to protect the privacy of taxpayers, and IRS must work within this legal framework in its attempts to promote compliance among federal workers and annuitants. Section 6103 of the Internal Revenue Code (IRC) allows disclosure of taxpayer information to federal agencies in limited circumstances. For example, IRS is authorized to share taxpayer information to assist agencies in enforcing and determining eligibility requirements for child support programs, family assistance programs, and Medicaid. IRS can also share taxpayer information with agencies if the taxpayer has consented to the disclosure of this information to the agency. A federal agency becomes aware of an employee’s tax delinquency status if (1) the employee voluntarily discloses this information to the employer, (2) the employee enters into a payroll deduction agreement to pay off the outstanding tax debt, (3) IRS files a federal tax lien and the lien is brought to the attention of the employer, (4) the employer receives a summons from IRS regarding an employee tax liability, or (5) the employee is criminally charged with tax violations and these charges become public. IRS is authorized to collect outstanding taxes that federal employees owe by garnishing, or levying, the employees’ salaries. In these instances, IRS serves a Notice of Levy on the employing agency’s payroll office or agent. By law, IRS can communicate the names of these individuals to an agency’s payroll office for purposes of levying against an employee’s wages. However, whereas private nonfederal payroll offices are not prohibited from sharing such information with management, it is unclear whether federal workers in an agency’s payroll office can, in turn, communicate these names to the agency’s personnel office for follow-up action without violating IRC Section 6103. IRS questioned whether a federal agency’s payroll office could legally disclose the tax delinquency status of employees to the agency’s personnel or labor relations offices for appropriate review and, if warranted, disciplinary action. In late December 1999, both IRS’ legal counsel and the U.S. Department of Justice concluded that, while such use of return information may be permissible, the issue is a close legal question and IRS should thus not encourage this practice. Instead, both IRS’ legal counsel and the Department of Justice concluded that IRC Section 6103 should be amended to specifically permit IRS to disclose information on the tax delinquency status of federal employees to the head of the employing agency to determine if an ethics violation has occurred. RRA98 required both the Joint Committee on Taxation and the Secretary of the Treasury each to conduct a study on the scope and use of IRC Section 6103 provisions regarding taxpayer confidentiality. The Joint Committee’s study was issued in January 2000 and contained no recommendations on amending the existing provisions of Section 6103. 
The Treasury study, which was issued in December 2000, recommended amending Section 6103 with respect to sharing information on federal employee tax delinquencies with the employing agency. Specifically, the study recommended that Section 6103 be amended to clarify that federal employees working in federal payroll offices who receive tax information pursuant to Section 6103(k)(6) are not subject to redisclosure restrictions of Section 6103 for such information. If enacted, this recommendation would, for example, permit payroll employees to disclose to agency management information received in connection with the placement of a levy on an employee’s wages. IRS’ FERDI program was intended to identify and highlight the degree of compliance with federal tax laws among federal workers and annuitants and, in so doing, to assist IRS in improving compliance among this segment of the taxpayer population. However, it is unclear what impact this program has had in increasing tax compliance by federal workers and annuitants. While the FERDI program has been in place since 1992, IRS has not assessed the effectiveness of the program in meeting its intended objectives. Also, IRS has not determined the degree to which participating agencies communicate to their workforces the information IRS provides them on the results of the program matches. According to IRS records, since 1995 the percentage of the federal worker and annuitant population that either owes or potentially owes taxes or has not filed tax returns has fluctuated between 4.7 percent and 5.6 percent and has not shown a consistent trend toward an increase in compliance. There is no information available on the percentage of federal workers and annuitants with actual or potential tax liabilities or unfiled tax returns before the FERDI program was implemented that could be used as a benchmark. Also, IRS has refined its analyses over the last several years. Thus, it is difficult to draw any conclusions from trend data in determining the effectiveness of the program. As discussed earlier, agencies that participate in the FERDI program and agencies for which IRS is able to match its records of outstanding taxes or unfiled tax returns using W-2 information annually receive a letter from IRS informing them of the number of employees with outstanding taxes or unfiled tax returns. These letters also contain IRS’ assessment of the agency’s rate of compliance. However, IRS has not followed up with agencies to determine whether and in what manner the results of the matching process are communicated to agency employees. Such information could help IRS assess the degree of correlation, if any, between agencies that proactively communicate the results of the matching process to their workforce and improved rates of compliance. The Taxpayer Relief Act of 1997 allows IRS, through Treasury’s Financial Management Service (FMS), to collect on outstanding tax obligations by applying a continuous levy of up to 15 percent against certain federal payments to be made to individuals and businesses. The continuous levy program began a phased-in implementation in July 2000. This program should assist in collecting some of the outstanding taxes owed by federal workers and annuitants. However, not all federal payments are presently covered under the program, and the levy provisions may be insufficient to allow for full repayment of many of the amounts these individuals owe. 
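The following Python sketch shows, in simplified form, how a continuous levy of up to 15 percent might be applied to a recurring federal payment. The status labels mirror the exclusion categories discussed in the paragraph that follows, and the ability-to-pay reduction is an assumed simplification; the actual determinations are made case by case by IRS and FMS, and this is not a representation of their systems.

# Simplified sketch of a continuous levy of up to 15 percent of one
# recurring federal payment. Status labels and the affordable-rate
# reduction are assumptions for illustration.
EXCLUDED_STATUSES = {
    "cnc_hardship",
    "cnc_deceased",
    "bankruptcy_or_litigation",
    "offer_in_compromise_pending_or_approved",
    "installment_agreement_pending_or_approved",
    "within_3_months_of_collection_statute_expiration",
}

def levy_amount(payment, account_status, max_rate=0.15, affordable_rate=None):
    """Return the amount withheld from one payment under the continuous levy."""
    if account_status in EXCLUDED_STATUSES:
        return 0.0  # account not subject to the levy
    rate = max_rate if affordable_rate is None else min(max_rate, affordable_rate)
    return round(payment * rate, 2)

# Example: a $2,000 monthly annuity payment
print(levy_amount(2_000, "collectible"))                        # 300.0
print(levy_amount(2_000, "collectible", affordable_rate=0.10))  # 200.0
print(levy_amount(2_000, "bankruptcy_or_litigation"))           # 0.0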
Payments subject to the continuous levy program will eventually include certain Social Security benefits, agency vendor payments, Railroad Retirement Board benefits, and federal salary and all retirement payments. In July 2000, Treasury began to levy vendor payments as well as certain federal retiree payments. Officials we spoke with at FMS have indicated that they expect to have certain Social Security benefits, civilian federal salaries that are paid through FMS, and military salary and pension payments under the program over the next year. This program, when fully implemented, should help IRS collect some of the outstanding amounts owed by federal workers and annuitants. However, it is important to note that some of the delinquent tax accounts would still not be subject to levy because of their current condition or status. For example, IRS and FMS exclude from levy delinquent taxpayer accounts that are currently not collectible due to hardship, currently not collectible because the taxpayer is deceased, in bankruptcy or litigation, subject to a pending or approved offer in compromise, subject to a pending or approved installment agreement, or within 3 months of their collection statute expiration date. In addition to these requirements, those payments that could be subject to the continuous levy program might not have the full 15 percent deducted from the payments, depending on IRS’ and FMS’ determination of how much the individual can afford. It is also important to note that the continuous levy program by itself is not designed to be a mechanism for promoting federal workers’ and annuitants’ compliance with their tax obligations. It may provide another tool for IRS to collect on delinquent accounts, but it is unclear whether it can assist IRS in its efforts to obtain voluntary compliance by federal workers and annuitants in fulfilling their tax obligations before delinquencies occur. Voluntary compliance with tax laws is the foundation of the U.S. tax system. This foundation can be eroded if the general public perceives that federal workers and former federal workers successfully evade their tax obligations. IRS records indicate that federal workers and annuitants, and IRS workers in particular, appear to be more compliant in meeting their tax responsibilities than the general population. Nonetheless, there are some federal workers and annuitants whom IRS records indicate are not fulfilling their tax responsibilities and owe the federal government about $2.5 billion in outstanding taxes. In its attempt to improve management and collection of federal taxes owed by federal workers and annuitants, IRS faces the same issues hindering its ability to manage and collect unpaid taxes of the general population. In particular, serious internal control and systems deficiencies, which prevent IRS from having the routine and reliable information it needs to make informed decisions, and IRS’ inability to quickly identify and pursue potential nonfilers, assess estimated federal taxes owed, and pursue collection of unpaid federal tax assessments, affect its ability to collect amounts owed and to improve compliance among the federal population, thus precluding it from more effectively enforcing the tax code. We have previously reported on these issues and made numerous recommendations as well as presented matters for congressional consideration to address them. 
In particular, we have recommended that IRS, as part of its systems modernization efforts, develop a subsidiary ledger to accurately and promptly identify, classify, track, and report all IRS unpaid assessments by taxpayer. We have also made several recommendations to improve the accuracy of taxpayer accounts and mitigate instances of both taxpayer burden and lost revenue to the federal government. In addition, we have recommended that (1) IRS develop the capability to routinely and reliably measure the costs and benefits of its various collection and enforcement activities in order to make informed resource allocation decisions and (2) the Congress consider requiring IRS to include in any budget request for additional resources for its various collection and enforcement activities reliable aggregate cost-benefit information. IRS has acknowledged these issues and is continuing to work to address a number of them. With respect to IRS’ efforts to improve compliance among federal workers and annuitants, IRS must first be able to determine how effective its program for this purpose has been and what, if any, modifications are needed to ensure that the program meets its objectives. This includes obtaining information on the degree to which agencies share information on agencywide tax compliance with their workforce and determining whether such information sharing can be linked to improved compliance. We believe efforts to enhance the rate of compliance of federal workers in particular have merit. While we had not previously participated in IRS’ FERDI program, we have taken the necessary steps to voluntarily participate in the program going forward. To determine the degree to which IRS’ program to improve compliance by federal workers and annuitants with their tax obligations is achieving its objectives and to identify any modifications needed in the program to better enable it to achieve its objectives, we recommend that the Commissioner of Internal Revenue assess the effectiveness of the FERDI program in promoting compliance by federal workers and annuitants with the nation’s tax laws and, as part of this assessment, determine the extent to which agencies communicate information on their compliance rates with their respective workforces, and whether such communication can be linked to improved tax compliance by agency employees. In commenting on a draft of this report, IRS stated that it recognized the impediments affecting its ability to collect taxes owed by federal workers and annuitants discussed in this report. IRS further stated its intention to use its ongoing modernization efforts and recent reorganization to improve its ability to manage and collect unpaid taxes of federal workers and annuitants. IRS also mentioned certain changes it recently made in its administration of FERDI, including transferring the program to its recently created Wage and Investment business operating division, centralizing all FERDI accounts in Automated Collection System (ACS) status into one ACS call site to improve case handling and customer service, and establishing the same priority for federal employee and retiree cases as used for cases of the general population. Regarding our recommendation to conduct an assessment of FERDI’s effectiveness in promoting compliance by federal workers and annuitants, IRS stated that it would explore the possibility of conducting a research study to assess the program’s effectiveness. 
IRS did stress the efforts it had made since 1993 to improve the program’s effectiveness and stated that it tracked delinquency rates by agency and category annually. IRS agreed with our recommendation to determine the extent to which agencies communicate their compliance rates with their respective workforces, and whether such communication can be linked to improved tax compliance by agency employees. IRS will address this recommendation by first requesting the needed information from the agencies. The complete text of IRS’ response to our draft report is included in appendix III. We are sending copies of this report to the Chairman and Ranking Minority Members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Governmental Affairs; Senate Committee on the Budget; Subcommittee on Treasury, General Government, and Civil Service, Senate Committee on Appropriations; Subcommittee on Taxation and IRS Oversight, Senate Committee on Finance; Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; and Subcommittee on International Security, Proliferation, and Federal Service, Senate Committee on Governmental Affairs. We are also sending copies of this report to the Chairman and Ranking Minority Members of the House Committee on Appropriations; House Committee on Ways and Means; House Committee on Government Reform; House Committee on the Budget; Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations; Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform; and Subcommittee on Oversight, House Committee on Ways and Means. In addition, we are sending copies of this report to the Chairman and Vice-Chairman of the Joint Committee on Taxation, the Commissioner of Internal Revenue, the Secretary of the Treasury, the Director of the Office of Management and Budget, and other interested parties. Copies will be made available to others upon request. If I can be of further assistance, please contact me at (202) 512-2600. This report was prepared under the direction of Steven J. Sebastian, Acting Director, Financial Management and Assurance, who can be reached at (202) 512-3406. Other contacts and key contributors to this report are listed in appendix IV. To determine the extent to which assessed taxes are not remitted to the federal government by federal workers and annuitants, we analyzed data from IRS’ FERDI file and from its accounts receivable dollar inventory (ARDI) system as of October 1999, as well as employee and annuitant personnel data from the Office of Personnel Management, to identify the following information relating to federal workers and annuitants: (1) the total number of unpaid federal tax accounts, (2) the total dollar amount of unpaid taxes, including tax assessment, interest, and penalties, (3) the age of these unpaid federal tax accounts, (4) the total number of federal workers and annuitants with unpaid federal taxes, and (5) the current status of the taxpayer accounts and the classification of these accounts as belonging to current employees or annuitants. We also considered information recently reported by IRS on the results of its FERDI matches as of October 2000. We did not specifically audit the data in IRS’ systems used in our various analyses and reviews. 
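The following Python sketch illustrates the kind of match-and-summarize analysis described above: joining delinquent-account records to a roster of federal workers and annuitants and tallying individuals, accounts, dollars, and account age. The field names, record layouts, and sample values are assumed for illustration only; they do not reflect the actual FERDI, ARDI, or OPM file structures.

# Assumed, illustrative layouts: a roster keyed by taxpayer identifier and
# one delinquent-account record per unpaid tax period.
from collections import defaultdict

federal_population = {  # taxpayer id -> worker or annuitant
    "A1": "worker", "A2": "annuitant", "A3": "worker",
}

delinquent_accounts = [  # one record per unpaid tax period
    {"tin": "A1", "tax_year": 1993, "balance": 4_200.0},
    {"tin": "A2", "tax_year": 1989, "balance": 9_800.0},
    {"tin": "A2", "tax_year": 1996, "balance": 1_500.0},
    {"tin": "ZZ", "tax_year": 1997, "balance": 700.0},  # not federal; dropped by the match
]

matched = [r for r in delinquent_accounts if r["tin"] in federal_population]

summary = {
    "individuals_with_unpaid_taxes": len({r["tin"] for r in matched}),
    "unpaid_accounts": len(matched),
    "total_unpaid_balance": sum(r["balance"] for r in matched),
    "balance_before_1995": sum(r["balance"] for r in matched if r["tax_year"] < 1995),
}

balance_by_status = defaultdict(float)
for r in matched:
    balance_by_status[federal_population[r["tin"]]] += r["balance"]

print(summary)
print(dict(balance_by_status))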
To determine how the level of outstanding taxes owed by federal workers and annuitants compares with that owed by the overall population, we matched the IRS FERDI and ARDI files as of October 1999 using those delinquent accounts and amounts present in ARDI and then analyzed the ARDI files for information about the overall population. To determine the effectiveness of IRS’ efforts in enforcing the tax code with respect to federal workers and annuitants, we reviewed a statistical sample of a subpopulation of federal employees and annuitants with unpaid taxes per IRS records as of October 2, 1999. As agreed to with our requesters, we used data in IRS’ records as of October 1999 because this was the latest available information on federal workers and annuitants and was the basis for IRS’ last published information on taxes pertaining to federal workers and annuitants at the time we commenced our fieldwork. Specifically, the objectives for our sample were to determine an estimate of the amount IRS could reasonably expect to collect on the subpopulation of unpaid assessment balances and to gauge the degree of IRS’ collection efforts by reviewing specific cases. The sample population was developed from the federal employee and annuitant caseload of six IRS field offices. These offices were selected based on their proportion of the dollar value of outstanding taxes owed by federal workers and annuitants to the total dollar value owed by the entire federal worker and annuitant population. These six offices together accounted for $861 million, or 35 percent of the total federal worker and annuitant unpaid assessments of $2.5 billion as of October 1999. While the sample of unpaid assessments was statistically representative of those taxpayers under the jurisdiction of the field offices included in the subpopulation, it is not strictly representative of the entire population of federal workers and annuitants with unpaid assessments because the sample, consistent with the agreement with our requesters, was not selected from that entire population. The population and associated amounts were obtained from the information contained in the FERDI file as of October 2, 1999. The FERDI file contains information on taxpayers for which either (1) a third-party information match identifies a potential nonfiler condition and a tax assessment has not been made against the taxpayer’s account or (2) IRS has assessed taxes based on a filed return or a completed nonfiler investigation or other investigation, and the taxes remain unpaid. Using the FERDI file, we summarized unpaid assessment balances in the following six selected IRS field offices: Los Angeles, Oakland, Laguna-Niguel, Baltimore, Richmond, and Atlanta. The field offices were selected based on the extent of unpaid tax assessment balances. From the subpopulation, we selected a statistical sample of unpaid taxpayer accounts on which to conduct detailed testing using a classical variables sampling approach. We used classical variables sampling to project a statistically valid estimate of the amount of unpaid assessments that IRS could reasonably expect to collect from that subpopulation. We stratified the population into five dollar ranges to (1) decrease the effect of variances in the subpopulation, (2) gain assurance that the sample amounts were representative of the subpopulation, and (3) obtain assurance that the resulting net collectible amount is a reliable estimate of the amount IRS can reasonably expect to collect. 
Separate random samples were then selected for four of the five strata. For the remaining stratum, which consisted of unpaid assessment items in excess of $500,000 individually, all items were selected for testing. We used a 95-percent confidence level and a planned precision level of plus or minus $96.6 million. This approach resulted in a total sample size of 152 unpaid tax accounts, totaling $47.3 million, or 5.5 percent of the subpopulation of unpaid assessments. To determine if and to what extent IRS could reasonably expect to collect the outstanding unpaid assessments for each sampled account, we examined detailed masterfile transcripts of the taxpayer’s accounts and IRS collection case files, which, when submitted, could include documentation of the taxpayer’s income and assets, earnings potential, other outstanding unpaid assessments, payment history, and other relevant collection information that affected our assessment of the taxpayer’s ability and willingness to pay. We also considered the extent and result of IRS’ documented efforts to collect the assessment amount. The methodology used was generally consistent with that used to estimate the collectibility of IRS’ unpaid assessments that represent federal taxes receivable under federal accounting standards, as reported by IRS in its annual financial statements. We projected the results of our assessments of the book value of the unpaid tax and collectibility for each sampled account to the subpopulation of FERDI unpaid assessments, using the Stratified Difference method. This projection yielded an estimate of the gross unpaid assessments amount with an achieved precision of $64.8 million and an estimate of the collectible amount with an achieved precision of $78.8 million. To further understand federal worker and annuitant delinquencies, we supplemented the sample of 152 cases with a nonrepresentative selection of 32 additional federal worker and annuitant cases in which the individual had multiple periods of outstanding taxes, although these were not considered in projecting our estimate of collectibility to the subpopulation from which we sampled. To determine what impediments, if any, exist that affect IRS’ ability to collect the unpaid taxes owed by federal workers and annuitants, we conducted interviews with IRS revenue officers, group managers, FERDI program personnel, and attorneys from IRS’ Office of Chief Counsel. We reviewed Section 6103 of the Internal Revenue Code and the Internal Revenue Service Restructuring and Reform Act of 1998 (RRA98), as well as the Study on Present Law Taxpayer Confidentiality and Disclosure Provisions prepared by the Staff of the Joint Committee on Taxation and the Report to the Congress on Scope and Use of Taxpayer Confidentiality and Disclosure Provisions prepared by the Office of Tax Policy of the Department of the Treasury. Also, we obtained and reviewed available information from IRS on its FERDI and Employee Tax Compliance programs. To determine the ethics standards and codes of conduct federal workers and annuitants are required to follow, we conducted interviews with Office of Personnel Management (OPM) and Office of Government Ethics (OGE) personnel. To obtain an understanding of IRS’ process for ensuring compliance with provisions of federal tax laws among its employees, we interviewed key IRS employees responsible for the employee tax compliance program as well as employees responsible for implementing provisions of RRA98. 
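To make the projection step described above concrete, the following Python sketch expands sampled collectible amounts to a subpopulation by each stratum's sampling fraction. The strata boundaries, population and sample counts, and sampled dollar figures are assumed for illustration, and this simple expansion estimator is a stand-in for, not a reproduction of, the classical variables Stratified Difference estimator actually used.

# Assumed strata: population count (N), sample count (n), and collectible
# dollars found among the sampled cases in each stratum.
strata = [
    {"range": "under $5,000",      "N": 9_000, "n": 40, "sample_collectible": 35_000.0},
    {"range": "$5,000-$25,000",    "N": 4_000, "n": 40, "sample_collectible": 180_000.0},
    {"range": "$25,000-$100,000",  "N": 1_200, "n": 35, "sample_collectible": 600_000.0},
    {"range": "$100,000-$500,000", "N": 250,   "n": 27, "sample_collectible": 1_900_000.0},
    {"range": "over $500,000",     "N": 10,    "n": 10, "sample_collectible": 4_500_000.0},
]

estimate = 0.0
for s in strata:
    expansion = s["N"] / s["n"]  # each sampled case stands for N/n cases in its stratum
    estimate += expansion * s["sample_collectible"]

print(f"Estimated collectible amount in the subpopulation: ${estimate:,.0f}")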
We obtained copies of internal documents and discussed with IRS’ Office of Chief Counsel legal issues pertaining to the program. We obtained extracts of current and closed tax-related issues from IRS’ Automated Labor and Employee Relations Tracking System, IRS’ database system that tracks closed and ongoing potentially reportable IRS personnel issues. We also obtained a copy of IRS’ database of employee tax compliance cases that have been reviewed by IRS’ Commissioner’s 1203 Review Board, which was specifically created for the purpose of reviewing IRS employee tax compliance cases initially deemed to be violations of Section 1203 of RRA98. We analyzed the information contained in these databases to provide observations on the effectiveness of IRS’ process for ensuring compliance with federal tax laws among its employees. In conducting our work, we did not specifically assess IRS’ controls or the completeness and accuracy of IRS records, although we did make certain observations, contained in this report, from both our sample analysis of unpaid accounts and other work performed as part of our annual audits of IRS’ financial statements. We conducted our work at IRS’ national office in Washington, D.C., and at the Los Angeles, Oakland, Laguna-Niguel, Baltimore, Richmond, and Atlanta field offices from May 2000 through March 2001. We conducted our work in accordance with generally accepted government auditing standards. Our review of a statistical sample of 152 federal worker and annuitant tax cases identified 12 cases that were not valid unpaid tax cases as of October 1999. Of the remaining 140 cases, based on our review of available documentation contained in the case files, we categorized each case as either uncollectible, partially collectible, or fully collectible. The following subsections discuss the composition of each of these categories in more detail. Of the 140 valid cases of outstanding taxes owed by federal workers and annuitants that we reviewed, we determined, based on our review of IRS case files and other documentation, that 82 (59 percent) were uncollectible. The reasons for our conclusions are shown in figure 2. The 82 cases that we concluded were uncollectible were characterized as follows: In five cases, the taxpayers entered into installment agreements to pay the outstanding taxes. However, in three cases, the taxpayers had subsequently defaulted on the installment agreements, and in the other two cases, the agreements had been established or reestablished (subsequent to an earlier default) too recently to establish a payment history sufficient to estimate any collectibility. In seven cases, the taxpayers were in various stages of bankruptcy. In these cases, documentation in the case files provided no clear evidence that any payments that may arise from the bankruptcy proceedings would be available to pay the outstanding tax liabilities. In 15 cases, the taxpayers provided offers—called offers in compromise (OICs)—to pay off some of the outstanding amounts owed. However, in each case, documentation in the case files indicated that no amounts would be paid on the specific account we sampled or that collection was uncertain. For example, in seven of these cases, the taxpayer made an OIC that was pending review by IRS. However, the amounts offered would not be sufficient to pay any of the balance in our sample cases. 
In these instances, the taxpayers owed outstanding amounts for multiple accounts, and any payments that would be received from the taxpayers under the OIC would be applied to accounts with an earlier collection statute expiration date (CSED). In five other cases, IRS accepted the OICs, but again, the offer amounts were not sufficient to pay any of the balances owed in the sampled cases. Of the remaining three cases, the case file documentation for one case did not provide sufficient evidence that the taxpayer had the financial resources to pay the amounts offered, and the case files for the other two cases did not provide sufficient evidence that (1) IRS was likely to accept the offer and (2) the individual had the financial resources to pay the amount being offered. In 27 cases, IRS designated the accounts as CNC, primarily due to its assessment that the taxpayers did not have the financial resources to pay any of the outstanding taxes owed. In many instances, the individuals involved were retired federal employees, and evidence in the case files indicated that these individuals did not have the financial resources to pay the outstanding amounts owed. However, in one case we reviewed involving approximately $14,000 in outstanding taxes that IRS designated CNC, both the husband and wife were in the military, and documentation in the case file indicated that as recently as 1998, they reported combined income of over $140,000. In the remaining 28 cases, a variety of reasons existed as to why the amounts owed were considered uncollectible. For example, in three cases, IRS was actually obtaining regular payments resulting from levies against salaries and other sources, yet these payments would not be sufficient to pay any of the amounts owed in the sampled accounts before they reach their CSEDs. In seven cases, IRS had been unable to locate or contact the individuals, despite their receiving regular federal payments. In 13 cases, the documentation in the case files provided no evidence of any recent collection actions taken by IRS against the individuals. Based on our review of IRS case files and other documentation, we determined that 30 of the 140 valid cases we sampled (21 percent) were partially collectible. The reasons for our conclusions are shown in figure 3. The 30 cases that we concluded were partially collectible were characterized as follows: In seven cases, the taxpayers entered into installment agreements to pay the outstanding taxes. However, in these cases, the amounts stipulated to be paid under the terms of the installment agreements would not be sufficient to repay all of the taxpayer’s outstanding balances and associated penalties and interest before the statutory collection periods expire, which is in violation of the Internal Revenue Code. In some of these cases, the taxpayers owed amounts for multiple years. Because IRS applies payments received under the installment agreements to the accounts with the earliest CSEDs, only a portion of the payments IRS was expected to receive would be available to apply to the sampled cases. In one case we reviewed involving a federal annuitant with 6 years of outstanding tax liabilities who had entered into an installment agreement, only 3 percent of the $93,000 total balance of the sampled case would be paid before the CSED for the account expires, assuming that the individual continued to make payments under the terms of the installment agreement. In four cases, the taxpayers submitted OICs to pay less than the full amount owed to satisfy the outstanding taxes. 
In three of these cases, the offers were pending and, at the time of our review, had not been accepted by IRS. Our estimates of collectibility in these cases were based on payments received from the taxpayer after October 1999. In the fourth case, IRS accepted the offer of $110,000 to satisfy the outstanding balance of over $500,000 owed by the individual; the offer amount in this case represented 22 percent of the total balance owed by the taxpayer. In three cases, the taxpayers were in various stages of bankruptcy. In these cases, documentation in the case files indicated that some payments from the bankruptcy proceedings would partially pay the outstanding tax liabilities. We based this expectation on evidence that the taxpayers’ assets would be sufficient to make these payments. In six cases, IRS was receiving regular payments through levies against the individuals’ salaries, retirement payments, or other assets, yet these payments would not be sufficient to fully pay the outstanding amounts owed by these individuals before the accounts reached their CSEDs. In the remaining 10 cases, the estimates of collectibility were based on payments actually received after our sample date of October 1999 or on IRS’ retention of refunds that would otherwise be owed to the taxpayer on subsequent tax years to reduce the outstanding balance owed on the sample case. Specifically, in 9 of these cases, some payments were actually received from the individuals after October 1999. However, there was no other evidence in the case file to determine the source of these payments or the prospects for their continuation. In the remaining case, the taxpayer filed a tax return claiming a refund for a subsequent period. Instead of paying the refund, IRS applied the amount to the outstanding balance owed by the taxpayer. Of the 140 valid cases of outstanding taxes owed by federal workers and annuitants that we reviewed, we determined, based on our review of IRS case files and other documentation, that 28 of these cases (20 percent) were fully collectible. The breakdown of these cases is shown in figure 4. The 28 cases that we determined were fully collectible were characterized as follows: In 20 cases, the taxpayers entered into installment agreements to pay their outstanding taxes and were current in their payments under the terms of the agreements. The proceeds to be received by IRS under the installment agreements would be sufficient to repay the sampled account and any accounts the taxpayer had with an earlier statutory collection expiration date. In seven cases, the amounts owed had been fully paid off by the taxpayers subsequent to our sample date of October 1999. In four of these cases the amounts had been paid as part of installment agreements. In the remaining case, we determined the amount would be fully collectible based on (1) the small amount owed in relation to the taxpayer’s income, and (2) the taxpayer’s record of compliance and of typically receiving refunds in prior years which should be available in the future to offset this liability if payments are not subsequently made. Staff making key contributions to this report were William Cordrey, David Elder, Meafelia Gusukuma, Sophia Harrison, Barbara House, Ted Hu, Jeffrey Jacobson, Andrea Levine, Veronica Mayhand, Patrick McCray, Charles Payton, Michael Wetklow, and Mark Yoder.
VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders by phone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: fraudnet@gao.gov 1-800-424-5454 (automated answering system)
Voluntary compliance with tax laws, the foundation of the U.S. tax system, could be undermined if the public perceives that federal workers and former federal workers successfully evade their tax obligations. Internal Revenue Service (IRS) records indicate that federal workers and annuitants, and IRS workers in particular, appear to be more compliant in meeting their tax responsibilities than the general population. Nonetheless, IRS records indicate that some federal workers and annuitants are not fulfilling their tax responsibilities and owe the federal government about $2.5 billion in outstanding taxes. In its attempt to improve management and collection of federal taxes owed by federal workers and annuitants, IRS faces the same issues hindering its ability to manage and collect unpaid taxes of the general population. With respect to IRS' efforts to improve compliance among federal workers and annuitants, IRS must first be able to determine how effective its program for this purpose has been and what, if any, modifications are needed to ensure that the program meets its objectives.
Other transaction authority was created to enhance the government’s ability to acquire cutting-edge science and technology in part through attracting contractors that typically have not pursued government contracts because of the cost and impact of complying with government procurement requirements. Because other transactions are exempt from certain statutes, they permit considerable latitude by agencies and contractors in negotiating agreement terms. For example, other transactions allow the federal government flexibility in negotiating intellectual property and data rights, which generally stipulate each party’s rights to technology developed under the agreement. Because these agreements do not have a standard structure based on regulatory guidelines, they can be challenging to create and administer. The Homeland Security Act of 2002 authorizes two types of other transactions: (1) prototype and (2) research and development. Other transactions for prototypes are used to carry out projects to develop prototypes used to evaluate the technical or manufacturing feasibility of a particular technology, process, or system. To use other transactions for prototypes, federal statute requires that one of three conditions be met: (1) significant participation by a nontraditional contractor, (2) parties to the transaction other than the federal government will pay at least one-third of the total project cost, or (3) the Chief Procurement Officer determines that exceptional circumstances justify the use of an other transaction agreement. Other transactions for research and development are used to perform basic, applied, or advanced research and do not require the involvement of nontraditional contractors. Almost all of S&T’s other transaction agreements have been for prototype projects and were justified based on the involvement of nontraditional contractors. From fiscal years 2004 through 2008, S&T entered into at least 55 other transaction agreements to support 17 different projects. (For a description of the projects, see app. II.) DHS entered into 45 agreements in fiscal years 2004 and 2005, when it first began using other transactions to support prototype development projects, based on the Department of Defense’s (DOD) guidance and, in some cases, with assistance from DOD contracting officers. Currently, DHS’s Office of Procurement Operations provides all contracting support, including that for other transactions, to S&T. S&T contracting officers explained that they have been more selective in choosing to use other transaction agreements in recent years. Since 2006, DHS has entered into fewer new agreements each year, while continuing to fund work under the initial agreements entered into in 2004 and 2005. (See fig. 1.) As of April 2008, according to DHS data, 21 agreements were active—including 1 agreement entered into in fiscal year 2008—and 33 agreements were closed. In fiscal year 2007, other transactions accounted for about $124 million (about 17 percent) of S&T’s total acquisition activity of $748 million to fund and develop technology in support of homeland security missions. A small proportion of projects account for the vast majority of the funding for other transactions; in February 2008, we reported that the seven largest agreements accounted for over three-quarters of all obligations. DHS has used its other transaction authority to leverage the capabilities of nontraditional contractors in prototyping and research and development efforts.
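To make the statutory test concrete, the following is a minimal sketch of the three-condition eligibility check for prototype other transactions described above; the function name, parameter names, and example values are illustrative and are not drawn from DHS policy or systems.

```python
def prototype_ot_conditions_met(significant_nontraditional_participation: bool,
                                nonfederal_cost_share: float,
                                total_project_cost: float,
                                cpo_exceptional_circumstances: bool) -> bool:
    # At least one of the three statutory conditions must hold.
    cost_share_condition = (
        total_project_cost > 0 and nonfederal_cost_share >= total_project_cost / 3
    )
    return (significant_nontraditional_participation
            or cost_share_condition
            or cpo_exceptional_circumstances)

# Example: significant participation by a nontraditional contractor alone
# satisfies the statute, as was the case for almost all of S&T's agreements.
print(prototype_ot_conditions_met(True, 0.0, 10_000_000.0, False))  # True
```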
Most of S&T’s agreements have involved nontraditional contractors, including small businesses, at the prime or subcontractor level. The majority of the nontraditional contractors provided technologies or services that DHS described as significant to the efforts under S&T projects. S&T program managers stated that without the involvement of nontraditional contractors, some of the research efforts may not have been able to advance. We identified a total of 50 nontraditional contractors who participated in 44 (83 percent) of the agreements we examined, with multiple nontraditional contractors involved on 8 agreements. Half of these contractors had not recently worked for the government. Sixteen nontraditional contractors were prime contractors on agreements, while the other 34 were subcontractors. Nearly half of the nontraditional contractors were classified as small businesses. According to some S&T program managers, using the agreements reduced the administrative burden of working with the federal government and encouraged small businesses, in particular, to participate. Figure 2 shows the proportion of prime contractors and subcontractors by business size; the categories shown include small business subcontractors (12), large business subcontractors (12), and large business prime contractors (5). Planned obligations for 25 of the 44 agreements involving nontraditional contractors total $117 million, which is 40 percent of the total dollars obligated through these agreements. In describing the roles of the nontraditional contractors, the agreements and supporting documentation we reviewed identified the majority of these roles as significant to the project’s successful completion. Program staff, contracting officers, and contractor representatives also highlighted several technologies and services that nontraditional contractors provided to S&T through the use of other transaction authority. Several agreements that we reviewed identified significant technologies and services provided by nontraditional contractors. For example, one agreement with a nontraditional contractor—the sole participant on the project—noted that the contractor’s sensor technology would be used to develop prototypes designed to detect chemical warfare agents. The agreement stated that the resulting prototype would help first responders assess and monitor the risks in an area after a suspected or known chemical attack. Similarly, one nontraditional subcontractor was involved under an agreement to develop a prototype for delivering robust detection and geographic information about bioterror attacks. The agreement stated this subcontractor would have a significant level of participation and a substantial role in the project, and possessed unique skills and expertise in the area of DNA microarrays, which was identified as a core technology for the system. In addition, the subcontractor was identified as the leader for all bioagent detection laboratory testing for the project, as well as for designing and performing the lab tests for all critical items in the development of the system. Program managers said some of the projects pursued under the agreements could not have advanced without the contributions of nontraditional contractors. For example, S&T staff told us that one project, the development and testing of a prototype device to counter the threat of shoulder-fired missiles to commercial aircraft, required the participation of nontraditional contractors.
They said that the involvement of major commercial airlines and transport companies allowed S&T to test whether a certain military technology was suitable for a commercial application. In another case, the project manager said that the nontraditional contractor was the only company that held patent rights for the unique technology needed to develop a type of foot and mouth disease vaccine. According to the nontraditional contractor’s representative, the company would not have participated in the project under a FAR-based contract due to concerns about retaining intellectual property rights. The proportion of dollars obligated on each agreement for nontraditional contractors—which ranged from less than 1 percent to 100 percent—did not necessarily indicate the importance of the contractors’ contributions. For example, only 1 percent of one agreement’s obligations was allocated for work by a nontraditional subcontractor to develop chemical tests for a hazardous substance detection system. However, the prime contractor told us that this nontraditional contractor was the leading expert in the field and uniquely qualified to contribute to the project. In a similar example, only 3 percent of an agreement’s obligations were allocated for work by a nontraditional contractor to manufacture devices necessary for a mobile laboratory prototype. However, DHS considered these devices the heart of the project, and thus a significant contribution. Since we reported in 2004, DHS has continued to develop policies and practices for managing other transactions, issuing an operating procedure and a guidebook in May 2008, but has not fully addressed the need to assess its use of these agreements and maintain a contracting workforce. DHS has developed guidance and practices to minimize financial and program risks. However, DHS does not have information to systematically assess whether it is obtaining the full benefits of its other transaction authority. Finally, contracting officers with business acumen and training are critical to entering into and administering other transactions; however, it is unclear whether the present workforce is sufficient to support S&T’s operation. In 2004, we reviewed DHS policies and procedures and found they provided a foundation for using its other transaction authority, though refinements were needed. We reported that since the beginning of its use of other transactions, DHS has applied commonly accepted acquisition practices, such as using contractor payable milestone evaluations to manage other transaction agreements. Aspects of DHS’s review process for other transaction agreements are similar to those for contracts subject to the FAR. For example, DHS’s proposed sole source agreements must be explained and approved, and program and contracting offices, as well as its office of general counsel, review all proposed agreements. DHS’s guidance for prototype projects also encourages the use of fixed price agreements with fixed payable milestones to minimize financial and performance risks. We found that DHS has established fixed price agreements with fixed payable milestones in 44 of the 53 agreements we reviewed. Fixed price acquisitions generally transfer most of the financial risk to the contractor. The financial risk for both parties may be further limited in other transaction agreements by a provision that allows either the government or contractor to leave the program without penalty. 
In addition, the use of fixed price agreements mitigates concerns regarding cost controls, as the costs are fixed at the time the agreements are established. Payable milestones mark observable technical achievements or events that assist program management and focus on the end goal of the agreement. DHS guidance states that it is based on commercial best practices, in which the use of payable milestones gives industry opportunities to provide major input into milestone descriptions as well as the option to leave the program. One S&T program manager told us that a contractor opted to cancel an agreement at a payable milestone after determining it could no longer meet the goals of the program. DHS’s recent guidance also calls for considering when to include financial audit provisions in the agreements. Our 2004 report noted that the department lacked guidance on when to include such provisions—other than providing for access to GAO when the agreement is over $5 million. In May 2008, DHS issued a guidebook for the use of other transactions for prototypes, which now includes additional information on when audits should be conducted. Specifically, it states that audit provisions should be included when the payment amounts in the agreement are based on the awardee’s financial or cost records, or when parties other than the government are required to provide at least one-third of the total costs. The guidebook contains sample audit clauses that contracting officers should use or tailor to an individual agreement. The guidance also describes when these requirements apply to key participants other than the prime contractor. Two key benefits of using other transactions are to provide greater latitude in negotiating the allocation of intellectual property and data rights and to leverage the cutting-edge technology developed by nontraditional contractors. Knowledge gained from past projects supported by other transaction agreements could allow DHS to assess the extent to which these benefits are being obtained and inform planning to maximize benefits for future projects. Performance information can help agency managers to ensure that programs meet intended goals, assess the efficiency of processes, and promote continuous improvement. We have previously reported on the benefits of agencies using systematic methods to collect, verify, store, and disseminate information on acquisitions for use by their current and future employees. However, DHS does not have the data it needs to make such assessments and ensure that, in using other transactions, the benefits outweigh the additional risks. In our 2004 review, we found that S&T lacked the capacity to systematically assess its other transactions, and we recommended that DHS capture knowledge obtained during the acquisition process to facilitate planning and implementing future projects. While the S&T directorate now shares knowledge about the benefits derived from completed projects on an informal basis, DHS does not formally collect or share information about whether other transactions have been successful in supporting projects or what factors led to success or failure. In 2005, DHS hired a consultant to develop a “lessons learned” document based on the DOD’s experience using other transactions, and DHS has incorporated this into its other transactions training. S&T program representatives told us that their programs undergo regular management reviews; however, these reviews are not documented. 
DHS has not developed a system for capturing knowledge from its own projects, which may limit its ability to learn from experience and adapt approaches going forward. DHS also lacks the information needed to assess whether it is using other transaction authority to effectively negotiate intellectual property and data rights. While some agreements tailored the language on intellectual property and data rights to the particular needs of the project, we found that the language in most agreements was similar and that some of this language is generally applied to FAR-based contracts. For example, most agreements included standard FAR clauses for allocating intellectual property rights, such as giving all ownership of an invention to the contractor while maintaining a paid-up license that allows the government to use the invention; standard FAR language that gives the government the right to require a contractor to grant a license to responsible applicants or grant the license itself if the contractor refuses to do so; requirements for the contractor to submit a final report on the use of the inventions or on efforts at obtaining such use; and a standard data rights clause with an added provision that extends rights to state and local governments. Incorporating these clauses enables DHS to protect the government’s interest; however, the extent to which DHS needed these rights is unclear because the rationale for using these provisions and the anticipated benefits were not documented. Concerned that rights may be overestimated—and ultimately result in the government paying for unused rights and discouraging new businesses from entering into other transaction agreements—DOD issued guidance on intellectual property rights negotiations. We reported that DOD’s guidance called for consideration of factors such as the costs associated with the inability to obtain competition for future production, maintenance, upgrade, and modification of prototype technology, or the inability of the government to adapt the developed technology for use outside the initial scope of the prototype project. DHS’s May 2008 guidance for prototype projects includes similar areas of consideration to assist contracting officers in negotiating these rights, which could help to address this concern if implemented as intended. This guidance also provides that contracting officers, in conjunction with program managers, should obtain the assistance of the DHS Intellectual Property Counsel in assessing intellectual property needs. To better track procurement data from other transaction agreements, DHS has modified its procurement database to capture additional information. For example, DHS recently made changes to its database to allow the user, in part, to identify a prime contractor’s nontraditional status. However, the capacity of the database is limited because it is not designed to capture data to assess DHS’s use of other transactions—particularly on the extent of nontraditional contractors’ contributions. The procurement database also includes only new and active agreements, so DHS may have missed an opportunity to gather data on experiences from any inactive agreements not included in the database. As of April 2008, at least 10 agreements—almost 20 percent of all the agreements we reviewed—were not in the database.
In addition, the database does not contain information on the nature of the work performed by nontraditional contractors—either prime or subcontractors—or the funding allocated to nontraditional contractors. DHS’s guidance recommends reporting expenditures of government funds only if a cost reimbursement agreement is involved or the agreement involves cost-sharing. Most available data on the contributions of nontraditional contractors are maintained in hard copy files, but documentation on 19 of 44 agreements did not contain sufficient information for us to determine the planned obligations for nontraditional contractors. The unique nature of other transaction agreements requires staff with experience in planning and conducting research and development acquisitions, strong business acumen, and sound judgment to enable them to operate in a relatively unstructured business environment. DHS requires its other transaction contracting officers to hold a certification for the most sophisticated and complex contracting activities and to take training on the use of this authority. DHS has created training courses that provide instruction in the use of both FAR-based research and development contracting and other transaction agreements. The topics covered include intellectual property, foreign access to technology created under other transactions, and program solicitations. According to DHS representatives, between January 2005 and March 2008, approximately 80 contracting staff, including contracting officers, had been trained. DHS representatives also said they are developing a refresher course for staff who have already completed the initial training. DHS’s recently issued guidance also requires program staff to take training on other transactions. When DHS first began entering into other transaction agreements in fiscal year 2004, it relied upon contracting services from other agencies, such as the U.S. Army Medical Research Acquisition Activity, including staff who were experienced in executing other transaction agreements. Since fiscal year 2005, DHS has been granting warrants to permit its own contracting officers to enter into other transaction agreements and has issued these warrants to 17 contracting officers. Nine of these contracting officers have been assigned to support S&T; however, DHS has experienced turnover, and 4 of these S&T contracting officers have left DHS since February 2008. The Office of Procurement Operations does not have a staffing model to estimate how many contracting officers are needed to support S&T’s workload on an ongoing basis. Two S&T program managers, who each manage one agreement, told us that they had difficulty obtaining assistance from the procurement office for other transactions, and attributed this to inadequate staffing levels and turnover. Our prior work has noted ongoing concerns with regard to the sufficiency of DHS’s acquisition workforce to ensure successful outcomes. In 2003, we recommended that DHS develop a data-driven assessment of the department’s acquisition personnel resulting in a workforce plan that would identify the number, skills, location, and competencies of the workforce. In 2005, we reported on disparities in the staffing levels and workload imbalances among component procurement offices and recommended that DHS conduct a departmentwide assessment of the number of contracting staff. This recommendation has not been implemented.
As of February 2008, DHS reported that approximately 61 percent of the minimum required level and 38 percent of the optimal level of contract specialists were in place, departmentwide. We have ongoing work on acquisition workforce issues and initiatives at DHS and plan to report on the results of these efforts in the final product for that engagement. While other transaction agreements can carry the benefit of tapping into innovative homeland security technologies through nontraditional contractors, as they are exempt from federal procurement regulations, they also carry the risk of reduced accountability and transparency if not properly managed. DHS has successfully used its other transaction authority to attract nontraditional contractors to develop innovative technologies to address homeland security needs, and it continues to implement the policies and procedures needed to manage the inherent risks of these agreements. However, DHS continues to lack the resources—in terms of knowledge and workforce capacity—to ensure that its agreements are transparent and maximize their potential benefits. If other transaction authority is made permanent, it will be important for DHS to take a systematic approach to assessing its experience with other transaction authority and identifying and addressing contracting workforce needs. These steps would not only enable DHS to more strategically manage its agreements in the future, they also would provide Congress with useful information on the benefits of the authority. To promote the efficient and effective use by DHS of its other transactions authority to meet its mission needs, we recommend that the Secretary of Homeland Security direct the Under Secretary for Management and the Under Secretary for Science and Technology to take the following two actions: Collect relevant data on other transaction agreements, including the roles of and funding to nontraditional contractors and intellectual property rights, and systematically assess and report to Congress on the use of these agreements to ensure that the intended benefits of the authority are achieved. Direct the Office of Procurement Operations to work with the Science and Technology directorate to determine the number of contracting officers needed to help ensure a sufficient contracting workforce to execute other transaction authority. We provided a draft of this report to DHS for review and comment. In written comments, DHS concurred with our recommendations and provided some information on efforts under way to improve information on its use of other transaction authority. DHS’s comments are reprinted in their entirety in appendix III. DHS also provided technical comments that were incorporated where appropriate. In response to our first recommendation, that DHS collect relevant data on other transactions agreements, including the roles of and funding to nontraditional contractors and intellectual property rights, and systematically assess and report to Congress on the use of these agreements to ensure that the intended benefits of the authority are achieved, DHS stated that the Chief Procurement Officer is taking steps to improve the information DHS has on its other transactions. DHS reiterated changes it has made to its procurement data system which are described in our report. DHS also noted the information included in its annual report to Congress on S&T’s other transactions. 
For example, the report details the technical objectives of each other transaction including the technology areas in which the project is conducted. DHS also stated that it plans to revise its guidance to specify that the Office of Procurement Operations and S&T program management should formally collaborate in preparing its annual report to Congress, noting that this process can serve as a means of sharing “lessons learned” on the benefits of other transaction authority. While DHS stated that its report to Congress includes overarching assessment information, DHS does not systematically evaluate whether it is obtaining the full benefits of other transaction authority. For example, DHS did not specify how it will improve the availability of and systematically assess information related to the nature of the work being performed by nontraditional contractors, the funding allocated to nontraditional contractors, or areas considered in the negotiation of intellectual property rights. We continue to believe that these are key areas in which DHS should collect and evaluate data to determine whether the intended benefits of the authority are achieved. In response to our second recommendation, that the Office of Procurement Operations work with S&T to determine the number of contracting officers needed to help ensure a sufficient contracting workforce to execute other transaction authority, DHS stated that this issue can only be addressed as part of broader departmentwide acquisition workforce initiatives. DHS recognized the need to have contracting personnel, certified in the use of other transactions, in sufficient numbers to handle S&T’s workload as it arises, but noted that the workload does not lend itself to a static number of personnel. While we recognize that the workload for other transactions fluctuates, the Office of Procurement Operations does not have a staffing model that incorporates workload to estimate what level of contracting support is needed for other transactions on an ongoing basis. We continue to believe that this would help DHS managers ensure a sufficient contracting workforce to execute S&T’s other transaction authority. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have questions regarding this report or need additional information, please contact me at (202) 512-4841 or needhamjk1@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Amelia Shachoy, Assistant Director; Alexandra Dew; Russ Reiter; Matthew Voit; Tracey Graham; John Krump; and Karen Sloan. To determine the extent to which nontraditional contractors have been involved in other transactions with the Department of Homeland Security (DHS) to fulfill technology and mission needs, we obtained an initial list of agreements from DHS’s Office of Procurement Operations, the contracting office responsible for entering into these agreements; conducted a file review; and interviewed DHS’s Science and Technology (S&T) directorate’s program managers. As shown in table 1, we identified 53 of 55 agreements that we could review. Nontraditional contractors were identified in 44 agreement files, although not all had complete information. 
For example, 19 of these files did not include sufficient information to determine how much of the contract value was proposed to go to nontraditional contractors. We analyzed all available agreements and the contractors’ proposals to identify the nontraditional contractors, the contribution they plan to bring to the project, and the nontraditional contractors’ shares as identified in contractors’ proposals. However, DHS relies on contractors to self-certify their status as a nontraditional government contractor during agreement negotiation. In analyzing DHS’s agreements, we did not independently verify a contractor’s reported status as a nontraditional contractor other than to conduct a search of the Federal Procurement Data System-Next Generation (FPDS-NG) to determine whether these contractors had prior government work. Our limited review of FPDS-NG identified 25 contractors who had worked with the government in the previous year but found no contract actions that appeared to be subject to the cost accounting standards or that were for prototype or research projects in excess of $500,000. We also did not independently verify the share of costs allocated to nontraditional contractors or their contributions under the agreements. We determined nontraditional contractors’ business size by reviewing data from the Central Contractor Registration. With these data, we identified the business size of 39 of 50 nontraditional contractors. Of the remaining 11 firms, 1 firm did not have a business size identified and 10 were not listed in the database. In addition, we interviewed DHS contracting officers and S&T program managers to obtain their views on the contributions that the nontraditional contractors provided to the project. In addition, we also interviewed two prime contractors, one traditional and one nontraditional, to understand their experiences with entering into other transactions with DHS. To assess DHS’s management of the acquisition process when using other transactions, we reviewed and analyzed each available agreement file to assess the process and procedures used to negotiate and enter into the agreement. We reviewed DHS’s Management Directive 0771.1, Other Transaction Authority, dated July 8, 2005, and Procurement Operating Procedure 311, Other Transactions for Prototypes and the attached Other Transaction for Prototype Guidebook, dated May 22, 2008. We also interviewed contracting officers and program managers as well as a representative from DHS’s legal counsel to obtain an understanding of the review process. We reviewed each available agreement analysis to determine how the intellectual property and data rights were negotiated. We discussed with contracting and program representatives whether information is collected to assess the effectiveness and benefits of the use of other transaction authority or what lessons are learned from its use. We also reviewed DHS’s June 30, 2008, report to Congress on its use of other transaction authority, which includes information on 38 agreements. During the course of our audit work, we reviewed 15 additional agreements, including 1 agreement entered into after DHS’s reporting period. We reviewed DHS’s training material provided to contracting officers on the use of the other transaction authority. We also obtained information on the number of contracting representatives that have received this training and the number of those that have left DHS since 2005. 
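Returning to the screening of contractors’ prior government work described earlier in this appendix, the sketch below applies the two criteria noted above—contract actions subject to the cost accounting standards, or prototype or research actions in excess of $500,000—to a made-up set of contract actions. The data structure, field names, and example data are hypothetical; this is an illustration of the screening logic, not of how FPDS-NG is actually queried.

```python
from dataclasses import dataclass

@dataclass
class ContractAction:
    contractor: str
    subject_to_cas: bool   # whether the action was subject to cost accounting standards
    category: str          # e.g., "prototype", "research", "services" (hypothetical field)
    value: float

def disqualifying(action: ContractAction) -> bool:
    # Encodes the two criteria described above: an action subject to the cost
    # accounting standards, or a prototype/research action over $500,000.
    return action.subject_to_cas or (
        action.category in ("prototype", "research") and action.value > 500_000
    )

def plausibly_nontraditional(contractor: str, prior_year_actions: list) -> bool:
    # A contractor remains plausibly nontraditional only if none of its
    # prior-year actions meets a disqualifying criterion.
    return not any(
        disqualifying(a) for a in prior_year_actions if a.contractor == contractor
    )

# Hypothetical data: a small services order alone would not disqualify the firm.
actions = [ContractAction("Acme Sensors", False, "services", 120_000)]
print(plausibly_nontraditional("Acme Sensors", actions))  # True
```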
We also reviewed our prior reports on the use of other transaction authority at the Departments of Defense and Homeland Security. We conducted this performance audit from April through September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The S&T projects supported by other transaction agreements (see app. II) included the Autonomous Rapid Facility Chemical Agent Monitor (ARFCAM); BioAgent Autonomous Network Detector (BAND); Food Biological Agent Detection Sensor (FBADS); Instantaneous Bio-Aerosol Detection Systems (IBADS); Lightweight Autonomous Chemical Identification System (LACIS); hand-held chemical agent detectors; Low-Cost Bio-Aerosol Detection System (LBADS); Portable High-through-put Integrated Laboratory Identification System (PHILIS); Rapid Automated Biological Identification System (RABIS); Counter Man-Portable Air Defense Systems (CMANPADS); the Kentucky Critical Infrastructure Protection Institute (KCI), under the S&T Infrastructure Protection & Geophysical Science Division; and Prototypes and Technology for Improvised Explosives Device Detection (PTIEDD), under the S&T Homeland Security Advanced Research Projects Agency.
When the Department of Homeland Security (DHS) was created in 2002, it was granted "other transaction" authority--a special authority used to meet mission needs. While the authority provides greater flexibility to attract and work with nontraditional contractors to research, develop, and test innovative technologies, other transactions carry the risk of reduced accountability and transparency--in part because they are exempt from certain federal acquisition regulations and cost accounting standards. In 2004, GAO reported on DHS's early use of this authority. This follow-up report determines the extent to which nontraditional contractors have been involved in DHS's other transactions, and assesses DHS's management of the acquisition process when using this authority to identify additional safeguards. To conduct its work, GAO reviewed relevant statutes, guidance, and prior GAO reports on other transactions, and interviewed contracting and program management officials, as well as contractors. GAO also reviewed 53 files for agreements entered into from fiscal years 2004 through 2008 and identified those involving nontraditional contractors. DHS's other transactions documentation indicates that nontraditional contractors played a significant role in over 80 percent of the Science and Technology directorate's other transaction agreements. GAO identified 50 nontraditional contractors who participated in 44 agreements--one-third of them were prime contractors and about half of them were small businesses. These contractors provided a variety of technologies and services that DHS described as critical--including technology designed to detect chemical warfare agents after a suspected or known chemical attack. The proportion of dollars obligated for nontraditional contractors on an agreement did not necessarily indicate the importance of their contributions. For example, only 1 percent of total agreement obligations were allocated to a nontraditional subcontractor that, according to the prime contractor, was specially qualified for developing tests for a hazardous substance detection system. While DHS has continued to develop policies and procedures for other transactions, including some to mitigate financial and program risks for prototype projects, the department faces challenges in systematically assessing its use of other transactions and maintaining a skilled contracting workforce. DHS issued guidance in 2008 and continued to provide training to contracting staff on the use of other transactions. However, DHS does not track information on the amount of funds paid to nontraditional contractors or the nature of the work they performed, which could help the department assess whether it is obtaining the full benefits of other transaction authority. DHS recently updated its procurement database to capture information on other transaction agreements, but the database does not include all of the data DHS would need to assess nontraditional contractor involvement. Further, DHS's ability to maintain a stable and capable contracting workforce remains uncertain due to high staff turnover and the lack of a staff planning method.
GSA follows a prescribed process for the disposal of federal properties that are reported as excess by federal agencies—a process that can take years to complete. GSA first offers excess property to other federal agencies. If no federal agency needs it and no homeless assistance provider expresses an interest in it, the property becomes surplus and may be made available for other uses through a public benefit conveyance, under which state and local governments and certain nonprofits can obtain the property at up to a 100 percent discount of fair market value when it is used for public purposes, such as an educational facility. Ultimately, the property may be disposed of by a negotiated sale for public use or public sale based on GSA’s determination of the property’s highest and best use. GSA collects rent from tenant agencies, which is deposited in the Federal Buildings Fund (FBF) and serves as GSA’s primary source of funding for operating and capital costs associated with federal real property. Congress exercises control over the FBF through the appropriations process, which designates how much of the fund can be obligated for new construction and maintenance each fiscal year. According to GSA, capital funding has not kept pace with GSA’s need to replace and modernize buildings in its federal real property portfolio, which includes about 1,500 buildings. We have recently found that GSA and other federal agencies have pursued alternative approaches to address challenges with funding federal real property projects. One alternative approach is a swap-construct exchange between the federal government and a nonfederal entity, such as a private developer. GSA has several authorities to exchange federal property for constructed assets and, in 2005, was specifically authorized to exchange federal property for construction services. Swap-construct exchanges can be proposed by a nonfederal entity, such as a private developer or local government, or by GSA. GSA’s process for proposing and conducting a swap-construct exchange includes either proposing a swap-construct exchange to a nonfederal entity that has expressed an interest in acquiring a specific federal property or soliciting market interest through an initial proposal, often a request for information (RFI), followed by more detailed proposals. These more detailed proposals include requests for qualifications (RFQ) to identify qualified developers and requests for proposals (RFP). In a swap-construct exchange, the federal government transfers the title of the federal property to a developer or other property recipient after receiving a constructed asset or the completion of construction services at a different location. Swap-construct exchanges can involve swapping property and constructed assets or construction services that are of equal value or can include cash to compensate for a difference in value between the federal property and the asset or services to be received by the government. According to GSA, highest priority is assigned to swap-construct exchanges that involve exchanges of federal property of equal or greater value than the asset or services provided by the property recipient because these scenarios do not require appropriation of federal funding. Figure 1 describes GSA’s decision-making process for proposing swap-construct exchanges and the three scenarios that can result from an agreement for a swap-construct exchange.
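To make the three scenarios concrete, the sketch below classifies an exchange once both sides have been appraised; the dollar figures and function are purely illustrative and are not taken from GSA guidance or actual appraisals.

```python
def exchange_outcome(federal_property_value: float, construction_value: float) -> str:
    # Classify the exchange into one of the three scenarios based on
    # appraised values (both inputs are hypothetical figures).
    difference = federal_property_value - construction_value
    if difference > 0:
        # The federal property is worth more, so the recipient pays the
        # government the difference; no appropriation is needed.
        return f"Government receives a payment of ${difference:,.0f}"
    if difference < 0:
        # The property is worth less, so the government must cover the
        # shortfall, which would require appropriated funds.
        return f"Government pays ${-difference:,.0f} to cover the difference"
    return "Equal value: no cash changes hands"

# Hypothetical example: a $6.6 million property exchanged for $6.0 million
# in construction services.
print(exchange_outcome(6_600_000, 6_000_000))
```

Under GSA’s stated prioritization, the equal-value and government-receives-payment outcomes are preferred because neither requires an appropriation of federal funding.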
According to GSA, once the agency has decided to pursue an exchange for a newly constructed asset or services, it follows GSA’s 1997 guidance for real property exchanges of non-excess property. The guidance lays out a number of steps, including obtaining a property appraisal; using, if possible, one appraiser for all properties involved in the exchange; and analyzing and documenting all benefits and costs of the exchange to show why the exchange is in the best interest of the government. In November 2013, the GSA Inspector General issued a memo noting that GSA’s 1997 guidance is not specifically applicable to exchanges of real property for services. In responding to the memo, GSA stated that it was in the process of preparing guidance specific to exchanges for services. Since 2000, GSA has completed two swap-construct exchanges initiated by companies—Emory University Hospital Midtown (then called Emory Crawford Long Hospital) and H. E. Butt Store Property Company No. One (HEBSPC)—that were interested in acquiring specific federal properties in Atlanta, GA, and San Antonio, TX, respectively. A now-retired representative of Emory University Hospital Midtown and representatives of HEBSPC told us that they were satisfied with the end result of the exchanges, but added that there were challenges in the process that may affect future swap-construct exchanges. Specifically, the representatives told us that the exchanges took longer than anticipated, about 3 years in Atlanta and over 5 years in San Antonio, and that, consequently, less motivated parties may avoid or withdraw from future exchanges. GSA officials told us that both exchanges were a good value for the government because the properties and services received by the government were of equal or greater value than the federal properties disposed of in the exchanges. GSA officials added that the exchanges were a good value for the government because both of the assets disposed of were underutilized. However, these officials noted their lack of experience with swap-construct exchanges at the time. Atlanta Swap-Construct—In 2001, GSA exchanged with Emory University Hospital Midtown a federal parking garage in Atlanta (the Summit Garage), with 1,829 spaces on a 1.53-acre parcel, for a newly constructed parking garage (the Pine Street Garage) with 1,150 spaces on .92 acres (see fig. 2). GSA also received a commitment from the hospital to lease and manage the operations and maintenance of the Pine Street Garage for 16 years and to lease spaces in it to federal employees. According to GSA, at the time of the exchange, the Summit Garage was underutilized because it included more parking spaces than GSA needed. Although GSA utilized some of the extra spaces through a lease agreement with the hospital, which is located nearby, the garage was, GSA added, in deteriorating condition and was not in compliance with the Americans with Disabilities Act (ADA). According to GSA, the swap-construct exchange was in the best interest of the government because GSA received a new ADA-compliant garage with a direct covered connection to both the Peachtree Summit Federal Building and a Metropolitan Atlanta Rapid Transit Authority (MARTA) subway station in exchange for a garage that was underutilized and in deteriorating condition.
GSA added that the exchange was beneficial to the government because it included the hospital’s commitment to lease spaces not needed by the government for 16 years, with proceeds deposited into the FBF, and to cover operations and maintenance work typically covered by GSA. According to the now-retired representative of Emory University Hospital Midtown who was involved with the swap-construct exchange, the acquisition of the Summit Garage was crucial to accommodating a hospital renovation and expansion project. However, the hospital was aware that GSA needed parking spaces to accommodate federal tenants in the Peachtree Summit Federal Building, so it proposed the swap-construct approach to GSA. The representative added that, although the hospital was pleased with the end result of the transaction, the exchange took about 3 years to complete, a time that was longer than anticipated for the hospital and that may lead less motivated parties to avoid or withdraw from future exchanges. GSA officials noted that the agency had limited experience with this type of exchange, which may have contributed to the length of time required to complete it. The representative added that the exchange was also complicated in that the appraised value of the new garage and any additional services had to be equal to the appraised value of the Summit Garage. The now-retired representative added that to address the concern that the new and smaller garage might appraise for less than the Summit Garage, the hospital agreed with GSA to continue leasing spaces in the new garage and to cover operations and maintenance costs. As a result, the two parts of the exchange—the garages and lease agreements—were equally appraised at $6.6 million. According to GSA, although the hospital’s lease in the new garage expires in 2017, the size of the garage allows the agency to meet continued demand for federal parking in the vicinity of the Peachtree Summit Federal Building. San Antonio Swap-Construct—In 2012, GSA exchanged an approximately 5-acre federal property (the Federal Arsenal site) in San Antonio, TX, with HEBSPC for construction of a parking garage on existing federal land for the recently renovated Hipolito F. Garcia Federal Building and U.S. Courthouse (see fig. 3). According to GSA, at the time of the exchange, the Federal Arsenal site was an underutilized asset because of historical covenants limiting the ability to redevelop the land and its buildings and because it was located on the periphery of the city away from other federal assets. Although the property was partly utilized by GSA’s Fleet Management and through a lease agreement with HEB Grocery Company (HEB) for parking spaces, GSA officials told us that there was no anticipated long-term government need for it. According to GSA officials, the swap-construct exchange was in the best interests of the federal government because the government received a new federal parking garage for the Hipolito F. Garcia Federal Building and U.S. Courthouse in exchange for a property that was underutilized. HEBSPC representatives told us that the company was interested in acquiring the Federal Arsenal site to accommodate existing space needs and potential expansion of HEB’s corporate headquarters, which is near the site, and expressed this interest to GSA. The representatives added that although the historic covenants on the property presented some potential challenges, the company had prior experience renovating and utilizing historic properties on HEB’s headquarters property.
The representatives also told us that the company had a long-standing interest in acquiring the Federal Arsenal site prior to 2005, but during that time, the property could not be sold because it was being partly used by GSA’s Fleet Management. In 2005, however, GSA told HEBSPC about the need for additional parking for the Hipolito Garcia Federal Building and U.S. Courthouse and, subsequently, proposed the swap-construct exchange to HEBSPC, which had experience building parking garages. An official from one of the tenant agencies in the federal building and courthouse told us that the increased availability of parking with the new garage (150 new spaces compared with 35 existing spaces) was one of the reasons the agency decided to locate in the building. GSA officials told us that the availability of the new parking spaces is critical to further attracting tenants to the building, which is not fully occupied. HEBSPC representatives told us the company was pleased with the transaction and with GSA’s management of it. However, they added that they would have preferred that it be completed in less than the 5-plus years between the proposal and the exchange of properties, and noted that the time it took to complete the transaction may lead less motivated parties to avoid or withdraw from such exchanges. According to GSA officials, the transaction took longer than anticipated because GSA did not have significant experience to use as a basis for completing the transaction and because of fluctuations in real estate values due to the economic recession that required additional property appraisals to be completed. After four property appraisals between 2007 and 2009, GSA and HEBSPC ultimately valued the Federal Arsenal site at $5.6 million. According to GSA, the new parking garage was constructed to fully utilize the $5.6 million value of the property that HEBSPC received. Since August 2012, GSA has proposed six swap-construct exchanges—one that the agency proposed directly to the City of Lakewood, CO, and five in which GSA solicited market interest in exchanging federal property, totaling almost 8 million square feet, for construction services or newly constructed assets. After reviewing responses to these six proposals, GSA is actively pursuing three: (1) a potential exchange of undeveloped federal land in Denver with the City of Lakewood for construction services at the Denver Federal Center; (2) a potential exchange of the existing FBI headquarters building for a new FBI headquarters building; and (3) a potential exchange of two federal buildings in the Federal Triangle South area of Washington, D.C., for construction services to accommodate federal workers elsewhere in the city. According to GSA officials, although the agency has had authority to exchange property for construction services since fiscal year 2005 and had authority to exchange property for newly constructed assets prior to that, until recently there has been limited agency interest in using nontraditional property disposal and acquisition approaches, such as swap-construct exchanges. The officials added that since 2012 the agency has more widely pursued swap-construct exchanges to address challenges such as a rising number of agency needs and limited budgetary resources.
According to GSA officials, although the projects could involve exchanges of equal value, similar to the Atlanta and San Antonio exchanges, they could result in the government either receiving a payment or paying to cover any difference in value between the property to be exchanged and its construction projects. GSA decided to propose a swap-construct exchange to the City of Lakewood because the city had previously expressed interest in the undeveloped federal land, totaling about 60 acres, and because GSA had need for construction services at the nearby Denver Federal Center. A representative of the City of Lakewood told us that the city was supportive of the swap-construct approach because the services provided to GSA would support employment for the local population, whereas if the city were to purchase the property through a sale, the proceeds would not necessarily be spent locally. GSA told us that negotiations for a possible swap-construct exchange are ongoing. We found that respondents expressed openness or interest in the swap-construct approach regarding four of the five exchanges for which GSA solicited market interest, but generally this openness or interest was limited to the proposed consolidation of the FBI’s headquarters operations into a new location in exchange for the existing FBI headquarters building and land. Several responses to GSA’s RFIs did not address swap-construct and instead provided other information, such as the credentials of a particular developer and statements that GSA should ensure that affordable housing is included in the redevelopment of federal properties to be exchanged. Figure 4 describes swap-construct exchanges for which GSA solicited market interest and responses to its RFIs. For the proposed FBI headquarters swap-construct exchange, GSA officials told us that the agency anticipates identifying qualified developers by fall 2014 and awarding a contract to a developer for the transaction in summer 2015. For the proposed swap-construct exchange involving Federal Triangle South properties, GSA narrowed the scope of its proposed exchange after reviewing responses to its RFI. Specifically, in April 2014, the agency issued an RFQ to identify qualified developers for a potential exchange involving two of the five properties included in the RFI—the Cotton Annex and the GSA Regional Office Building—for renovations to GSA’s headquarters building and construction services to support the Department of Homeland Security’s headquarters consolidation in Washington, D.C. GSA officials told us that there was little or no market interest in potential swap-construct exchanges in Baltimore, MD (the Metro West building) and Miami, FL (the David W. Dyer Courthouse), and that different approaches were now being considered to address them. In addition, although GSA received some interest in a swap-construct exchange involving another property, the U.S. Courthouse at 312 N. Spring Street in Los Angeles (hereafter referred to as “the Spring Street Courthouse”), GSA officials said the agency may need to pursue other approaches for this property as well. The respondents to these potential exchanges expressed various concerns. For example, 4 of 9 respondents expressed concerns about the lack of detail regarding what GSA would expect in return for the federal property and 4 of 9 respondents expressed concerns about the amount of investment needed in the federal properties to make the exchange profitable for the property’s recipient.
Three RFI respondents and representatives of one nongovernmental organization familiar with GSA’s real property projects added that swap-construct may be a less viable approach in markets with a large number of alternative real estate options. According to developers and organizations familiar with GSA’s swap-construct proposals, the two exchanges for which GSA solicited market interest and that it is still pursuing generally benefit from the inclusion of federal properties located in an area with high real estate values and, thus, profitable redevelopment potential. Specifically, both properties are located in areas of Washington, D.C., near mass transit and prominent landmarks (see fig. 5). In addition, one of the potential projects—the consolidation of the FBI headquarters operations into a new location—benefits from a well-defined scope that sets out GSA’s expectations for the construction priority sought by the agency in exchange for the federal property considered in the proposal—the J. Edgar Hoover Building. In 2011, GSA estimated that a new FBI headquarters built on federal land would cost about $1.9 billion. According to GSA, this estimate is out of date. GSA officials told us that swap-construct exchanges can help GSA facilitate construction projects given a growing need to modernize and replace federal properties, shrinking federal budgets, and challenges getting funding appropriated from the FBF. Specifically, GSA officials noted that swap-construct exchanges allow GSA to immediately apply the value of a federal property to be used in the exchange to construction needs, rather than wait for funds to be made available from the FBF. GSA officials and a representative of a nongovernmental organization familiar with GSA’s real property projects added that the exchanges can be attractive for GSA because the agency can get construction projects accomplished without having to request full upfront funding for them from Congress. In addition, because swap-construct exchanges require developers or other property recipients to address GSA’s construction projects prior to the transfer of the title to the exchange property, federal agencies can continue to occupy the federal property during the construction process, eliminating the need for agencies to lease or acquire other space in the interim. GSA officials also told us that swap-construct exchanges can help advance a government-wide goal to consolidate agencies out of leased space into federally owned space. For example, according to GSA, about half of the FBI’s headquarters staff are located in the existing headquarters building and the potential swap-construct exchange for a new FBI headquarters could allow the agency to consolidate into one federally owned building. The retired Emory University Hospital Midtown representative and HEBSPC representatives added that swap-construct exchanges can help the private sector acquire federal property that it otherwise may not be able to acquire. While swap-construct can facilitate GSA’s construction needs, it could come at a greater cost to some stakeholders than the traditional disposal approach. Specifically, because federal properties disposed of through swap-construct are not declared excess or surplus (often because they are still in use by federal tenants when the swap-construct is proposed and during the exchange process), they do not go through the traditional disposal process.
Thus, the swap-construct approach may limit the participation of nonfederal entities that would have been interested in acquiring the properties through public benefit conveyance or other means. For example, in a typical property disposal, eligible public and nonprofit entities, such as institutions of higher education or homeless organizations, can receive the federal property at up to a 100 percent discount of fair market value when it is used for a variety of qualified purposes, such as education and assistance for the homeless. Two institutions of higher education that responded to GSA's solicitations for a swap-construct exchange expressed a preference for GSA to use the traditional disposal process because the universities could then obtain the property by public benefit conveyance. A representative of a national advocacy group for the homeless expressed concern that swap-construct could serve as a way around the traditional disposal process and believes GSA should offer public benefit conveyances prior to proposing swap-construct exchanges. Swap-construct exchanges require developers to make potentially large investments in federal construction projects prior to receiving title to federal property used in the exchanges. GSA's solicitations for market interest in swap-construct projects do not always clearly identify what projects the agency is seeking in exchange for the federal property. For example, the RFIs for the potential Dyer Courthouse and Metro West swap-construct exchanges did not specify what GSA was seeking as part of an exchange. Two respondents to the Metro West RFI told us that additional details regarding what GSA expects in return for the property would be key to future consideration of a swap-construct exchange. In addition, one developer we spoke to told us that the lack of detail regarding what GSA expected in return for the Metro West property influenced his company's decision not to respond to the RFI. One of the four respondents to the Spring Street Courthouse RFI added that although GSA specified a need for a new building in exchange for the Spring Street Courthouse, it was not clear that the new building was a GSA priority. The respondent also noted that future swap-construct exchanges may benefit from additional information on GSA's needs, such as a strategic plan for a region where GSA is proposing a swap-construct exchange. GSA officials also told us that the agency does not always identify its needs prior to releasing its RFIs for swap-construct exchanges. OMB guidance notes that although federal agencies should not specify requirements too narrowly in RFIs, agencies should identify clear agency needs in the documents. Leading practices also note the importance of identifying an agency's needs and being transparent about these needs. GSA officials acknowledged that while details were not always specified in RFIs for swap-construct exchanges, details would be specified in subsequent solicitations if GSA determines there is enough market interest based on the RFI responses. GSA officials also stated that fewer details were included in the RFIs because the agency wanted to gauge market interest in the swap-construct transaction structure and did not want to limit the creativity of potential RFI respondents. However, by not providing some detail on the agency's needs in its RFIs, GSA risks limiting respondents' ability to provide meaningful input and could miss potential swap-construct opportunities for the properties. 
GSA has generated interest in swap-construct for some projects, as previously discussed, but several factors may limit the applicability of the agency’s approach. Three of the four RFI respondents and one of the two nongovernmental organizations we spoke to noted that the federal property to be exchanged should have high redevelopment potential to offset the developers’ risk of delayed access to the property until providing GSA with its needed asset or construction services. Specifically, a developer may have to expend significant time and money addressing GSA’s needs for a new building or renovating an existing federal building before receiving, redeveloping, and generating revenue from the swapped federal property. GSA officials told us that it might be possible to negotiate some early rights of access to the federal property before the transfer of the property title to conduct activities such as site preparation and demolition work, but at a developer’s risk. According to representatives of the two nongovernmental organizations we spoke to, GSA should also consider local market conditions in deciding if a property is suitable for swap-construct because developers can often purchase or lease similar properties they need from the private sector and quickly access them for redevelopment. For example, a representative of a firm that advises developers noted that the FBI headquarters building is located in an area of Washington, D.C., with high potential for profitable redevelopment and that there are few other similar properties available to developers. In contrast, a Metro West RFI respondent and a Spring Street Courthouse RFI respondent expressed concern that the federal properties included in those exchanges, in Baltimore and Los Angeles, respectively, may not have sufficient redevelopment potential to offset the risks associated with delayed transfer of title under a swap-construct approach. Potential complications with exchanging property in one region for a constructed asset or construction services in another region may also limit the applicability of swap-construct exchanges. Specifically, GSA officials told us that the pool of potential bidders is smaller and community and political opposition can be higher when removing federal assets from one region for a constructed asset or construction services in another. In addition, the officials said project management can be more difficult for GSA when an exchange is executed across different regions. Consequently, the officials told us they try to locate the desired constructed asset or construction services in the same region as the federal property to be exchanged. GSA officials added that many underutilized federal properties are not suitable for swap-construct because they are in locations where GSA has limited needs for new assets or construction services or because the federal properties are not sufficiently desirable or would require too much investment from a developer. A representative of the firm that advises developers added that while the swap-construct approach gives GSA greater control over the proceeds from a property disposal, the federal government may get a better deal for a new asset or construction services and potentially larger proceeds for the disposed federal property if it were to use traditional acquisition and disposal methods. 
In particular, the representative noted that developers may be willing to pay more for federal property through a sale because the developers could gain immediate access to the property for redevelopment purposes. Similarly, the representative told us that GSA may get a better deal on a new asset or construction services if it were to pursue them through a traditional acquisition process because it would invite more developer competition into the process, unlike in a swap-construct approach where a developer would also need to be willing to receive federal property as consideration. While GSA has guidance for determining if it should continue to pursue an exchange that has already been proposed, it does not have criteria to help determine when the agency should solicit interest in a swap-construct exchange. According to GSA officials, the agency considers possible swap-construct exchanges on a case-by-case basis during its annual review of its entire federal real property portfolio, but it lacks guidance on how that case-by-case analysis should be conducted. GSA officials added that because the agency only recently started using the swap-construct approach, it does not have screening criteria for determining when a swap-construct exchange should be proposed. Moreover, we found that some proposed swap-construct exchanges have been driven by GSA's need to dispose of specific federal properties and that, as previously discussed, GSA has not given the same amount of consideration to construction projects to include in its proposed exchanges. For example, in the Metro West and Dyer Courthouse swap-construct proposals, GSA identified federal properties to be exchanged but provided little or no information on construction projects it needed in a potential exchange. GSA has proposed swap-construct exchanges since 2012 to a mixed reception, as previously noted, with little or no interest in exchanges involving the Dyer Courthouse in Miami, the Spring Street Courthouse in Los Angeles, and the Metro West building in Baltimore, and high-level interest only in an exchange for the FBI headquarters consolidation project. Both OMB and GAO guidance emphasize the importance of using criteria to make capital-planning decisions. By not using screening criteria to identify potentially successful swap-construct exchanges, the agency may miss the best opportunities to leverage swap-construct exchanges or select properties for exchange that are better suited to the traditional property disposal process and construction projects that are better suited to traditional funding processes. GSA may also waste time and money pursuing a potential swap-construct exchange, resources that could be better spent on these traditional approaches. GSA faces some key challenges in managing its federal real property portfolio, especially in disposing of unneeded federal property and financing the replacement or modernization of aging and underutilized properties. In some cases, the swap-construct approach discussed in this report might be a useful means through which GSA can more readily achieve these property-related goals. However, GSA's recent solicitations for market interest in swap-construct have not always been well received by potential bidders. Specifically, of the five swap-construct exchanges for which GSA solicited market interest since 2012, only two are being actively pursued; the others generated little market interest. 
One concern for potential bidders was the lack of detail regarding the construction services that GSA hoped to gain in return for an asset it would cede to the bidder. We found that in developing initial proposals for a swap-construct exchange, GSA often focused on identifying assets to dispose of and gave less attention to what it needed in exchange for those assets. Construction services or a newly constructed asset are fully half of any swap-construct exchange, yet GSA has not always clearly identified its needs when requesting feedback from potential bidders. The agency's intent may be to provide greater details at later stages of the proposal process, but this approach may limit the ability of respondents to provide meaningful input and lead to missed swap-construct opportunities for GSA. At present, GSA does not have criteria for identifying viable exchanges in the sense that both sides of the potential transaction are fully defined and communicated to potential interested parties. OMB and GAO have previously identified the importance of criteria in making agency decisions. By not using screening criteria to make its choices, GSA may be pursuing swap-construct exchanges with less potential for success and losing time that could be spent on traditional disposal and appropriation processes. Similarly, GSA may also miss opportunities to leverage swap-construct more widely moving forward, which could be crucial given ongoing budgetary challenges. In order to identify potentially successful swap-construct exchanges during GSA's review of its federal real property portfolio and reduce uncertainty for those responding to GSA's solicitations for possible swap-construct exchanges, we recommend that the Administrator of GSA take the following two actions: 1. include, to the extent possible, details on what GSA is seeking in exchange for federal property in its solicitations, including requests for information, for potential swap-construct exchanges and 2. develop criteria for determining when to solicit market interest in a swap-construct exchange. We provided a draft of this report for review and comment to GSA. GSA concurred with the report's recommendations and provided additional information on the proposed swap-construct exchange with the City of Lakewood, Colorado, which we incorporated. GSA's letter is reprinted in appendix II. As arranged with your offices, unless you publicly disclose the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the Administrator of GSA. Additional copies will be sent to interested congressional committees. We will also make copies available to others upon request, and the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Our objectives were to determine (1) GSA's experiences with completed swap-construct exchanges; (2) the status of GSA's proposed swap-construct exchanges; and (3) the potential benefits of swap-construct exchanges and the factors that can influence their future use. We described GSA's swap-construct process using information gathered from GSA guidance and interviews with GSA officials. 
In addition, we reviewed related laws that facilitate GSA's swap-construct exchanges. To determine GSA's experience with swap-construct exchanges, we identified and reviewed the two swap-construct exchanges (Atlanta, GA, and San Antonio, TX) completed by GSA since 2000 through GSA exchange agreement documentation, appraisal reports, and property descriptions, and through interviews with GSA officials. We conducted site visits to Atlanta and San Antonio, examined the properties involved in the exchanges, and interviewed GSA officials and nonfederal participants—H. E. Butt Store Property Company No. One (HEBSPC) and Emory University Hospital Midtown—about their experience with the transactions. To determine the status of GSA's proposed swap-construct exchanges, we identified and reviewed the six proposed swap-construct exchanges—two in Washington, D.C., and one each in Miami, FL; Los Angeles, CA; Baltimore, MD; and Lakewood, CO—using GSA documentation, including GSA solicitations for possible exchanges, known as requests for information (RFI), and through interviews with GSA officials. We conducted site visits to three of the properties involved in the proposed exchanges (the Cotton Annex and Regional Office Building in Washington, D.C., and the Metro West building in Baltimore, MD), examined the properties, and spoke with GSA officials about the RFIs that included these properties. We selected these properties based on their proximity to one another (within a 50-mile radius) and to include site visits both to a location where the property or properties in the RFI generated 10 or more responses and to a location where the property or properties in the RFI generated fewer than 10 responses. To further identify a property or properties to visit, we then limited our selection to property or properties that were furthest along in GSA's proposed swap-construct process. In addition, to better understand the status of these proposed exchanges, we analyzed the responses GSA received to its solicitations for these swap-construct exchanges and discussed the proposed exchanges with four of the seven respondents to the Metro West and Spring Street Courthouse RFIs. We did not interview RFI respondents to the proposed swap-construct exchanges that involved the FBI headquarters and Federal Triangle South properties since GSA is actively in discussions or negotiations with these respondents. We selected our sample to include a variety of respondents, including a development company, a firm that advises developers, a university, and a company that provides property management services to the government. Because the RFI respondents were selected as a nonprobability sample, the information gained in these interviews cannot be generalized to make conclusions about all of GSA's swap-construct exchanges. However, they illustrate the views of a diverse set of respondents with experience related to these exchanges. To understand the possible exchange in Lakewood, CO, we analyzed GSA documents, including agency property descriptions and tentative plans for the swap-construct exchange, and interviewed GSA officials and a local government official involved with the negotiations with GSA. 
To identify the potential benefits of swap-construct exchanges and factors that can influence GSA's future use of these exchanges, we evaluated GSA's approach to identifying potentially successful swap-construct exchanges to propose against the OMB Capital Programming Guide and the GAO Executive Guide on Leading Practices in Capital Decision-Making, and interviewed GSA officials; nonfederal participants in completed swap-construct exchanges (HEBSPC and Emory University Hospital Midtown); stakeholders in federal property acquisition and disposal processes (the National Capital Planning Commission and the National Law Center for Homelessness and Poverty, respectively); and nongovernmental organizations familiar with GSA's swap-construct exchanges (the National Council for Public-Private Partnerships and the Urban Land Institute). In addition, we analyzed written responses GSA received to its solicitations for proposed swap-construct exchanges and information from interviews we conducted with the four respondents, described above, to identify any factors that may affect GSA's future use of swap-construct exchanges. We conducted this performance audit from September 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Keith Cunningham, Assistant Director; Amy Abramowitz; Dawn Bidne; Timothy Guinane; James Leonard; Sara Ann Moessbauer; Josh Ormond; and Crystal Wesco made key contributions to this report. Capital Financing: Alternative Approaches to Budgeting for Federal Real Property. GAO-14-239. Washington, D.C.: March 12, 2014. Federal Real Property: Excess and Underutilized Property Is an Ongoing Challenge. GAO-13-573T. Washington, D.C.: April 25, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Federal Courthouses: Recommended Construction Projects Should Be Evaluated under New Capital-Planning Process. GAO-13-263. Washington, D.C.: April 11, 2013. Federal Buildings Fund: Improved Transparency and Long-term Plan Needed to Clarify Capital Funding Priorities. GAO-12-646. Washington, D.C.: July 12, 2012. Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. Washington, D.C.: June 20, 2012. Federal Real Property: The Government Faces Challenges to Disposing of Unneeded Buildings. GAO-11-370T. Washington, D.C.: February 10, 2011. Federal Courthouse Construction: Estimated Costs to House the L.A. District Court Have Tripled and There Is No Consensus on How to Proceed. GAO-08-889. Washington, D.C.: September 12, 2008. Federal Real Property: Most Public Benefit Conveyances Used as Intended, but Opportunities Exist to Enhance Federal Oversight. GAO-06-511. Washington, D.C.: June 21, 2006. Executive Guide: Leading Practices in Capital Decision-Making. GAO/AIMD-99-32. Washington, D.C.: December 1, 1998.
To help address challenges in federal real-property management, including the growing need to replace and modernize federal buildings, GSA has proposed expanding its use of swap-construct exchanges. GSA has proposed this approach for some potentially large projects, including replacing the FBI's headquarters. GAO was asked to review issues related to these exchanges. This report addresses: (1) GSA's experience with completed swap-construct exchanges; (2) the status of GSA's proposed swap-construct exchanges; and (3) the potential benefits of these exchanges and factors that can influence their future use. GAO reviewed documents, including GSA's solicitations for swap-construct exchanges, appraisals of completed exchanges, and OMB and GAO guidance. GAO conducted site visits to the completed swap-construct sites and three proposed swap-construct sites, selected based on location, number of responses to GSA's solicitation, and stage in the swap-construct process, and interviewed GSA officials and nonfederal participants in the exchanges. Since 2000, the General Services Administration (GSA) has completed two "swap-construct" exchanges—transactions in which the agency exchanges title to federal property for constructed assets or construction services, such as renovation work—in response to private sector interest in specific federal properties. In both completed exchanges, GSA used the value of federal properties it determined were underutilized to acquire new parking garages. The recipients of the federal properties told us that the exchanges took longer than expected (about 3 years for one of the exchanges and 5 years for the other). In response, GSA noted its lack of experience with swap-construct exchanges at the time. Since 2012, GSA has proposed six swap-construct exchanges. After reviewing responses to its solicitations, GSA is actively pursuing three, including a potential exchange of the existing Federal Bureau of Investigation's (FBI) headquarters for construction of a new FBI headquarters building. Respondents to the three solicitations that GSA is not actively pursuing noted concerns, including the amount of investment needed in the federal properties and the lack of detail regarding GSA's construction needs in an exchange. Swap-construct can result in an exchange of equally valued assets or services or can result in the government or a property recipient paying for a difference in value. The swap-construct approach can help GSA address the challenges of disposing of unneeded property and modernizing or replacing federal buildings, but various factors could affect future use of the approach. For example, swap-construct can require developers to spend large sums on GSA's construction needs before receiving title to the federal property used in the exchanges. GSA's solicitations have not always specified these construction needs. Consequently, developers may be unable to provide meaningful input, and GSA could miss swap-construct opportunities. Further, the viability of swap-construct exchanges may be affected by specific market factors, such as the availability of alternative properties. However, GSA lacks criteria to help determine if the agency should solicit interest in a swap-construct exchange. As a result, GSA could miss opportunities to use swap-construct or select properties and construction projects better suited to traditional disposal and funding processes. 
Office of Management and Budget (OMB) and GAO guidance emphasize the importance of criteria in making capital-planning decisions and providing clarity on construction needs. GAO recommends that GSA (1) include, to the extent possible, details on what GSA is seeking in exchange for federal property in these solicitations and (2) develop criteria for determining when to solicit market interest in swap-construct exchanges. GSA agreed with GAO's recommendations.
Senior executives in the successful organizations we studied were personally committed to improving the management of technology. The PRA and the Clinger-Cohen Act make federal agency heads directly responsible for establishing goals and measuring progress in improving the use of information technology to enhance the productivity and efficiency of their agency's operations. To help them with their major information management responsibilities, the reform legislation directs the heads of the major agencies to appoint CIOs. The legislation assigns a wide range of duties and responsibilities to CIOs, foremost of which are working with the agency head and senior program managers to implement effective information management to achieve the agency's strategic goals; helping to establish a sound investment review process to select, control, and evaluate spending for information technology; promoting improvements to the work processes used by the agency to carry out its programs; increasing the value of the agency's information resources by implementing an integrated agencywide technology architecture; and strengthening the agency's knowledge, skills, and capabilities to effectively manage information resources, deal with emerging technology issues, and develop needed systems. While there are various approaches to how best to use the CIO position to accomplish these duties, the legislative requirements, OMB guidance, and our best practices experience with leading organizations define common tenets for the CIO position. An agency should place its CIO at a senior management level, working as a partner with other senior officials in decision-making on information management issues. Specifically, agencies should appoint a CIO with expertise and practical experience in technology management; position the CIO as a senior partner reporting directly to the agency head; ensure that the CIO's primary responsibilities are for information management; have the CIO serve as a bridge between top management, line management, and information management support professionals, working with them to ensure the effective acquisition and management of the information resources needed to support agency programs and missions; task the CIO with developing strategies and specific plans for the hiring, training, and professional development of staff in order to build the agency's capability to develop and manage its information resources; and support the CIO position with an effective CIO organization and management framework for implementing agencywide information technology initiatives. Having effective CIOs will make a real difference in building the institutional capacity and structure needed to implement the management practices embodied in the broad set of reforms set out in the PRA and the Clinger-Cohen Act. The CIO must combine a number of strengths, including leadership ability, technical skills, an understanding of business operations, and good communications and negotiation skills. For this reason, finding an effective CIO can be a difficult task. Agencies faced a similar difficulty in trying to find qualified chief financial officers to implement the CFO Act's financial management reforms. It took time and concerted effort by the Administration, the CFO Council, and the Congress to get strong, capable leaders into the CFO positions. Shortly after the Clinger-Cohen Act went into effect, OMB evaluated the status of CIO appointments at the 27 agencies. 
OMB noted that at several agencies, the CIO’s duties, qualifications, and placement met the requirements of the Clinger-Cohen Act. According to OMB, these CIOs had experience, both operationally and technically, in leveraging the use of information technology, capital planning, setting and monitoring performance measures, and establishing service levels with technology users. These CIOs also had exposure to a broad range of technologies, as well as knowledge of government budgeting and procurement processes and information management laws, regulations, and policies. However, OMB had concerns about a number of other agencies that had acting CIOs, CIOs whose qualifications did not appear to meet the requirements of the Clinger-Cohen Act, and/or CIOs who did not report directly to the head of the agency. OMB also raised concerns about agencies where the CIOs had other major management responsibilities or where it was unclear whether the CIOs’ primary duty was the information resource management function. OMB stated that it would reevaluate the situations at these agencies at a later date, after agencies had time to put permanent CIOs in place or take corrective actions to have their CIO appointment and organizational alignment meet the necessary requirements. OMB called for updated information on the status of governmentwide CIO appointments in its April 1997 data request on individual agency efforts to implement provisions of the Clinger-Cohen Act. OMB has not yet issued a status report based on this information and subsequent follow-up. In a recent discussion, OMB officials stated that they will provide feedback on individual CIO appointments as part of the fiscal year 1999 budget review process. On the basis of preliminary observations, however, OMB officials stated that they still have some of the same concerns that they had a year ago about CIO positions that have not been filled, have not been properly positioned, or have multiple responsibilities. It is very important for OMB to follow through on its efforts to assess CIO appointments and resolve outstanding issues. Information technology reforms simply will not work without effective CIO leadership in place. We will continue to monitor this situation to provide our suggestions on actions that need to be taken. One area that we will focus on during the coming year is CIOs who have major responsibilities in addition to information management. The Clinger-Cohen Act clearly calls for CIOs to have information resources management as their primary duty. We have stressed the importance of this principle in testimonies and, most recently, in our February 1997 high-risk report, in which we emphasized that the CIO’s duties should focus sharply on strategic information management issues and not include other major responsibilities. In addition to the escalating demands of rapidly evolving technologies, CIOs are faced with many serious information management issues, any one of which would be a formidable task to address. Taken together, these issues create a daunting body of work for any full-time CIO, much less for one whose time and attention is divided by other responsibilities. As you know, Mr. Chairman, we have reported extensively on a number of these compelling challenges. The following are just a few of these challenges. Ensuring that federal operations will not be disrupted by the Year 2000 problem is one of the foremost and most pressing issues facing agencies—one that we have designated as a governmentwide high-risk area. 
Efforts by this Subcommittee have underscored repeatedly that many agencies are seriously behind schedule in resolving this problem during the next 2 years. Poor security management is putting billions of dollars' worth of assets at risk of loss and vast amounts of sensitive data at risk of unauthorized disclosure, making it another of our governmentwide high-risk areas. Agencies need to make much better progress in designing and implementing security programs and getting skilled staff in place to manage them. This extreme vulnerability has been given added emphasis by the recent Presidential commission report on the growing exposure of U.S. computer networks to exploitation and terrorism. Agencies need to develop, maintain, and facilitate integrated systems architectures to guide their system development efforts. We have seen major modernization efforts handicapped by incomplete architectures, such as at the Federal Aviation Administration (FAA) and the Internal Revenue Service (IRS), as well as the Departments of Veterans Affairs and Education. Agencies need to establish sound information management investment review processes that provide top executives with a systematic, data-driven means to select and control how technology funds are spent. Our reviews of system development and modernization projects, such as the Medicare Transaction System and the four high-risk efforts included in our 1997 High-Risk Series, continue to show the crucial importance of structured investment oversight. In our 1997 High-Risk Series we identified 25 high-risk areas covering a wide array of key federal activities, ranging from Medicare fraud to financial management at the Department of Defense. Resolving the problems in these areas depends heavily on improved information management. Agencies need to integrate strategic information planning with the overall strategic plan that they must prepare under the Results Act. Our review of recent attempts by agencies to develop sound strategic plans showed very weak linkages between the strategic goals and the information technology needed to support those goals. Agencies must build their staffs' skills and capabilities to react to the rapid developments in information technology, develop needed systems, and oversee the work of systems contractors. Weaknesses in agencies' technology skills bases, especially in the area of software acquisition and development, have been a recurring theme in our reviews of federal information technology projects. Despite the urgent need to deal with these major challenges, we still see many instances of CIOs who have responsibilities beyond information management. At present, only 12 agencies have CIOs whose responsibilities are focused solely on information management. The other 15 agencies have CIOs with multiple responsibilities. Together, these 15 agencies account for about $19 billion of the nearly $27 billion in annual federal planned obligations for information technology. While some of these CIOs' additional responsibilities are minor, in many cases they include major duties, such as financial operations, human resources, procurement, and grants management. At the Department of Defense, for example, the CIO is also the Assistant Secretary for Command, Control, Communications and Intelligence. When the CIO is also asked to shoulder a heavy load of programmatic responsibility, it is extremely difficult, if not impossible, for the CIO to devote full attention to information resource management issues. 
Recognizing this problem, the Department's Task Force on Defense Reform is examining the current structure of the CIO position to ensure that the person can devote full attention to reforming information management within the Department. We are particularly troubled by agencies that have vested CIO and Chief Financial Officer responsibilities in one person. The challenges facing agencies in both financial and information management are monumental. Each requires full-time leadership by separate individuals with appropriate talent, skills, and experience in these two areas. In financial management, for example, most agencies are still years away from their goal of having reliable, useful, relevant, and timely financial information—an urgently needed step in making our government fiscally responsible. Because it may be difficult for the CIO of a large department to adequately oversee and manage the specific information needs of the department's major subcomponents, we have also supported the establishment of a CIO structure at the subcomponent and bureau levels. Such a management structure is particularly important in situations where the departmental subcomponents have large information technology budgets or are engaged in major modernization efforts that require the substantial attention and oversight of a CIO. In the Conference Report on the Clinger-Cohen Act, the conferees recognized that agencies may wish to establish CIOs for major subcomponents and bureaus. These subcomponent-level CIOs should have responsibilities, authority, and management structures that mirror those of the departmental CIO. We have reported on instances where the subcomponent CIOs were not organizationally positioned and empowered to discharge key CIO functions. For example, in our reviews of FAA's air traffic control (ATC) modernization, which is expected to cost $34 billion through the year 2003, we found that FAA's CIO was not responsible for developing and enforcing an ATC systems architecture. Instead, FAA had diffused architectural responsibility across a number of organizations. As a result, FAA did not have a complete ATC architecture, which in turn has led to incompatible and unnecessarily expensive and complex ATC systems. Additionally, we found that while FAA's CIO was responsible for ATC software acquisition process maturity and improvement, the CIO lacked the authority to implement and enforce process change. Consequently, we reported that (1) FAA's processes were ad hoc, and sometimes chaotic, and not repeatable across ATC projects and (2) its improvement efforts have not produced more disciplined processes. Among other actions, we recommended that FAA establish an effective management structure for developing, maintaining, and enforcing a complete systems architecture and for improving software acquisition processes and that this management structure be similar to the department-level CIO structure prescribed by the Clinger-Cohen Act. Similarly, in the last few years, we have reported and testified on management and technical weaknesses associated with IRS' Tax Systems Modernization. Among other things, we have noted how important it is for IRS to have a single entity with responsibility for and control over all information systems efforts. Since we first reported on these problems, IRS has taken a number of positive steps to address its problems and consolidate its management control over systems development. 
However, as we noted in recent briefings to the acting IRS Commissioner and congressional committee staffs, neither the CIO nor any other organizational entity has the authority needed to implement IRS' Systems Life Cycle—its processes and products for managing information technology investments—or enforce architectural compliance agencywide. We will soon be making formal recommendations to IRS to address this issue. Finally, as we reported to you earlier this year, the problems encountered by the Health Care Financing Administration (HCFA) in its development of the Medicare Transaction System provide another example of the need for strong management over the development and implementation of information systems. In recent testimony on Medicare automated systems, we reemphasized the importance of establishing CIOs and involving them and other senior executives in information management decisions. While HCFA has recently established a CIO and an Information Technology Investment Review Board, the agency has not yet implemented an investment process—including senior management roles and responsibilities—that governs the selection, control, and evaluation of IT investments. Consequently, we have recommended that HCFA establish an investment management approach that explicitly links the roles and responsibilities of the CIO and Investment Review Board to relevant legislative mandates and requirements. Such actions are essential to ensure that HCFA's—or any agency's—information technology initiatives are cost-effective and serve its mission. Although the Clinger-Cohen Act did not call for the establishment of a federal CIO Council, the Administration is to be commended for taking the initiative to establish one through a July 1996 Executive Order. Our experience with the CFO Act shows the importance of having a central advisory group to help promote the implementation of financial management reform. The CFO Council, which has a statutory underpinning, has played a lead role in creating goals for improving federal financial management practices, providing sound advice to OMB on revisions to executive branch guidance and policy, and building a professional community of governmentwide financial management expertise. The CIO Council, chaired by OMB, can play a similarly useful role. As stated in its charter, the Council's vision is to be a resource for helping promote the efficient and effective use of agency information resources. The Council serves as the principal forum for agency CIOs to develop recommendations for governmentwide information technology management policies, procedures, and standards; share experiences, ideas, and promising practices for improving the management of information resources; promote cooperation in using information resources; address the federal government's hiring and professional development needs for information management; and make recommendations and provide advice to OMB and the agencies on the governmentwide strategic plan required under the PRA. The CIO Council is currently going through a formative period. Since its first meeting in September 1996, the Council has engaged in a wide variety of activities. It meets on a monthly basis, bringing together CIOs, deputy CIOs, and representatives from major departments and agencies, as well as representatives from other organizations, such as the Small Agency Council, the CFO Council, and the Governmentwide Information Technology Services Board. The Council's activities during its first year have largely revolved around four major areas. 
(1) Council organization: The Council decided how to organize and created operational procedures. (2) Committee specialization: The Council created five committees to focus on selected topics of concern emerging from initial sessions—the year 2000, capital planning and investment, interoperability, information resources management training and education, and outreach/strategic planning. Each committee has pursued agendas that include regular working group sessions to exchange ideas and identify promising management practices. (3) Topical forums: The Council has provided a regular forum for presentations and discussions of specific topics of shared concern, such as improving Internet security, enhancing the usefulness of budgetary reporting on federal information technology, understanding the use of governmentwide acquisition contracting mechanisms, developing effective systems architectures, and consolidating data center operations. (4) Governmentwide policy advice and recommendations: The Council has responded to OMB’s solicitation for comments on proposed federal information resources management policy revisions (the Federal Acquisition Regulations, Freedom of Information Act, the Privacy Act, the PRA); updates on critical issues such as Year 2000 progress; and guidance and feedback on agency reporting to meet OMB’s federal oversight requirements (such as preparing budget submissions for information assets under OMB Circular A-11). While these activities have proved useful, the Council does not yet have a strategic plan to help guide its work and serve as a benchmark for measuring progress. As we saw in the case of the CFO Council, achieving accomplishments that have strategic impact requires well-defined goals and measures. The CFO Council adopted a vision, goals, and strategies for financial management that have made it a much more productive body. The CFO Council now regularly reviews activities and, if necessary, revises Council priorities. In addition, the Council annually reports on its progress in implementing financial management reforms. Recognizing the need to focus its efforts, the CIO Council began to reassess and redefine its strategic direction this past summer. This October, the Council members met at a day-long planning conference to discuss and finalize their long-range strategy. They agreed to focus their work on five strategic goals: establish sound capital planning and investment processes at the agencies; ensure the implementation of security practices that gain public confidence and protect government services, privacy, and sensitive and national security information; lead federal efforts to successfully implement the Year 2000 conversions; assist agencies in obtaining access to human resources with the requisite skills and competencies to develop, maintain, manage, and utilize information technology programs, projects, and systems; and define, communicate, and establish the major elements of a federal information architecture, in support of government missions, that is open and interoperable. We believe that the CIO Council has selected the right set of issues to pursue. Several of these coincide with issues we raised in our 1997 High-Risk Series and recommendations we have formulated in conjunction with specific audit work. In addition, they parallel several concerns that the Congress—and this Subcommittee in particular—have raised about federal IT management. 
For example, the regular hearings and concerted effort by the Subcommittee on the Year 2000 computing crisis have highlighted the urgency of the problem and helped to increase the attention and actions of federal executives. GAO has raised concerns about the pace at which federal agencies are moving to effectively address the Year 2000 problem. In consonance with industry best practices, we have also developed and disseminated an assessment guide to help agencies plan, manage, and evaluate their Year 2000 programs, and are using this as a basis for selected agency audits. In addition, we have strongly recommended that agencies adopt a capital planning and investment-oriented approach to information technology decision-making. It has been a key foundation for recommending improvements to the management of IRS' Tax Systems Modernization, HCFA's development of the Medicare Transaction System, and FAA's air traffic control modernization. We worked with OMB in 1995 to issue governmentwide guidance on information technology investment management and we have also issued detailed guidance on how agencies can effectively implement an investment-oriented decision-making approach to their information technology spending decisions as expected under the Clinger-Cohen Act. Information security is also an issue of paramount importance to the information maintained and managed by the federal government. We have highlighted the reality of the government's vulnerability and the urgent need to effectively identify and address systemic information security weaknesses. Moreover, in our September 1996 report on information security, we specifically recommended that the Council adopt information security as one of its top priorities. Also, building federal agencies' capability to manage information resources has been a critical problem for years. Several of our recent reports, for instance, have focused on serious weaknesses in an agency's capability to manage major technology initiatives, such as in the area of software acquisition or development. Similarly, our best practices work has shown the importance of pursuing improvement efforts within the context of an information architecture in order to maximize the potential of information technology to support reengineered business processes. We are encouraged by the Council's intention to establish a strong strategic focus for its work and further refine and prioritize the areas where it can best make a difference. One of the noteworthy aspects of the Council's goal-setting process was the members' desire to move away from earlier draft language that defined the goals in terms of "promoting" and "supporting." Instead, the Council is working to frame specific, outcome-oriented goals. At the conclusion of the conference, the Council set up committees for each of the goals and charged them to decide on specific objectives and performance measures. The Council's aim is to complete this work quickly and publish its strategic plan in January 1998. There is great urgency to deal with these major information technology problems. It is important that the Council demonstrate how CIOs are helping to make a difference by showing progress this coming year. GAO and OMB have given the CIO Council a head start by publishing guidance on information technology capital investments, information security, and best practices in information technology management. By leveraging off this work, the Council should be able to build momentum quickly. 
Also, the CIO Council should follow the example set by the CFO Council, which publishes a joint report with OMB each year on its progress in meeting financial management goals. Having a visible yardstick will provide a strong incentive for both the Council and the agencies to make progress in meeting their information management goals and demonstrate positive impact on the agencies' bottom-line performance. Because it is essentially an advisory body, the CIO Council must rely on OMB's support to see that its recommendations are implemented through federal information management policies, procedures, and standards. In the coming months, the Congress should expect to see the CIO Council becoming very active in providing input to OMB on the goals it has chosen. OMB, in turn, should be expected to take the Council's recommendations and formulate appropriate information management policies and guidance to the agencies. There should be clear evidence that the CIO Council, OMB, and the individual CIOs are driving the implementation of information technology reforms at the agencies. Ultimately, the successful implementation of information management reforms depends heavily upon the skills and performance of the entire CIO organization within departments and agencies—not just the CIO as a single individual. We have emphasized this point in our recent guidance on information technology performance measurement. With this in mind, we are working to produce an evaluation guide that offers a useful framework for assessing the effectiveness of CIO organizations. As with our other guidance, we intend to ground this approach in common management characteristics and techniques prevalent in leading private and public sector organizations. Using this methodology that focuses on both management processes and information technology spending results, we can provide the Congress and the agencies with in-depth evaluations of CIO organizational effectiveness. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you and members of the Subcommittee may have.
GAO discussed the importance of having strong agency chief information officers (CIO) and an effective CIO Council, focusing on its study of how leading private- and public-sector organizations control system development projects and successfully apply technology to improve their performance, which identified a specific set of strategic practices that these organizations use to improve performance through information management. GAO noted that: (1) senior executives in the successful organizations it studied were personally committed to improving the management of technology; (2) applicable laws make federal agency heads directly responsible for establishing goals and measuring progress in improving the use of information technology to enhance the productivity and efficiency of agency operations and assign a wide range of duties and responsibilities to CIOs; (3) agencies should place CIOs at a senior management level, working as partners with other senior officials in decisionmaking on information management issues; (4) having effective CIOs will make a difference in building the institutional capacity and structure needed to implement sound management practices; (5) shortly after the Clinger-Cohen Act went into effect, the Office of Management and Budget (OMB) evaluated the status of CIO appointments at 27 agencies and noted that at several agencies, the CIO's duties, qualifications, and placement met the act's requirements; (6) however, OMB had concerns about a number of other agencies that had acting CIOs, CIOs whose qualifications did not appear to meet the act's requirements, or CIOs who did not report directly to the head of the agency; (7) OMB also raised concerns about agencies where the CIOs had other major management responsibilities or where it was unclear whether the CIOs' primary duty was information resource management; (8) one area that GAO will focus on is CIOs who have major responsibilities in addition to information management; (9) only 12 agencies have CIOs whose responsibilities are focused solely on information management; (10) GAO is particularly troubled by agencies that have vested CIO and Chief Financial Officer responsibilities in one person; (11) because it may be difficult for the CIO of a large department to adequately oversee and manage the specific information needs of the department's major subcomponents, GAO has also supported the establishment of a CIO structure at the subcomponent and bureau levels; (12) GAO has reported on instances where the subcomponent CIOs were not organizationally positioned and empowered to discharge key CIO functions; (13) while the CFO Council has played a lead role in creating goals for improving federal financial management practices, the CIO Council does not yet have a strategic plan to help guide its work and serve as a benchmark for measuring progress; and (14) ultimately, the successful implementation of information management reforms depends heavily upon the skills and performance of the entire CIO organization within departments and agencies, not just the CIO as an individual.
Yemen has been an important U.S. partner that faces significant humanitarian, economic, and security challenges. As figure 1 shows, Yemen shares a land border with Saudi Arabia, a key U.S. ally, and a maritime border with a critical shipping lane connecting the Red Sea and the Arabian Sea. The most impoverished country in the Middle East and North Africa region, Yemen is experiencing a rapidly growing population, which is estimated at about 25 million; increasing scarcity of natural resources, including water; the steady decline of oil, its primary export; extremely high unemployment; and dwindling revenues that decrease the government's ability to fund basic operations. Moreover, Yemen is a safe haven for the terrorist group AQAP, which has been reported as al Qaeda's most active affiliate, and which the Commander of the U.S. military's Central Command identified as one of the most dangerous al Qaeda affiliates. Adding to these challenges, Yemen faces political instability following the unrest of the 2011 Arab Spring, an ongoing Houthi insurgency, and a southern separatist movement. In early 2011, mass protests began against the 33-year regime of President Ali Abdullah Saleh, resulting in his removal from office later that year and the transfer of power to his Vice President, Abdo Rabu Mansour Hadi, who was subsequently elected president in February 2012. Since coming to power, President Hadi has faced numerous challenges. In 2014, the Houthis, a northern Shiite tribal group, gained control of significant territory in Yemen, which led to their takeover of the capital, the seizure of key military bases, and control over key port facilities. According to State, as of March 2015, President Hadi had relocated to Saudi Arabia, where he continues his duties as President. The unstable security situation led State to temporarily suspend operations and remove all staff from the U.S. embassy in Sanaa. Other countries have also stopped operations at their embassies. According to State, the overarching objective of U.S. policy in Yemen is a successful democratic transition that promotes political, economic, and security sector reforms that will enable the Yemeni government to respond to the needs and aspirations of its people. National strategies related to broad U.S. counterterrorism and security assistance goals identify building partnership capacity as a key component of the U.S. counterterrorism strategy. Such strategies also stress a whole-of-government approach and synchronization of U.S. efforts across government agencies. To assist in countering the AQAP threat prior to the current instability, State and DOD have collectively allocated over $500 million in security assistance to Yemen since fiscal year 2006 through DOD's Section 1206 and Section 1207(n) programs and State's Foreign Military Financing (FMF) program. As seen in table 1, these programs have various goals, including building the capacity of military forces to conduct counterterrorism operations. In June 2014, State and DOD began an interagency effort to evaluate security assistance goals and priorities for Yemen, including how to employ the funding sources available in concert to build Yemeni security force capacity. According to DOD and State officials, this review has been paused as of March 2015 pending the resolution of Yemen's unstable situation. Section 1206 and 1207(n) projects are implemented via a multi-step process involving both State and DOD officials. 
A key step in the process is developing a specific proposal for a project, which is entered into a template that seeks information on, among other things, the nature of the threat, the desired capability, and benefits to the United States so that State and DOD officials can evaluate the proposals. DOD and State have disbursed or committed almost $290 million of the nearly $500 million allocated for Section 1206, Section 1207(n), and FMF assistance to Yemen since fiscal year 2009. DOD has disbursed about $256 million of the approximately $401 million Section 1206 and 1207(n) funds allocated to Yemen. State has committed $34 million of the $95 million FMF funds allocated to Yemen. Given the uncertainty in Yemen, security assistance programs are temporarily suspended. Since 2009, DOD disbursed almost $256 million, or almost two-thirds of the approximately $401 million allocated to Section 1206 and 1207(n) projects for Yemen. This Section 1206 and 1207(n) funding for fiscal years 2009 through 2014 comprised 15 counterterrorism capacity-building projects in Yemen, including efforts to enhance Yemeni security forces’ communications, border security, and special operations capabilities to combat terrorists and other violent extremists. See appendix II for a description of each project. As shown in table 2, DOD originally allocated $452 million but then reallocated almost $51 million for projects outside of Yemen. While DOD has obligated all of the remaining $401 million, as of the end of fiscal year 2014, DOD had $145 million in unliquidated obligations for Section 1206 and 1207(n) projects in Yemen. Funds for equipment and training under Section 1206 and 1207(n) programs must be obligated by the end of the fiscal year in which they are appropriated. DOD officials stated that, given the uncertainty in Yemen, they are reviewing security assistance planned for Yemen, including the $145 million in unliquidated obligations. Specifically, they are determining (1) how to proceed with respect to those funds that have been obligated for activities that have not yet occurred and may not occur; (2) whether to reduce future requests for Section 1206 and 1207(n) funds to Yemen, including the allocation of fiscal year 2015 funds; and (3) whether to redirect equipment already purchased but not yet transferred to Yemen. State has allocated $95 million of FMF assistance to Yemen since fiscal year 2009 and committed $34 million, more than one-third of the total FMF funds allocated to Yemen since fiscal year 2009, as shown in table 3. State allocated about $20 million per year to Yemen from fiscal years 2011 through 2014. However, State did not commit FMF funds in Yemen in fiscal years 2012, 2013, or 2014. As a result, more than $60 million in FMF funds were uncommitted. In addition, State’s fiscal year 2015 and 2016 budget requests included an additional $25 million for Yemen. According to State officials, FMF assistance to Yemen has been temporarily suspended, and using uncommitted FMF funds for other countries remains an option. State has the authority to deobligate FMF funds and reobligate them to other purposes. According to DOD data, nearly 75 percent of FMF funding allocations from fiscal years 2009 through 2014 were planned to be used to maintain previously furnished equipment, including some provided through the Section 1206 and 1207(n) programs.
Because of the pause in FMF commitments from fiscal years 2011 through 2014, Yemen accumulated over $60 million in uncommitted FMF funding. Even with the accumulation of these funds, it is unclear whether maintenance for all Section 1206 and 1207(n) equipment could have been fully funded. After correcting errors in DOD data, we determined that since 2009, at least 60 percent of overall assistance was on time; however, delays affected 10 of 11 Section 1206 and 1207(n) projects from fiscal years 2009 through 2013. We found weaknesses in DOD’s data systems regarding the congressional notification clearance date, which we corrected for, and the date when assistance was provided to Yemeni security forces, both of which are necessary to determine timeliness. DOD officials reported that many factors may hinder or help the speed of security assistance deliveries to Yemen. DOD has taken steps to address those factors that can cause delays, including creating a consolidation point for equipment. DOD officials report that these steps have improved accountability throughout the delivery process, addressed some logistical challenges, helped address challenges related to political protests and insecurity in Yemen, and improved efficiency. Congress requires DOD to notify it of planned Section 1206 and 1207(n) projects at least 15 days before beginning implementation of the projects. Once the congressional notification period ends, equipment is procured and then shipped to Yemen via a process that can include multiple waypoints. DOD notified Congress of plans to implement 15 Section 1206 and 1207(n) capacity-building projects for Yemen between fiscal years 2009 and 2014. These congressional notifications stated that DOD would complete training and transfer associated equipment to Yemeni security forces within 18 months from the date the projects clear the congressional notification process. DOD can initiate an activity 15 days after it provides the required notification to Congress. Figure 2 shows the shipment process for Section 1206 and 1207(n) security assistance to Yemen. Equipment destined for Yemen may stop en route. DOD began using one potential stop, the Joint Consolidation Point (JCP) in Pennsylvania, for security assistance shipments in 2010. Equipment arrives at the JCP from various vendors and DOD implementing agencies at different times and is then shipped to Yemen. Following delivery in country, equipment is transferred to Yemeni security force recipients. According to DOD plans, this final phase must be completed within 18 months to meet the deadline stated in the congressional notifications. We found weaknesses in DOD’s data systems used to collect information on key dates throughout the shipment process for Sections 1206 and 1207(n) assistance. First, the data systems included incorrect information regarding the congressional notification clearance date. Second, DOD’s data systems did not contain complete information regarding when training is completed and equipment is transferred to the Yemeni security forces, though the quality of these data has improved in recent years following DOD implementation of a prior GAO recommendation. These dates are needed to determine whether equipment and training are transferred to Yemen on time. First, the date the congressional notification period ends is the date when DOD begins implementing projects to meet the 18-month transfer deadline, making it an essential starting point for assessing timeliness.
For 11 of the 15 Section 1206 and 1207(n) projects notified to Congress from fiscal years 2009 through 2014, DOD data contained inaccurate dates for when the congressional notification period ended, which we had to correct in order to assess timeliness. Specifically, we found that 4 of the 15 congressional notification clearance dates in DOD’s data system were the same date DOD had notified Congress, another 6 dates were 1 to 3 days after Congress was notified, and 1 date was 9 days prior to the date Congress was notified. However, DOD officials reported they were unaware of any instances in which projects had cleared the congressional notification process in fewer than 15 days, despite these 11 instances in DOD’s data indicating the contrary. Further, DOD was only able to produce the documentation needed to correct one of these dates. We confirmed that DOD did not implement any cases prior to the legally mandated 15-day notification period. However, the inaccurate data limit the ability of DOD and others to effectively assess the extent to which Section 1206 and 1207(n) assistance is transferred to Yemen on time or report to Congress on the status of assistance projects. Second, in order to determine whether the process for providing assistance met the 18-month deadline, it is essential to know when the equipment and training finally reached the Yemeni security forces (i.e., the transfer date). Although DOD collects data on the dates when Section 1206 and 1207(n) equipment first ships, its data systems do not contain complete information on when the training and equipment are finally transferred to the Yemeni security forces. Each of the 11 projects for which we assessed timeliness contained some line items for which the final transfer date was not documented in the data. DOD data indicated that these items were usually shipped in less than 18 months, although the items may have stopped at various waypoints and their date of final transfer cannot be determined. As a result, timeliness for those items could not be determined. Our previous work has identified similar issues related to the quality of DOD’s data on the delivery of security assistance programs, including Section 1206 and 1207(n). Specifically, we reported in 2012 that DOD data on the status of fiscal years 2007 through 2011 security assistance deliveries had information gaps, such as missing information on the dates items departed U.S. shipping locations and the dates of receipt at the final destination. As a result, we recommended that DOD establish procedures to help ensure that DOD implementing agencies populate these data systems with complete data. In response to our recommendation, in May 2014, DOD updated its security assistance management manual to require security cooperation officers to report the delivery of equipment in DOD data systems within 30 days of delivery, programmed its systems to update shipment tracking information more frequently, and developed plans to ensure that accurate and timely delivery status information will be maintained in a new information system that DOD is developing. DOD officials also reported that the department has taken steps to address the gaps in transfer data for Sections 1206 and 1207(n) for Yemen, and we found evidence of recent improvements when we analyzed the data. For the fiscal year 2009 and 2010 projects, the final transfer dates were available for only 3 percent of items, whereas the final transfer dates were available for 91 percent of the fiscal year 2012 and 2013 projects.
Further, DOD officials reported that since fiscal year 2013, DOD has automated more of the data collection and hired additional personnel to manually enter the remaining data. DOD officials noted that these and other processes put into place in the last two years should help ensure that deliveries are timely, accurate, and properly coordinated with U.S. representatives in Yemen responsible for transferring the equipment to Yemeni security forces. After correcting DOD’s data, we were able to determine that at least 60 percent of fiscal years 2009-2013 Section 1206 and 1207(n) equipment was on time and 4 percent was late. We could not determine timeliness for the remaining 36 percent because of omitted transfer dates. Although DOD data contained inaccurate congressional notification clearance dates and lacked some final transfer dates, we were able to calculate the earliest possible dates DOD could have started the projects and assessed the timeliness of most security assistance for Yemen using a combination of DOD data and congressional notification letters. As shown in figure 3, at least 60 percent of the 4,323 line items of training and equipment destined for Yemen from fiscal years 2009 through 2013 were transferred to Yemeni security forces on time. For equipment that included transfer dates, we calculated that the length of time it took to transfer items to Yemeni security forces ranged from 1 month to more than 4 years, with an average of 17 months. Some of the items transferred on time included large items, such as aircraft, boats, and trucks. For example, five boats were transferred within 17 months, and several hand-launched unmanned aerial vehicles were transferred in less than 15 months. The deadlines for the four fiscal year 2014 projects have yet to pass, but for the remaining 11 projects for fiscal years 2009 through 2013, DOD notified Congress that all equipment and training should have been completely transferred to Yemeni security forces by the end of 2014. However, 10 of the 11 projects did not meet established deadlines. These 11 projects consisted of a total of 4,323 line items for equipment and training sessions, and each of the 10 projects notified to Congress from fiscal years 2009 through 2012 included some items that did not meet the DOD-established deadlines, as shown in figure 4. Examples of items that were late included spare parts for nearly all the projects and the following: coastal patrol boats and training for the 2009 Coast Guard Patrol Maritime Security Counterterrorism Initiative project; one CN-235 aircraft for the 2010 Yemen Fixed-Wing Capability project; Humvee trucks, small arms, and radios for the 2010 Special Operations Force Counterterrorism Enhancement project; training and eight-passenger coastal patrol boats for the 2012 Special Operations Forces Counterterrorism Enhancement project; and small arms, night vision goggles, and spare parts for small, hand-launched unmanned aerial vehicles for the 2012 Section 1207(n) project. DOD officials indicated that several factors affected their ability to ensure that Section 1206 and 1207(n) equipment was transferred to Yemeni security forces within the planned timelines. These include security-related factors, partner country factors, and logistical factors. Security-related factors. Because of the ongoing security threats in Yemen, ports of entry have had periods of limited accessibility and the U.S. embassy has had periods of reduced staffing.
For example, prior to the suspension of operations at the embassy in February 2015, officials from the U.S. Office of Military Cooperation in Sanaa responsible for transferring equipment to Yemeni security forces indicated that it could take hours to conduct the required inventories of equipment prior to its transfer. However, at various points in the past, spending hours at an airport or seaport was considered too dangerous and arranging for protection and transportation was time-consuming. As a result, some inventories—and therefore transfers—of equipment were delayed. In addition, DOD has intentionally delayed deliveries in response to security threats. When protests threatened security in 2011, DOD delayed delivery of one CN-235 aircraft, keeping it in Spain rather than delivering it to an uncertain security environment. In November 2014, when the Houthis seized control of the port in Al Hudaydah, embassy officials worked with DOD to delay a shipment of equipment for the 2013 Section 1206 Integrated Border and Maritime Security project. Partner country factors. Yemeni officials must be available to receive the security assistance being delivered, yet political transitions and cultural factors have affected their availability. For example, DOD notified Congress of a Section 1207(n) project in June 2012. However, as Yemeni security forces reorganized following the transition from President Saleh to President Hadi, the recipient unit within the Yemeni Ministry of Interior was not expanded as planned. As a result, in July 2014, 2 years after the original project was planned, DOD and State identified a new recipient and re-notified Congress of their plan to redirect almost $58 million of the originally notified $75 million in training and equipment. While many items were delivered within 18 months, other items were not shipped until after the re-notification, including sensitive equipment such as night vision goggles, which embassy officials preferred to keep secure in U.S. warehouses until the new Yemeni recipient was identified. As of December 2014, U.S. embassy officials stated that all associated equipment had been delivered to Yemen and most was being held in Yemeni security force warehouses so human rights training could be provided before transfer. DOD officials also reported that religious holidays and language barriers have sometimes resulted in delayed deliveries and transfers of equipment. Officials said they have had to delay deliveries of security assistance to avoid transferring equipment during the month of Ramadan, when Yemeni security forces responsible for receiving transferred equipment are likely to be observing religious practices. Officials also reported having delayed training because of the limited number of English speakers among the Yemeni security force members with whom DOD officials work to complete transfers, as well as among forces scheduled to receive the training. Logistical factors. DOD officials reported that several logistical factors have led to delays in transfers of equipment to Yemen, such as paperwork errors, customs challenges, challenges presented by the type of delivery vehicle used (sea versus air), as well as lengthy equipment procurement timeframes and competing worldwide procurement and shipment prioritizations. For example, a CN-235 aircraft was transferred more than 3 years after congressional notification, in part because of a lengthy procurement process, in addition to country and security factors described earlier.
In April 2010, DOD notified Congress of the plan to transfer the aircraft and associated training and spare parts within 18 months. However, the aircraft was ultimately transferred to Yemeni security forces in September 2013—more than 41 months after congressional notification. Officials reported that procurement was delayed and, although the contract was complete by September 2010, the aircraft cost more than originally anticipated. As a result of this discovery late in the fiscal year, DOD determined that the funding allocated to the project would be used to cover the costs of the aircraft and a new project would be developed with additional funding to provide the training and 2 years of spare parts that had been included in the original project. The transfer of the aircraft and training was further delayed because of political unrest in 2011. To address some of these factors and other challenges DOD previously identified with shipping equipment for building partner capacity programs, such as Section 1206 and 1207(n), DOD began using the JCP in Pennsylvania in 2010. DOD officials report that the JCP is used to consolidate equipment, generate additional data on equipment that transits there, and ensure that the equipment is accompanied by documents designed to improve customs processing and transfer processes in Yemen. DOD officials report that the JCP helps ensure that equipment that passes through the point is documented in DOD data and includes sufficient documentation to aid its clearance through customs and efficient transfer to partner country security forces, including those in Yemen. DOD assessments and DOD officials from the Office of Military Cooperation in Sanaa reported that the JCP has improved the timeliness of security assistance for Yemen. Our analysis of DOD data for security assistance for Yemen from fiscal years 2009 through 2013 indicated that equipment that passed through the JCP stayed there for an average of about 10 months. JCP officials noted a number of criteria used to determine the timing of shipments from the JCP to Yemen. Specifically, officials reported that they work to balance delivering equipment in a timely manner to address critical needs with making efficient use of each shipment and trying to fill an entire government-contracted aircraft. They also aim, in consultation with the U.S. Office of Military Cooperation in Sanaa, to deliver complete capabilities in one delivery and to time these deliveries so training can begin on those capabilities rather than delaying deliveries and training until shipment processing of equipment for other capabilities is complete. Further, DOD officials reported that DOD has conducted weekly teleconferences with U.S. officials in Sanaa and has surveyed these officials following each equipment transfer to determine their satisfaction with the timing of shipments and transfers of security assistance, as well as to identify the embassy’s priorities for future transfers. In addition, in December 2014, DOD conducted a review with U.S. Central Command and the Office of the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to assess the extent to which these stakeholders were satisfied with the transfer timeliness for each Section 1206 and 1207(n) project whose deadlines had passed from fiscal years 2009 through 2013, including equipment that passed through the JCP. These stakeholders assessed each of the projects as successful.
Even though some training was not conducted, some equipment was not provided, and some equipment and training arrived in Yemen late, they believed that the key equipment arrived on time and thus the equipment was sufficient to begin operations. DOD includes 2 years of spare parts for short-term maintenance needs of Section 1206 and 1207(n) assistance and has resumed requesting the source and amount of anticipated U.S. funds, such as FMF, that are needed for longer-term maintenance. A presidential directive and DOD guidance require planning for the maintenance of security assistance equipment, regardless of the host nation’s capability to maintain it. Although FMF has been a significant source of funds for maintaining Section 1206 and 1207(n) equipment, the fiscal year 2015 project proposal template for Section 1206 projects did not require information regarding the source or amount of U.S. funds needed for long-term maintenance, including any anticipated FMF needs. After reviewing a draft of this report, DOD officials provided updated documentation, including the fiscal year 2016 project proposal template. As it did through fiscal year 2014, the fiscal year 2016 template resumes asking for specific information on the availability of FMF if a partner country is unlikely to cover the expected costs of maintenance. DOD officials have also indicated several factors that impede maintenance efforts, including factors related to security, the partner country, and logistics. In some cases where maintenance has not been performed, the equipment is no longer fully operational. DOD guidance and a presidential directive require planning for maintaining security assistance, including Section 1206 and 1207(n) projects. Presidential Policy Directive 23 stresses the need for long-term, sustainable commitments. DOD guidance acknowledges that adequate maintenance is a long-term need. Specifically, DOD’s Joint Doctrine Note 1-13 (Security Force Assistance) states that sustainability is essential to security force assistance activities—regardless of the host nation’s capability to sustain them. The Fiscal Year 2015 National Defense Authorization Act also includes a requirement that DOD notify Congress of any arrangements for the sustainment of a Section 1206 project, the source of any maintenance funds, and the performance outcomes it expects to achieve beyond the project’s planned completion date. As noted earlier, Section 1206 and 1207(n) projects are implemented following a process involving both State and DOD officials. A key step in the process is developing a specific proposal for a project, which is entered into a template and contains information such as the nature of the threat, the desired capability, and benefits to the United States so that State and DOD officials can evaluate the proposals. For short-term maintenance needs, the Section 1206 and 1207(n) project proposal template, starting in fiscal year 2011, has indicated that each project should contain spare parts for 2 years of maintenance. Our analysis found that all Section 1206 proposals since 2012 included 2 years’ worth of spare parts. The template for fiscal year 2016 Section 1206 project proposals has resumed including fields identifying the source and amount of anticipated U.S. funding for long-term maintenance if partner country funds are not expected to cover the anticipated costs of long-term maintenance. After the 2-year spare parts package is exhausted, DOD has generally relied on FMF funding to provide maintenance.
Prior to fiscal year 2015, the project proposal templates included fields related to maintenance, including any anticipated future FMF needs. Specifically, from fiscal years 2011 through 2014, Section 1206 project proposal templates included maintenance-related fields regarding the anticipated transition to FMF, whether an FMF request had been submitted, and the duration of the need for FMF. Additionally, in fiscal years 2013 and 2014, the templates also asked for an estimated annual cost for FMF needs. The fiscal year 2015 Section 1206 proposal template no longer requested information on the source of long-term maintenance funding, including FMF needs if the partner country could not cover expected long-term maintenance costs, but the fiscal year 2016 template has resumed collecting information on FMF. While the fiscal year 2015 guidance called for detailed sustainment plans, the fiscal year 2015 project proposal template only solicited information on general sustainment costs and the partner nation’s ability to contribute to sustaining the project. In an attempt to streamline its template, DOD officials removed explicit mentions of FMF in the fiscal year 2015 project proposal template. According to the Office of the Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict, the change reflects new legislative language in the 2015 National Defense Authorization Act that requires information on the sustainment plan for the proposed program, which could include information on FMF. DOD data show that nearly 75 percent of FMF funding allocations from fiscal years 2009 through 2014 for Yemen were planned to be used in support of previously furnished equipment, including equipment provided through the Section 1206 and 1207(n) programs. Some equipment provided by Section 1206 is not fully operational and needs either repairs or spare parts. For example, according to DOD officials, a CN-235 aircraft provided to Yemen by a Section 1206 project was grounded in October 2014 because it lacked spare tires, resulting in a loss of medium-lift capability for the Yemeni Air Force. Embassy documents report that, as of August 2014, three of four Huey II helicopters provided by another Section 1206 project were minimally operational. They could only be flown safely for training purposes and were unable to conduct counterterrorism missions. In addition, both of the large coastal patrol boats provided to the Yemeni Coast Guard by Section 1206 funding were in disrepair and required a maintenance overhaul. DOD has noted that maintenance and sustainment continue to be key priorities for security assistance. DOD officials indicated that several factors impede their ability to ensure that Section 1206 and 1207(n) equipment is properly maintained in Yemen, including security-related factors, partner country factors, and logistical factors. Security-related factors. Because of ongoing political instability and security threats in Yemen, the U.S. embassy has ordered some of its employees to depart from the country twice in the past few years, and since February 2015 the U.S. embassy has temporarily suspended operations. DOD officials indicated that this has hampered their efforts to maintain relationships with their Yemeni counterparts and ensure that Section 1206 and 1207(n) equipment is properly maintained.
In addition, some DOD officials stated that even when the embassy was open, their ability to travel outside of Sanaa was limited because of security concerns, which also limits the extent to which U.S. officials can monitor or inventory the U.S.-funded equipment. Partner country factors. U.S. officials must work with their Yemeni counterparts to use FMF funds. According to DOD officials, while Yemeni cooperation in executing maintenance programs is important, ensuring collaboration in developing requests and prioritizing maintenance requirements is often a lengthy process and can delay requests for maintenance funding. For example, the development of a proposal with Yemen to maintain an airplane purchased under Section 1206 took 9 months. Logistical factors. DOD officials reported that lengthy equipment procurement timeframes and competing worldwide procurement and shipment priorities can delay maintenance equipment bound for Yemen. DOD officials also explained that paperwork errors and customs challenges sometimes delay maintenance for certain equipment in Yemen. AQAP terrorists based in Yemen continue to be a threat to the United States and Yemen’s national security. The United States has invested more than $500 million in security assistance to Yemen since fiscal year 2006 to build Yemen’s counterterrorism capacity—much of this amount has been Section 1206 and 1207(n) funding to provide equipment for Yemeni security forces to combat security threats. DOD is required to notify Congress of Section 1206 and 1207(n) projects and wait 15 days before implementing them. However, we found that DOD’s data systems used to track related security assistance contain inaccurate information regarding when projects clear the congressional notification period. We also found incomplete information on the transfer of over one-third of the equipment intended for Yemeni security forces, although the completeness of information had improved since 2013. As a result, DOD’s data do not allow it or a third party to accurately and readily assess its performance against the 18-month transfer deadlines set in its notifications to Congress. To further improve the ability of U.S. government agencies and others to assess the timeliness of U.S. security assistance to Yemen, we recommend that the Secretary of Defense take steps to improve the accuracy of data used to track when Section 1206 projects are congressionally cleared for implementation. We provided a draft of this report to State and DOD for their review and comment. State provided technical comments, which we have incorporated as appropriate. DOD provided written comments, which are reprinted in appendix III. DOD concurred with our recommendations. DOD concurred with our recommendation to improve the accuracy of data used to track congressional notification clearance dates. DOD noted steps taken since fiscal year 2013 to improve data collection. We acknowledge DOD’s progress in ensuring the accuracy of dates of final transfer, as we previously recommended in GAO-13-84. We maintain that the dates of the congressional notification process are also important in determining timeliness and thus should be accurate. In our draft, we also included a second draft recommendation that DOD resume identifying the amount of anticipated long-term maintenance funds for Section 1206 projects. DOD concurred with our second recommendation. 
In addition, after reviewing a draft of our report, DOD provided updated documentation that included the fiscal year 2016 Section 1206 project proposal template, which reinstated a request for information on the amount of anticipated long-term maintenance funds, if any. The fiscal year 2016 template addresses our concerns because it provides a means to collect information regarding FMF and other potential U.S. funding options that could be used if the partner nation is unlikely to cover expected long-term maintenance costs. As a result, we removed this second recommendation from the final report. We are sending copies of this report to the appropriate congressional committees and the Secretaries of Defense and State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In 2014, Senate Report 113-176, which accompanied the proposed Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act of 2015, S. 2410, included a provision for GAO to report on several issues pertaining to security assistance to Yemen. This report examines (1) the disbursement of funds allocated for key security assistance programs for Yemen since 2009, (2) the timeliness of Section 1206 and 1207(n) assistance, and (3) the Department of Defense’s (DOD) plans for maintaining equipment provided to Yemen under Section 1206 and 1207(n). To address our objectives, we reviewed and analyzed relevant national security strategies, key congressional legislation, and planning documents related to U.S. security assistance to Yemen. We discussed U.S. strategies, programs, and activities related to security assistance to Yemen with U.S. officials from DOD, the Department of State (State), the U.S. embassy in Yemen, and the intelligence community. We planned to travel to Yemen but were unable to do so because of Yemen’s unstable security environment during the time of our review. However, we were able to interview officials from the U.S. Office of Military Cooperation and Special Forces in Yemen via video teleconference and in person while they were in Washington, D.C. To address security assistance disbursements, we reviewed authorizing legislation for Section 1206, Section 1207(n), and State’s Foreign Military Financing (FMF) programs. We also analyzed funding data including allocations, obligations, and disbursements for Yemen from fiscal years 2009 through 2014. DOD and State provided data on allocations, amounts reallocated, unobligated balances, unliquidated obligations, and either disbursements of Section 1206 and 1207(n) funds or commitments of FMF funds. We analyzed these data to determine the extent to which funds from these three programs had been disbursed or committed. We assessed these data by interviewing cognizant agency officials and comparing the data with previously published data, as well as verifying them with congressional notifications and case closure receipts to determine that they were sufficiently reliable for our purposes. 
To assess the extent to which transfers of fiscal years 2009 through 2014 Section 1206 and 1207(n) equipment and training to Yemeni security forces have been timely and the efforts State and DOD have made to address factors affecting the timeliness of these transfers, we reviewed DOD and embassy documents, analyzed DOD transfer data against criteria identified in congressional notifications, and interviewed DOD and State officials. We used these sources to describe the key steps in the process for shipping assistance to Yemen and associated timelines. In the congressional notifications, DOD described the Section 1206 and 1207(n) projects it intended to implement and established an 18-month deadline for transferring all equipment and services related to the projects to Yemeni security forces. The process includes two dates that are key to assessing whether DOD has met its 18-month deadline: the date the congressional notification period ends, allowing DOD to begin implementing a project, and the date the equipment and training are transferred to Yemeni security forces. During the course of our review, we found inaccuracies in DOD’s data regarding the congressional notification period and omissions in the data regarding the transfer dates. DOD notified Congress of plans to implement 15 Section 1206 and 1207(n) projects in fiscal years 2009 through 2014. For 11 of these 15 projects, DOD data contained inaccurate congressional notification clearance dates—the starting point of the 18-month shipment and transfer process. However, using the legislation and copies of congressional notifications, we were able to calculate the earliest possible dates DOD could have started implementing the Section 1206 and 1207(n) projects. We obtained documents indicating the dates when DOD notified Congress of its planned Section 1206 and 1207(n) projects for Yemen, added the required minimum of 15 days to those dates, and used these new dates as the start of the 18-month transfer period. We then used DOD data on shipments and transfers of equipment and training to assess the extent to which security assistance was transferred to Yemeni security forces within 18 months. However, because we relied on the earliest possible dates DOD could have begun implementing the projects, and the 18-month timeline starts at implementation, our analysis may understate timeliness; in some cases, implementation could have begun several days later because Congress may have requested additional information that delayed the start. Of the 15 Section 1206 and 1207(n) projects notified to Congress in fiscal years 2009 through 2014, 11 had deadlines that had already passed at the time of our review; we did not assess timeliness for the remaining 4 projects because their deadlines had not yet passed. We assessed timeliness for the 11 fiscal years 2009 through 2013 Section 1206 and 1207(n) projects, which consisted of a total of 4,323 line items. Projects ranged from 1 line item to 1,897 line items. DOD data contained dates for the final transfer of equipment for 62 percent of the line items related to fiscal years 2009 through 2013 Section 1206 and 1207(n) projects. For the remaining 38 percent, we calculated whether shipment took place prior to established transfer deadlines. DOD data were current as of January 7, 2015. Shipment indicates the first step in the process of shipping items, though equipment may stop at various waypoints between initial shipment and final transfer to Yemeni security forces.
We used the following criteria for classifying the timeliness of items: On time: The transfer date fell on or before the 18-month deadline. Late: The transfer date fell after the 18-month deadline, or—for items lacking a transfer date—the shipment date fell after the 18-month deadline for transfer. Cannot determine: The shipment date fell prior to the 18-month deadline, but no transfer date was documented. Such items may have been transferred on time, they may have been late, or they may not have been transferred yet. Discussions with DOD officials and review of DOD documents indicated that some of these items were ultimately transferred on time and some were transferred late, but the transfer dates were not included in the data. DOD also shipped some items to its Joint Consolidation Point in Pennsylvania or other waypoints en route to Yemen and then held the equipment at those waypoints because of security concerns, leaving these items without transfer dates because transfer has not yet occurred. To assess the reliability of the transfer dates that DOD provided, we interviewed cognizant officials about their processes for entering data, performed basic logic checks, and spot-checked receipts that DOD provided for certain items that had been transferred. Based on our assessment, we determined that the transfer data in the database were sufficiently reliable for our purposes. DOD officials noted several reasons that 38 percent of the line items were missing transfer dates, including the failure of security cooperation officers to enter the dates in the data system and the failure of vendors delivering large items directly to Yemen to submit the relevant information. They reported that DOD began efforts to address these challenges in 2012. In fact, for the fiscal years 2009 and 2010 projects in our review, the final transfer dates were only available for 3 percent of items, whereas transfer dates were available for 91 percent of the fiscal year 2012 and 2013 projects. Our previous work has identified similar issues related to the quality of DOD’s data on the delivery of security assistance programs, including Section 1206 and 1207(n). In 2012, we recommended that DOD establish procedures to help ensure that DOD implementing agencies populate these data systems with complete data. In response to our recommendation, in May 2014, DOD updated its security assistance management manual to require security cooperation officers to report the delivery of equipment in DOD data systems within 30 days of delivery, programmed its systems to update shipment tracking information more frequently, and developed plans to ensure that accurate and timely delivery status information will be maintained in a new information system that DOD is developing. We assessed the other variables that we used, namely the project titles, line items, and project year, by crosschecking them against information from DOD officials, congressional notifications, and other DOD documents to determine that they were sufficiently reliable for our purposes. However, we determined that the data on the value of each line item were not sufficiently reliable to determine timeliness by percentage of dollar value transferred per project. Therefore, in table 4 of this report (see app. II), we report overall project allocation and descriptions of each project. In addition, as noted above, our estimates may understate timeliness because we estimated based on the earliest possible implementation dates rather than the actual implementation dates.
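The classification described above reduces to simple date arithmetic: the earliest implementation date is the congressional notification date plus 15 days, the deadline is 18 months after that date, and each line item is then compared against its transfer or shipment date. The Python sketch below illustrates that logic. It is not GAO's analysis code; the function names, field layout, and example dates are hypothetical, while the 15-day notification period, the 18-month deadline, and the three categories come from the report.

from datetime import date, timedelta
from typing import Optional

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after d, clamped to the month end."""
    years, month_index = divmod(d.month - 1 + months, 12)
    year, month = d.year + years, month_index + 1
    for day in (d.day, 30, 29, 28):  # fall back to the last valid day of the target month
        try:
            return date(year, month, day)
        except ValueError:
            continue

def classify_line_item(notified: date, shipped: date, transferred: Optional[date]) -> str:
    """Classify one line item as 'on time', 'late', or 'cannot determine'."""
    earliest_start = notified + timedelta(days=15)  # end of the 15-day congressional notification period
    deadline = add_months(earliest_start, 18)       # DOD's 18-month transfer deadline
    if transferred is not None:
        return "on time" if transferred <= deadline else "late"
    # No documented transfer date: the item is late only if even its initial
    # shipment occurred after the deadline; otherwise timeliness is unknown.
    return "late" if shipped > deadline else "cannot determine"

# Hypothetical example: a project notified to Congress on April 1, 2010
print(classify_line_item(date(2010, 4, 1), date(2011, 2, 1), date(2011, 9, 1)))  # on time
print(classify_line_item(date(2010, 4, 1), date(2012, 1, 15), None))             # late
print(classify_line_item(date(2010, 4, 1), date(2011, 2, 1), None))              # cannot determine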
To analyze the extent to which equipment provided to Yemen under Section 1206 and 1207(n) has been maintained, we examined legislation, a presidential directive, and agency guidance to determine the requirements for short- and long-term maintenance planning. We also reviewed plans for maintenance in Section 1206 and 1207(n) project proposals to determine whether they included the source and amount of funding proposed for maintaining the equipment. We analyzed DOD data on Section 1206 and 1207(n) projects for the inclusion of line items related to maintenance training and spare parts. We also analyzed plans to finance and provide future maintenance support to Yemeni security forces. We interviewed DOD and embassy officials about the maintenance status of equipment already in Yemen and the factors that impeded maintaining equipment provided under Section 1206 and 1207(n). We conducted this performance audit from December 2014 to April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From fiscal years 2009 through 2014, the Department of Defense (DOD) allocated $401 million in counterterrorism training and equipment to Yemeni security forces through its Section 1206 and Section 1207(n) security assistance programs. Table 4 identifies the 15 projects that DOD undertook using Section 1206 and Section 1207(n) funding and the specific capabilities DOD planned to build through these efforts. The following are GAO comments on the Department of Defense’s (DOD) letter in response to GAO’s draft report. 1. DOD concurred with our recommendation, but did not address how DOD would specifically improve the accuracy of data regarding dates when a Section 1206 project is congressionally cleared for implementation. Rather, DOD’s response highlights changes the Defense Security Cooperation Agency (DSCA) made in fiscal year 2013 to improve overall data collection, including a discussion of the Joint Consolidation Point. We found improvements in DOD’s data on transfer dates for Sections 1206 and 1207(n) assistance to Yemen, but we continue to believe DOD needs to take steps to ensure that accurate dates for congressional clearance are reflected in its data systems. 2. DOD concurred with our recommendation but suggested that our analysis of sustainment costs did not reflect the full picture. After reviewing a draft of our report, DOD provided updated documentation, including the fiscal year 2016 Section 1206 project proposal template. We have updated the report to include additional information, including information regarding the fiscal year 2016 project proposal template. The fiscal year 2016 template addresses our concerns by asking for information on the source of any U.S. funding if a partner country is not expected to fully cover the costs of long-term maintenance. We have thus removed this recommendation from the final report. 3. DOD asserted that our timeliness assessment should have focused on larger end items. Our report acknowledged that, in some cases, large end items were transferred within 18 months and that this focus on key capabilities is a priority for DOD.
Other large end items—such as the CN-235 aircraft that was transferred more than 41 months after congressional notification—were late. In addition, as noted in our report, in its notifications to Congress, DOD has stated that all deliveries will be complete within 18 months. DOD has not indicated in these notifications that this deadline applies only to what it considers to be the main components. Further, DOD’s data lack final transfer dates for larger end items as well as for smaller ones. Finally, as noted in our report, DOD data on the key dates related to timeliness were not sufficiently reliable at levels other than the requisition level, and the data on dollar value at this level of detail were not available. Therefore, we could only assess timeliness based on the transfer of individual pieces of equipment rather than dollar value or other criteria. Charles Michael Johnson, Jr., (202) 512-7331, or johnsoncm@gao.gov. In addition to the contact named above, Jason Bair (Assistant Director), Brian Hackney (Analyst-In-Charge), Katherine Forsyth, Kathryn Bolduc, Lynn Cothern, Martin De Alteriis, and Mark Dowling made key contributions to this report. Tina Cheng also provided technical assistance.
Al Qaeda in the Arabian Peninsula (AQAP) is one of the top U.S. national security threats. AQAP is based in Yemen, where political conflict, including a Houthi insurgency, has complicated stability. Since fiscal year 2006, DOD and State have allocated over $500 million to provide training and equipment to the Yemeni security forces to assist Yemen in combating AQAP. Such assistance has been provided through three main programs: Section 1206 and Section 1207(n), which have been used to build Yemeni capacity, and FMF, which has been used to maintain equipment provided to Yemen. A Senate report included a provision for GAO to review U.S. security assistance to Yemen. GAO examined (1) the disbursement of funds allocated to key security assistance programs for Yemen since fiscal year 2009, (2) the timeliness of Section 1206 and 1207(n) assistance, and (3) DOD plans for maintaining equipment provided to Yemen under Section 1206 and 1207(n), including the use of FMF. GAO reviewed agency documents, analyzed DOD and State data, and met with U.S. officials based in Washington, D.C., and Sanaa, Yemen. Since fiscal year 2009, the Department of Defense (DOD) has disbursed almost $256 million of the $401 million allocated to Yemen under the Section 1206 and 1207(n) security assistance programs, while the Department of State (State) has committed $34 million of the $95 million allocated under the Foreign Military Financing (FMF) program. In light of Yemen's currently unstable situation, security assistance programs to Yemen are temporarily suspended. After correcting errors in DOD data, GAO determined that at least 60 percent of the Section 1206 and 1207(n) assistance from fiscal years 2009 through 2013 was timely, but delays affected 10 of 11 projects. DOD notified Congress that all training and equipment for each project would be transferred to Yemeni security forces within DOD's established deadline of 18 months. However, DOD's data contained inaccurate information regarding when the congressional notification period ended, which clears DOD to implement these projects. The inaccurate data limit DOD's and third parties' ability to readily assess the extent to which these projects met the 18-month deadline or to report to Congress on the status of assistance projects. Specifically, after correcting errors in DOD data, GAO found that at least 60 percent of the items were transferred on time, 4 percent of the items were late, and the remaining 36 percent of items were shipped but DOD's data system did not have information on when they were transferred to Yemen. The 4 percent of late items were spread across 10 of the 11 projects. DOD plans for short-term (i.e., 2 years) maintenance needs for Section 1206 and 1207(n) projects and has resumed requesting the source and amount of long-term maintenance funds. A presidential directive and DOD guidance call for long-term maintenance planning, regardless of the partner country's ability to contribute. From fiscal years 2011 through 2014, DOD requested specific information on the amount and source of anticipated U.S. maintenance funding, if any, in the Section 1206 project proposal template. The fiscal year 2015 template did not request such information, but after reviewing a draft of GAO's report, DOD provided a copy of its fiscal year 2016 template, which requests additional information on long-term maintenance plans. DOD officials noted that several factors impede maintenance efforts and some equipment is not fully operational.
GAO recommends that DOD take steps to improve the accuracy of data regarding Section 1206 congressional notification clearance. DOD concurred and noted steps it took in fiscal year 2013 to improve overall data collection, but did not discuss improving data on congressional notification clearance dates. GAO continues to maintain that DOD should take steps to improve the accuracy of its data on congressional notification clearance dates.
Three types of Internet pharmacies selling prescription drugs directly to consumers have emerged in recent years. First, some Internet pharmacies operate much like traditional drugstores or mail-order pharmacies: they dispense drugs only after receiving prescriptions from consumers or their physicians. Other Internet pharmacies provide customers with medication without a physical examination by a physician. In place of the traditional face-to-face physician/patient consultation, the consumer fills out a medical questionnaire that is reportedly evaluated by a physician affiliated with the pharmacy. If the physician approves the questionnaire, he or she authorizes the online pharmacy to send the medication to the patient. This practice tends to be largely limited to “lifestyle” prescription drugs, such as those that alleviate allergies, promote hair growth, treat impotence, or control weight. Finally, some Internet pharmacies dispense medication without a prescription. Regardless of their methods, all Web sites selling prescription drugs are governed by the same complex network of laws and regulations at both the state and federal levels that govern traditional drugstores and mail-order drug services. In the United States, prescription drugs must be prescribed and dispensed by licensed health care professionals, who can help ensure proper dosing and administration and provide important information on the drug’s use to customers. To legally dispense a prescription drug, a pharmacist licensed with the state and working in a pharmacy licensed by the state must be presented with a valid prescription from a licensed health care professional. Every state requires resident pharmacists and pharmacies to be licensed. The regulation of the practice of pharmacy is rooted in state pharmacy practice acts and regulations enforced by the state boards of pharmacy, which are responsible for licensing pharmacists and pharmacies. The state boards of pharmacy also are responsible for routinely inspecting pharmacies, ensuring that pharmacists and pharmacies comply with applicable state and federal laws, and investigating and disciplining those that fail to comply. In addition, 40 states require out-of-state pharmacies—called nonresident pharmacies—that dispense prescription drugs to state residents to be licensed or registered. Some state pharmacy boards regulate Internet pharmacies according to the same standards that apply to nonresident pharmacies. State pharmacy boards’ standards may require that nonresident pharmacies do the following: maintain separate records of prescription drugs dispensed to customers in the state so that these records are readily retrievable from the records of prescription drugs dispensed to other customers; provide a toll-free telephone number for communication between customers in the state and a pharmacist at the nonresident pharmacy and affix this telephone number to each prescription drug label; provide the location, names, and titles of all principal corporate officers; provide a list of all pharmacists who are dispensing prescription drugs to customers in the state; designate a pharmacist who is responsible for all prescription drugs dispensed to customers in the state; provide a copy of the most recent inspection report issued by the home state; and provide a copy of the most recent license issued by the home state. States also are responsible for regulating the practice of medicine. All states require that physicians practicing in the state be licensed to do so.
State medical practice laws generally outline standards for the practice of medicine and delegate the responsibility of regulating physicians to state medical boards. State medical boards license physicians and grant them prescribing privileges. In addition, state medical boards investigate complaints and impose sanctions for violations of the state medical practice laws. While states have jurisdiction within their borders, the sale of prescription drugs on the Internet can occur across state lines. The sale of prescription drugs between states or as a result of importation falls under the jurisdiction of the federal government. FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported pharmaceutical products under the FDCA. Specifically, FDA establishes standards for the safety, effectiveness, and manufacture of prescription drugs that must be met before they are approved for the U.S. market. FDA can take action against (1) the importation, sale, or distribution of an adulterated, misbranded, or unapproved drug; (2) the illegal promotion of a drug; (3) the sale or dispensing of a prescription drug without a valid prescription; and (4) the sale and dispensing of counterfeit drugs. If judicial intervention is required, Justice will become involved to enforce the FDCA. Justice also enforces other consumer protection statutes for which the primary regulatory authorities are administrative agencies such as FDA and FTC. FTC has responsibility for preventing deceptive or unfair acts or practices in commerce and has authority to bring an enforcement action when an Internet pharmacy makes false or misleading claims about its products or services. Finally, Justice’s DEA regulates controlled substances, which includes issuing all permits for the importation of pharmaceutical controlled substances and registering all legitimate importers and exporters, while Customs and the Postal Service enforce statutes and regulations governing the importation and domestic mailing of drugs. The very nature of the Internet makes identifying all pharmacies operating on it difficult. As a result, the precise number of Internet pharmacies selling prescription drugs directly to consumers is unknown. We identified 190 Internet pharmacies selling prescription drugs directly to consumers, 79 of which dispense prescription drugs without a prescription or on the basis of a consumer’s having completed an online questionnaire (see table 1). Also, 185 of the identified Internet pharmacies did not disclose the states where they were licensed to dispense prescription drugs, and 37 did not provide an address or telephone number permitting the consumer to contact them if problems arose. Obtaining prescription drugs from unlicensed pharmacies without adequate physician supervision, including an examination, places consumers at risk of harmful side effects, possibly even death, from drugs that may be inappropriate for them. Estimates of the number of Internet pharmacies range from 200 to 400. However, it is difficult to determine the precise number of Internet pharmacies selling prescription drugs directly to consumers because Internet sites can be easily created and removed and some Internet pharmacies operate for a period of time at one Internet address and then close and reappear under another name. In addition, many Internet pharmacies have multiple portal sites (independent Web pages that connect to a single pharmacy).
We found 95 sites that at first appeared to be discrete Internet pharmacies but were actually portal sites. As consumers click on the icons and links provided, they are brought to an Internet site that is completely different from the one they originally visited. Consumers may be unaware of these site changes unless they are paying close attention to the Internet site address bar on their browser. Some Internet pharmacies had as many as 18 portal sites.

About 58 percent, or 111, of the Internet pharmacies we identified told consumers that they had to provide a prescription from their physician to purchase prescription drugs. Prescriptions may be submitted to an Internet pharmacy in various ways, including by mail or fax and through contact between the consumer’s physician or current pharmacy and the Internet pharmacy. The Internet pharmacy then verifies that a licensed physician actually has issued the prescription to the patient before it dispenses any drugs. Internet pharmacies that require a prescription from a physician generally operate similarly to traditional drugstore or mail-order pharmacies. In some instances, the Internet site is owned by or affiliated with a traditional drugstore.

We identified 54 Internet pharmacies that issued prescriptions and dispensed medications on the basis of an online questionnaire. Generally, these short, easy-to-complete questionnaires asked about the consumer’s health profile, medical history, current medication use, and diagnosis. In some instances, pharmacies provided the answers necessary to obtain the prescription by placing checks next to the “correct” answers. Information on many of the Internet sites indicated that a physician reviews the questionnaire and then issues a prescription. The cost of the physician’s review ranged from $35 to $85, with most sites charging $75. Moreover, certain illegal and unethical prescribing and dispensing practices are occurring through some Internet pharmacies that focus solely on prescribing and dispensing certain “lifestyle” drugs, such as diet medications and drugs to treat impotence.

We also identified 25 Internet pharmacies that dispensed prescription drugs without prescriptions. In the United States, it is illegal to sell or dispense a prescription drug without a prescription. Nevertheless, to obtain a drug from these Internet pharmacies, the consumer was asked only to complete an order form indicating the type and quantity of the drug desired and to provide credit card billing information. Twenty-one of these 25 Internet pharmacies were located outside the United States; the location of the remaining 4 could not be determined. Generally, it is illegal to import prescription drugs that are not approved by FDA and manufactured in an FDA-approved facility. Obtaining prescription drugs from foreign-based Internet pharmacies places consumers at risk from counterfeit or unapproved drugs, or drugs that were manufactured and stored under poor conditions.

The Internet pharmacies that we identified varied significantly in the information that they disclosed on their Web sites. For instance, 153 of the 190 Internet pharmacies we reviewed provided a mailing address or telephone number (see table 1). The lack of adequate identifying information prevents consumers from contacting Internet pharmacies if problems should arise. More importantly, most Internet pharmacies did not disclose the states where they were licensed to dispense prescription drugs.
We contacted all U.S.-based Internet pharmacies to obtain this information. We then asked pharmacy boards in the 12 states with the largest numbers of licensed Internet pharmacies (70 in all) to verify their licensure status. Sixty-four pharmacies required a prescription to dispense drugs; of these, 22, or about 34 percent, were not licensed in one or more of the states in which they had told us they were licensed and in which they dispensed drugs.

Internet pharmacies that issued prescriptions on the basis of online questionnaires disclosed even less information on their Web sites. Only 1 of the 54 Internet pharmacies disclosed the name of the physician responsible for reviewing questionnaires and issuing prescriptions. We attempted to contact 45 of these Internet pharmacies to obtain their licensure status; we did not attempt to contact 9 because they were located overseas. We were unable to reach 13 because they did not provide, and we could not obtain, a mailing address or telephone number. In addition, 18 would not return repeated telephone calls, 3 were closed, and 2 refused to tell us where they were licensed. As a result, we were able to obtain licensure information for only nine Internet pharmacies affiliated with physicians who prescribe online. We found that six of the nine prescribing pharmacies were not licensed in one or more of the states in which they had told us they were licensed and in which they dispensed prescription drugs. The ability to buy prescription drugs from Internet pharmacies not licensed in the state where the customer is located and without appropriate physician supervision, including an examination, means that important safeguards related to the doctor/patient relationship and intrinsic to conventional prescribing are bypassed.

We also found that only 44 Internet pharmacies (23 percent) posted a privacy statement on their Web sites. As recent studies have indicated, consumers are concerned about safeguarding their personal health information online and about potential transfers to third parties of the personal information they have given to online businesses. Most of these 44 pharmacies stated that the information provided by the patient would be kept confidential and would not be sold or traded to third parties. Our review of state privacy laws revealed that at least 21 states have laws protecting the privacy of pharmacy information. While the federal Health Insurance Portability and Accountability Act of 1996 called for nationwide protections for the privacy and security of electronic health information, including pharmacy data, regulations have not yet been finalized.

State pharmacy and medical boards have policies created to regulate brick-and-mortar pharmacies and traditional doctor/patient relationships. However, the traditional regulatory and enforcement approaches used by these boards may not be adequate to protect consumers from the potentially dangerous practices of some Internet pharmacies. Nevertheless, 20 states have taken disciplinary action against Internet pharmacies and physicians that have engaged in illegal or unethical practices. Many of these states have also introduced legislation to address illegal or unethical sales practices of Internet pharmacies and physicians prescribing on the Internet. Appendix II contains details on state actions to regulate pharmacies and physicians practicing on the Internet.
The advent of Internet pharmacies poses new challenges for the traditional state regulatory agencies that oversee the practices of pharmacies. While 12 pharmacy boards reported that they have taken action against Internet pharmacies for illegally dispensing prescription drugs, many said they have encountered difficulties in identifying, investigating, and taking disciplinary action against illegally operating Internet pharmacies that are located outside state borders but shipping to the state. State pharmacy board actions consisted of referrals to federal agencies, state Attorneys General, or state medical boards.

Almost half of the state pharmacy boards reported that they had experienced problems with or received complaints about Internet pharmacies. Specifically, 24 state pharmacy boards told us that they had experienced problems with Internet pharmacies not complying with their state pharmacy laws. The problems most commonly cited were distributing prescription drugs without a valid license or prescription, or without establishing a valid physician/patient relationship. Moreover, 20 state boards (40 percent) reported they had received at least 78 complaints, ranging from 1 to 15 per state, on Internet pharmacy practices. Many of these complaints were about Internet pharmacies that were dispensing medications without a valid prescription or had dispensed the wrong medication.

State pharmacy boards also reported that they have encountered difficulties in identifying Internet pharmacies that are located outside their borders. About 74 percent of state pharmacy boards reported having serious problems determining the physical location of an Internet pharmacy affiliated with a given Web site. Sixteen percent of state pharmacy boards reported some difficulty, and 10 percent reported no difficulty. Without this information, it is difficult to identify the companies and people responsible for selling prescription drugs.

More importantly, state pharmacy boards have limited ability and authority to investigate and act against Internet pharmacies located outside their state but doing business in their state without a valid license. In our survey, many state pharmacy boards cited limited resources and jurisdictional and technological limitations as obstacles to enforcing their laws with regard to pharmacies not located in their states. Because of jurisdictional limits, states have found that their traditional investigative tools—interviews, physical or electronic surveillance, and serving subpoenas to produce documents and testimony—are not necessarily adequate to compel disclosure of information from a pharmacy or pharmacist located out of state. Similarly, the traditional enforcement mechanisms available to state pharmacy boards—disciplinary actions or sanctions against licensees—are not necessarily adequate to control a pharmacy or pharmacist located out of state. In the absence of the ability to investigate and take disciplinary action against a nonresident pharmacy, state pharmacy boards have been limited to referring unlicensed or unregistered Internet pharmacies to their counterpart boards in the states where the pharmacies are licensed.

State medical boards have concerns about the growing number of Internet pharmacies that issue prescriptions on the basis of a simple online questionnaire rather than a face-to-face examination.
The AMA is also concerned that prescriptions are being provided to patients without the benefit of a physical examination, which would allow evaluation of any potential underlying cause of a patient’s dysfunction or disease, as well as an assessment of the most appropriate treatment. Moreover, medical boards are receiving complaints about physicians prescribing on the Internet. Twenty of the 45 medical boards responding to our survey reported that they had received complaints about physicians prescribing on the Internet during the last year. The most frequent complaint was that the physician did not perform an examination of the patient. As a result, medical boards in eight states have taken action against physicians for Internet prescribing violations. Disciplinary actions and sanctions have ranged from monetary fines and letters of reprimand to probation and license suspension. Thirty-nine of the 45 medical boards responding to our survey concluded that a physician who issued a prescription on the basis of a review of an online questionnaire did not satisfy the standard of good medical practice required under their states’ laws. Moreover, ten states have introduced or enacted legislation regarding the sale of prescription drugs on the Internet, including five states that have introduced legislation to prohibit physicians and other practitioners from prescribing drugs on the Internet without conducting an examination or having a prior physician/patient relationship. Twelve states have adopted rules or statements that clarify their positions on the use of online questionnaires for issuing prescriptions. Generally, these statements either prohibit online prescribing or state that prescribing solely on the basis of answers to a questionnaire is inappropriate and unprofessional (see app. II).

As in the case of state pharmacy boards, state medical boards have limited ability and authority to investigate and act against physicians located outside of their state but prescribing on the Internet to state residents. Further, they too have had difficulty identifying these physicians. About 55 percent of state medical boards that responded to our survey told us they had difficulty determining both the identity and location of physicians prescribing drugs on the Internet, and 36 percent had difficulty determining whether the physician was licensed in another state.

Since February 1999, six state Attorneys General have brought legal action against Internet pharmacies and physicians for providing prescription drugs to consumers in their states without a state license and for issuing prescriptions solely on the basis of information provided in online questionnaires. Most of the Internet pharmacies that were sued voluntarily stopped shipping prescription drugs to consumers in those states. As a result, at least 18 Internet pharmacies have stopped selling prescription drugs to residents in Illinois, Kansas, Michigan, Missouri, New Jersey, and Pennsylvania. Approximately 15 additional states are investigating Internet pharmacies for possible legal action. Investigating and prosecuting online offenders raise new challenges for law enforcement. For instance, Attorneys General have complained that the lack of identifying information on pharmacy Web sites makes it difficult to identify the companies and people responsible for selling prescription drugs.
Moreover, even if a state successfully sues an Internet pharmacy for engaging in illegal or unethical practices, such as prescribing on the basis of an online questionnaire or failing to adequately disclose identifying information, the Internet pharmacy is not prohibited from operating in other states. To stop such practices, each affected state must individually bring action against the Internet pharmacy. As a result, to prevent one Internet pharmacy from doing business nationwide, the Attorney General in every state would have to file a lawsuit in his or her respective state court.

Five federal agencies have authority to regulate and enforce U.S. laws that could be applied to the sale of prescription drugs on the Internet. Since Internet pharmacies first began operation in early 1999, FDA, Justice, DEA, Customs, and FTC have increased their efforts to respond to public health concerns about the illegal sale of prescription drugs on the Internet. FDA has taken enforcement actions against Internet pharmacies selling prescription drugs, Justice has prosecuted Internet pharmacies and physicians for dispensing medications without a valid prescription, DEA has investigated Internet pharmacies for illegal distribution of controlled substances, Customs has increased its seizure of packages that contain drugs entering the country, and FTC has negotiated settlements with Internet pharmacies for making deceptive health claims. While these agencies’ contributions are important, their efforts sometimes do not support each other. For instance, to conserve its resources, FDA routinely releases packages of prescription drugs that Customs has detained because they may have been obtained illegally from foreign Internet pharmacies. Such uncoordinated program efforts can waste scarce resources, confuse and frustrate enforcement program administrators and customers, and limit the overall effectiveness of federal enforcement efforts.

FDA has recently increased its monitoring and investigation of Internet pharmacies to determine if they are involved in illegal sales of prescription drugs. FDA has primary responsibility for regulating the sale, importation, and distribution of prescription drugs, including those sold on the Internet. In July 1999, FDA testified before the Congress that it did not generally regulate the practice of pharmacy or the practice of medicine. Accordingly, FDA activities regarding the sale of drugs over the Internet had until then focused on unapproved drugs. As of April 2000, however, FDA had 54 ongoing investigations of Internet pharmacies that may be illegally selling prescription drugs. FDA has also referred approximately 33 cases, involving over 100 Internet pharmacies that may be illegally selling prescription drugs, to Justice for possible criminal prosecution. FDA’s criminal investigations of online pharmacies have, to date, resulted in the indictment and/or arrest of eight individuals, two of whom have been convicted. In addition, FDA is seeking $10 million in fiscal year 2001 to fund 77 staff positions that would be dedicated to investigating and taking enforcement actions against Internet pharmacies.

Justice has increased its prosecution of Internet pharmacies illegally selling prescription drugs. Under the FDCA, a prescription drug is considered misbranded if it is not dispensed pursuant to a valid prescription under the professional supervision of a licensed practitioner.
In July 1999, Justice testified before the Congress that it was examining its legal basis for prosecuting noncompliant Internet pharmacies and violative online prescribing practices. Since that time, according to FDA officials, 22 of the 33 criminal investigations FDA referred to Justice have been actively pursued. Two of the 33 cases were declined by Justice and are being prosecuted as criminal cases by local district attorneys, and 9 were referred to the state of Florida. In addition, Justice filed two cases involving the illegal sale of prescription drugs over the Internet in 1999 and is investigating approximately 20 more cases. Since May 2000, Justice has brought charges against, or obtained convictions of, individuals in three cases involving the sale of prescription drugs by Internet pharmacies without a prescription or the distribution of misbranded drugs.

While DEA has no efforts formally dedicated to Internet issues, it has initiated 20 investigations of the use of the Internet for the illegal sale of controlled substances during the last 15 months. DEA has been particularly concerned about Internet pharmacies that are affiliated with physicians who prescribe controlled substances without examining patients. For instance, in July 1999 a DEA investigation led to the indictment of a Maryland doctor on 34 counts of providing controlled substances to patients worldwide in response to requests made over the Internet. Because Maryland requires that doctors examine patients before prescribing medications, the doctor’s prescriptions were not considered to be legitimately provided. The physician’s conduct on the Internet also violated an essential requirement of federal law, which is that controlled substances must be dispensed only with a valid prescription.

The U.S. Customs Service, which is responsible for inspecting packages shipped to the United States from foreign countries, has increased its seizures of prescription drugs from overseas. Customs officials report that the number of drug shipments seized increased more than fourfold between 1998 and 1999—from 2,139 to 9,725. Most of these seizures involved controlled substances. Because of the large volume, Customs is able to examine only a fraction of the packages entering the United States daily and cannot determine how many of its drug seizures involve prescription drugs purchased from Internet pharmacies. Nevertheless, Customs officials believe that the Internet is playing a role in the increase in illegal drug importation. According to Customs officials, fiscal year 2000 seizures are on pace to equal or surpass 1999 levels.

FTC reports that it is monitoring Internet pharmacies for compliance with the Federal Trade Commission Act, conducting investigations, and making referrals to state and federal authorities. FTC is responsible for combating unfair or deceptive trade practices, including those on the Internet, such as misrepresentation of online pharmacy privacy practices. In 1999, FTC referred two Internet pharmacies to state regulatory boards. This year, FTC charged individuals and Internet pharmacies with making false promotional claims and other violations. Recently, the operators of these Internet pharmacies agreed to settle out of court. According to the settlement agreement, the defendants are barred from misrepresenting medical and pharmaceutical arrangements and any material fact about the scope and nature of the defendants’ goods, services, or facilities.

The sale of prescription drugs to U.S.
residents by foreign Internet pharmacies poses the most difficult challenge for U.S. law enforcement authorities because the seller is not located within U.S. boundaries. Many prescription drugs available from foreign Internet pharmacies are either products for which there is no U.S.-approved counterpart or foreign versions of FDA-approved drugs. In either case, these drugs are not approved for use in the United States, and therefore it is illegal for a foreign Internet pharmacy to ship these products to the United States. In addition, federal law prohibits the sale of prescription drugs to U.S. citizens without a valid prescription. Although FDA officials said that the agency has jurisdiction over a resident in a foreign country who sells to a U.S. resident in violation of the FDCA, from a practical standpoint, FDA is hard-pressed to enforce U.S. laws against foreign sellers. As a result, FDA enforcement efforts against foreign Internet pharmacies have been limited mostly to requesting the foreign government to take action against the seller of the product. FDA has also posted information on its Web site to help educate consumers about safely purchasing drugs from Internet pharmacies.

FDA officials have sent 23 letters to operators of foreign Internet pharmacies warning them that they may be engaged in illegal activities, such as offering to sell prescription drugs to U.S. citizens without a valid, or in some cases without any, prescription. Copies of each letter were sent to regulatory officials in the country in which the pharmacy was based. In response, two Internet pharmacies said they would cease their sales to U.S. residents, and a third said it had ceased its sales of one drug but was still evaluating how it would handle other products. FDA has since requested that Customs detain packages from these Internet pharmacies.

Customs has been successful in working with one foreign government to shut down its Internet pharmacies that were illegally selling prescription drugs to U.S. consumers. In January 2000, Customs assisted Thai authorities in the execution of search and arrest warrants against seven Internet pharmacies, resulting in the arrest of 22 Thai citizens for violating Thailand’s drug and export laws and 6 people in the United States accused of buying drugs from the Thai Internet pharmacies. U.S. and Thai officials seized more than 2.5 million doses of prescription drugs and 245 parcels ready for shipment to the United States.

According to FDA, it is illegal for a foreign-based Internet pharmacy to sell prescription drugs to consumers in the United States if those drugs are unapproved or are not dispensed pursuant to a valid prescription. But FDA permits patients and their physicians to obtain small quantities of drugs sold abroad, but not approved in the United States, for the treatment of a serious condition for which effective treatment may not be available domestically. FDA’s approach has been applied to products that do not represent an unreasonable risk and for which there is no known commercialization or promotion to U.S. residents. Further, a patient seeking to import such a product must provide to FDA the name of the licensed physician in the United States responsible for his or her treatment with the unapproved drug or provide evidence that the product is for continuation of a treatment begun in a foreign country. FDA has acknowledged that its guidance concerning importing prescription drugs through the mail has been inconsistently applied.
At many Customs mail centers, FDA personnel rely on Customs officials to detain suspicious drug imports for FDA screening. Although prescription drugs ordered from foreign Internet pharmacies may not meet FDA’s criteria for importation under the personal use exemption, FDA personnel routinely release illegally imported prescription drugs detained by Customs officials. FDA has determined that the use of agency resources to provide comprehensive coverage of illegally imported drugs for personal use is generally not justified. Instead, the agency’s enforcement priorities are focused on drugs intended for the commercial market and on fraudulent products and those that pose an unreasonable health risk. FDA’s inconsistent application of its personal use exemption frustrates Customs officials and does little to deter foreign Internet pharmacies trafficking in prescription drugs. Accordingly, FDA plans to take the necessary actions to eliminate, or at least mitigate to the extent possible, the inconsistent interpretation and application of its guidance and work more closely with Customs. FDA’s approach to regulation of imported prescription drugs could be affected by enactment of pending legislation intended to allow American consumers to import drugs from certain other countries. Specifically, the appropriations bill for FDA (H.R. 4461) includes provisions that could modify the circumstances under which the agency may notify individuals seeking to import drugs into the United States that they may be in violation of federal law. According to an FDA official, it is not currently clear how these provisions, if enacted, could affect FDA’s ability to prevent the importation of violative drugs. Initiatives at the state and federal levels offer several approaches for regulating Internet pharmacies. The organization representing state boards of pharmacy, NABP, has developed a voluntary program for certifying Internet pharmacies. In addition, state and federal officials believe that they need more authority, as well as information regarding the identity of Internet pharmacies, to protect the public’s health. The organization representing state Attorneys General, NAAG, has asked the federal government to expand the authority of its members to allow them to take action in federal court. In addition, the administration has announced a new initiative that would grant FDA broad new authority to better identify, investigate, and prosecute Internet pharmacies for the illegal sale of prescription drugs. Concerned that consumers have no assurance of the legitimacy of Internet pharmacies, NABP is attempting to provide consumers with an instant mechanism for verifying the licensure status of Internet pharmacies. NABP’s Verified Internet Pharmacy Practice Sites (VIPPS) is a voluntary program that certifies online pharmacies that comply with criteria that attempt to combine state licensing requirements with standards developed by NABP for pharmacies practicing on the Internet. 
To obtain VIPPS certification, an Internet pharmacy must comply with the licensing and inspection requirements of the state where it is physically located and of each state to which it dispenses pharmaceuticals; demonstrate compliance with 17 standards by, for example, ensuring patient rights to privacy, authenticating and maintaining the security of prescription orders, adhering to a recognized quality assurance policy, and providing meaningful consultation between customers and pharmacists; undergo an on-site inspection; develop a postcertification quality assurance program; and submit to continuing random inspections throughout a 3-year certification period. VIPPS-certified pharmacies are identified by the VIPPS hyperlink seal displayed on both their and NABP’s Web sites. Since VIPPS began in the fall of 1999, its seals have been presented to 11 Internet pharmacies, and 25 Internet pharmacies have submitted applications to display the seal.

NAAG strongly supports the VIPPS program but maintains that the most important tool the federal government can give the states is nationwide injunctive relief. Modeled on the federal telemarketing statute, nationwide injunctive relief is an approach that would allow state Attorneys General to take action in federal court; if they were successful, an Internet pharmacy would be prevented from illegally selling prescription drugs nationwide.

Two federal proposals would amend the FDCA to require an Internet pharmacy engaged in interstate commerce to include certain identifying language on its Web site. The Internet Pharmacy Consumer Protection Act (H.R. 2763) would amend the FDCA to require an Internet pharmacy engaged in interstate commerce to include a page on its Web site providing the following information: the name, address, and telephone number of the pharmacy’s principal place of business; each state in which the pharmacy is authorized by law to dispense prescription drugs; the name of each pharmacist and the state(s) in which the individual is licensed; and, if the site offers to provide prescriptions after medical consultation, the name of each prescriber, the state(s) in which the prescriber is licensed, and the health professions in which the individual holds such licenses. Also, under this act a state would have primary enforcement responsibility for any violation involving the purchase of a prescription drug made within the state, provided the state had requirements at least as stringent as those specified in the act and adequate procedures for enforcing those requirements.

In addition, the administration has developed a bill aimed at providing consumers with the protections they enjoy when they go to a drugstore to have their prescriptions filled. For example, when consumers walk into a drugstore to have a prescription filled, they know the identity and location of the pharmacy, and the license on the wall provides visual assurance that the pharmacy meets certain health and safety requirements in that state. Under the Internet Prescription Drug Sales Act of 2000, Internet pharmacies would be required to be licensed in each state where they do business; comply with all applicable state and federal requirements, including the requirement to dispense drugs only pursuant to a valid prescription; and disclose identifying information to consumers.
Internet pharmacies also would be required to notify FDA and all applicable state boards of pharmacy prior to launching a new Web site. Internet pharmacies that met all of the requirements would be able to post on their Web site a declaration that they had made the required notifications. FDA would designate one or more private nonprofit organizations or state agencies to verify licensing information included in notifications and to examine and inspect the records and facilities of Internet pharmacies. Internet pharmacies that do not meet notification and disclosure requirements or that sell prescription drugs without a valid prescription could face penalties as high as $500,000 for each violation.

While Justice supports the Internet Prescription Drug Sales Act of 2000, its officials have recommended that the act be modified. Prescription drug sales from Internet pharmacies often rely on credit card transactions processed by U.S. banks and credit card networks. To enhance its ability to investigate and stop payment for prescription drugs purchased illegally, Justice has recommended that federal law be amended to permit the Attorney General to seek injunctions against certain financial transactions traceable to unlawful online drug sales. According to Justice officials, if the Department and financial institutions can stop even some of the credit card orders for the illicit sale of prescription drugs and controlled substances, the operations of some “rogue” Internet pharmacies may be disrupted significantly.

The unique qualities of the Internet pose new challenges for enforcing state pharmacy and medical practice laws because they allow pharmacies and physicians to reach consumers across state and international borders and remain anonymous. Internet pharmacies that fail to obtain licensure in the states where they operate may violate state law. But the Internet pharmacies that are affiliated with physicians who prescribe on the basis of an online questionnaire and those that dispense drugs without a prescription pose the greatest potential harm to consumers. Dispensing prescription drugs without adequate physician supervision increases the risk of consumers’ suffering adverse events, including side effects from inappropriately prescribed medications and misbranded or contaminated drugs. Some states have taken action to stop Internet pharmacies that offer online prescribing services from selling prescription drugs to residents of their state. But the real difficulty lies in identifying responsible parties and enforcing laws across state boundaries.

Enforcement actions by federal agencies have begun addressing the illegal prescribing and dispensing of prescription drugs by domestic Internet pharmacies and their affiliated physicians. Enactment of federal legislation requiring Internet pharmacies to disclose, at a minimum, who they are, where they are licensed, and how they will secure consumers’ personal health information would assist state and federal authorities in enforcing existing laws. In addition, federal agencies have taken actions to address the illegal sale of prescription drugs from foreign Internet pharmacies. Cooperative efforts between federal agencies and a foreign government resulted in closing down some Internet pharmacies illegally selling prescription drugs to U.S. consumers. However, it is unclear whether these efforts will stem the flow of prescription drugs obtained illegally from other foreign sources.
As a result, the sale of prescription drugs from foreign-based Internet pharmacies continues to pose difficulties for federal regulatory authorities. To help ensure that consumers and state and federal regulators can easily identify the operators of Web sites selling prescription drugs, the Congress should amend the FDCA to require that any pharmacy shipping prescription drugs to another state disclose certain information on its Internet site. The information disclosed should include the name, business address, and telephone number of the Internet pharmacy and its principal officers or owners, and the state(s) where the pharmacy is licensed to do business. In addition, where permissible by state law, Internet pharmacies that offer online prescribing services should also disclose the name, business address, and telephone number of each physician providing prescribing services, and the state(s) where the physician is licensed to practice medicine. The Internet Pharmacy Consumer Protection Act and the administration’s proposal would require Internet pharmacies to disclose this type of information. We obtained comments on a draft of this report, from FDA, Justice, FTC, and Customs, as well as NABP and FSMB. In general, they agreed that Internet pharmacies should be required to disclose pertinent information on their Web sites and thought that our report provided an informative summary of efforts to regulate Internet pharmacies. Some reviewers also provided technical comments, which we incorporated where appropriate. However, FDA suggested that our matter for consideration implied that online questionnaires were acceptable as long as the physician’s name was properly disclosed. We did not intend to imply that online prescribing was proper medical practice. Rather, our report notes that most state medical boards responding to our survey have already concluded that a physician who issues a prescription on the basis of a review of an online questionnaire has not satisfied the standard of good medical practice required by state law. In light of this, federal action does not appear necessary. The disclosure of the responsible parties should assist state regulatory bodies in enforcing their laws. FTC suggested that our matter for congressional consideration be expanded to recommend that the Congress grant states nationwide injunctive relief. Our report already discusses NAAG’s proposal that injunctive relief be modeled after the federal telemarketing statute. While the NAAG proposal may have some merit, an assessment of the implications of this proposal was beyond the scope of our study. FTC also recommended that the Congress enact federal legislation that would require consumer-oriented commercial Web sites that collect personal identifying information from or about consumers online, including Internet pharmacies, to comply with widely accepted fair information practices. Again, our study did not evaluate whether a federal consumer protection law was necessary or if existing state laws and regulations may already offer this type of consumer protection. NABP did not agree entirely with our assessment of the regulatory effectiveness of the state boards of pharmacy. It indicated that the boards, with additional funding and minor legislative changes, can regulate Internet pharmacies. Our study did not assess the regulatory effectiveness of individual state pharmacy boards. 
Instead, we summarized responses by state pharmacy boards to our questions about their efforts to identify and take action against Internet pharmacies that are not complying with state law, and the challenges they face in regulating these pharmacies. Our report notes that many states identified limited resources and jurisdictional limitations as obstacles to enforcing their laws. NABP also suggested that our matter for congressional consideration include a requirement for independent verification of the information that Internet pharmacies are required to disclose on their Web sites. In our view, the current state regulatory framework would permit state boards to verify this information should they choose to do so. We are sending copies of this report to the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Jane E. Henney, Commissioner of FDA; the Honorable Janet Reno, Attorney General; the Honorable Donnie R. Marshall, Administrator of the DEA; the Honorable Robert Pitofsky, Chairman of the FTC; the Honorable Raymond W. Kelly, Commissioner of the U.S. Customs Service; the Honorable Kenneth C. Weaver, Chief Postal Inspector; appropriate congressional committees; and other interested parties. We will make copies available to others upon request. If you or your staffs have any questions about this report or would like additional information, please call me at (202) 512-7119 or John Hansen at (202) 512-7105. See appendix V for another GAO contact and staff acknowledgments. To obtain information on the number of pharmacies practicing on the Internet, we conducted searches of the World Wide Web and obtained a list of 235 Internet pharmacies that the National Association of Boards of Pharmacy (NABP) had identified by searching the Web and a list of 94 Internet pharmacies identified by staff of the House Committee on Commerce by searching the Web. After eliminating duplicate Web sites, we reviewed 296 potential sites between November and December 1999. Sites needed to meet two criteria to be included in our survey. First, they had to sell prescription drugs directly to consumers. Second, they had to be anchor sites (actual providers of services) and not portal sites (independent Web pages that connect to a provider). Most portal sites are paid a commission by anchor sites for displaying an advertisement or taking the user to the service provider’s site through a “click through.” We excluded 129 Web sites from our survey because they did not meet these criteria. See table 2 for details on our analysis of the Web sites that we excluded. In April 2000, we obtained a list of 326 Web sites that FDA identified during March 2000. We reviewed all the sites on FDA’s list and compared it to the list of Internet pharmacies we had previously compiled. We found 117 Internet pharmacies that duplicated pharmacies on our list. We also excluded 186 Web sites that did not meet our two criteria and added the remaining 23 Internet pharmacies to our list. To categorize Internet pharmacies, we analyzed information on the Web site to determine if the Internet pharmacy (1) required a prescription from the user’s physician to dispense a prescription drug, (2) in the absence of a prescription, required the user to complete an online questionnaire to obtain a prescription, or (3) dispensed prescription drugs without a prescription. 
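To illustrate the screening arithmetic described above, the following sketch reconciles the site counts reported in this appendix. It is only an illustration of our tallying; the counts come from the text, and the variable names and the derived number of duplicates are our own.

```python
# Reconciliation of the Internet pharmacy site counts described above.
# All counts come from the text; this sketch only verifies the arithmetic.

nabp_list = 235            # sites identified by NABP
house_list = 94            # sites identified by House Commerce Committee staff
reviewed = 296             # potential sites reviewed after eliminating duplicates
excluded_first_pass = 129  # sites that did not meet both inclusion criteria

duplicates_eliminated = nabp_list + house_list - reviewed   # 33 (derived)
kept_first_pass = reviewed - excluded_first_pass            # 167

fda_list = 326             # Web sites identified by FDA in March 2000
fda_duplicates = 117       # already on our list
fda_excluded = 186         # did not meet the two criteria
fda_added = fda_list - fda_duplicates - fda_excluded        # 23

total_pharmacies = kept_first_pass + fda_added              # 190
assert total_pharmacies == 190

# Categories reported in table 1 (from the text):
required_prescription = 111
online_questionnaire = 54
no_prescription = 25
assert required_prescription + online_questionnaire + no_prescription == 190
```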
We also collected data on the types of information available on each Internet pharmacy Web site, including information about the pharmacy’s licensure status, its mailing address and telephone number, and the cost of issuing a prescription. Using the domain name from the uniform resource locator, we performed online queries of Network Solutions, Inc. (one of the primary registrars for domain names) to obtain the name, address, and telephone number of the registrant of each Internet pharmacy. We then telephoned all U.S.-based Internet pharmacies to obtain information on the states in which they dispensed prescription drugs and the states in which they were licensed or registered. See table 3 for details on our licensure information inquiry. Finally, we clustered Internet pharmacies by state and asked the pharmacy boards in the 12 states—10 of these had the largest number of licensed/registered Internet pharmacies—to verify the licensure status of each pharmacy that told us it was licensed in the state.

To assess state efforts to regulate Internet pharmacies and physicians prescribing over the Internet, we conducted two mail surveys in December 1999. To obtain information on state efforts to identify, monitor, and regulate Internet pharmacies, we surveyed pharmacy boards in all 50 states and the District of Columbia. After making follow-up telephone calls, we received 50 surveys from the pharmacy boards in 49 states and the District of Columbia, or 98 percent of those we surveyed. The survey and survey results are presented in appendix III. We also interviewed the executive directors and representatives of the state pharmacy boards in nine states—Alabama, Iowa, Maryland, New York, North Dakota, Oregon, Texas, Virginia, and Washington—and the District of Columbia. In addition, we interviewed and obtained information from representatives of NABP, the American Pharmaceutical Association, the National Association of Attorneys General, and pharmaceutical manufacturers, as well as representatives of several Internet pharmacies.

To obtain information on state efforts to oversee physician prescribing practices on the Internet, we surveyed the 62 medical boards and boards of osteopathy in the 50 states and the District of Columbia. After follow-up telephone calls, we received 45 surveys from the medical boards in 39 states, or 73 percent of those we surveyed. The survey and survey results are presented in appendix IV. We also interviewed officials with the medical boards in five states: California, Colorado, Maryland, Virginia, and Wisconsin. In addition, we interviewed and obtained information from representatives of the American Medical Association and the Federation of State Medical Boards (FSMB).

To assess federal efforts to oversee pharmacies and physicians practicing on the Internet, we obtained information from officials from the Food and Drug Administration; the Federal Trade Commission; the Department of Justice, including the Drug Enforcement Administration; the U.S. Customs Service; and the U.S. Postal Service. We also reviewed the report of the President’s Working Group on Unlawful Conduct on the Internet.

The availability of prescription drugs on the Internet has attracted the attention of several professional associations. As a result, over the past year, several associations have convened meetings of representatives of professional, regulatory, law enforcement, and private sector entities to discuss issues related to the practice of pharmacy and medicine on the Internet.
We attended the May 1999 NABP annual conference, its September 1999 Executive Board meeting, and its November 1999 Internet Healthcare Summit 2000 to obtain information on the regulatory landscape for Internet pharmacy practice sites and the Verified Internet Pharmacy Practice Sites program. In January 2000, we attended a meeting convened by the FSMB of top officials from various government, medical, and public entities to discuss the efforts of state and federal agencies to regulate pharmacies and physicians practicing on the Internet. We also attended sessions of the March 2000 Symposium on Healthcare Internet and E-Commerce and the April 2000 Drug Information Association meeting.

We conducted our work from May 1999 through September 2000 in accordance with generally accepted government auditing standards.

The following are examples of the state rules, policy statements, and enforcement actions summarized in appendix II:

Neither in-state nor out-of-state physicians may prescribe to state residents without meeting the patient, even if the patient completes an online questionnaire.

Internet exchange does not qualify as an initial medical examination, and no legitimate patient/physician relationship is established by it. Physicians prescribing a specific drug to residents without being licensed in the state may be criminally liable. Physicians prescribing on the Internet must follow standards of care.

AG filed suit against four out-of-state online pharmacies for selling, prescribing, dispensing, and delivering prescription drugs without the pharmacies or physicians being licensed and with no physical examination.

Referred one physician to the medical board in another state and obtained an injunction against a physician; the Kansas Board of Healing Arts also filed a lawsuit against a physician for the unauthorized practice of medicine. AG filed lawsuits against 10 online pharmacies and obtained restraining orders against the companies to stop them from doing business in Kansas; filed lawsuits against 7 companies and individuals selling prescription drugs over the Internet.

Dispensing medication without physical examination represents conduct that is inconsistent with the prevailing and usually accepted standards of care and may be indicative of professional or medical incompetence.

AG filed notices of intended action against 10 Internet pharmacies for illegally dispensing prescription drugs.

Referred Internet pharmacy(ies) to AG for possible criminal prosecution.

AG filed suit and obtained permanent injunctions against two online pharmacies and physicians for practicing without state licenses.

Interviewed two physicians and suggested they stop prescribing over the Internet; they complied.

AG filed suits charging nine Internet pharmacies with consumer fraud violations for selling prescription drugs over the Internet without a state license.

Adopted regulations prohibiting physicians from prescribing or dispensing controlled substances or dangerous drugs to patients they have not examined and diagnosed in person; pharmacy board adopted rules for the sale of drugs online, requiring licensure or registration of pharmacy and disclosure.

An Ohio doctor was indicted on 64 felony counts of selling dangerous drugs and drug trafficking over the Internet. The Medical Board may have his license revoked.

AG filed lawsuits against three online companies and various pharmacies and physicians for practicing without proper licensing.

The following individuals made important contributions to this report: John C. Hansen directed the work; Claude B.
Hayeck collected information on federal efforts and, along with Darryl Joyce, surveyed state pharmacy boards; Renalyn A. Cuadro assisted in the surveys of Internet pharmacies and state medical boards; Susan Lawes guided survey development; Joan Vogel compiled and analyzed state pharmacy and medical board survey data; and Julian Klazkin served as attorney adviser.
The first Internet pharmacies began online service in early 1999. Public health officials are concerned about Internet pharmacies that do not adhere to state licensing requirements and standards. They are also concerned about the validity of the prescriptions these pharmacies fill and about drugs that are not approved in the United States being sent to consumers by mail from abroad. Congress is considering legislation to strengthen oversight of Internet pharmacies.
SSI provides monthly cash benefits to qualified aged, blind, and disabled persons. Because it is a program based on need, monthly changes in the amount of non-SSI income that clients receive can increase or decrease the amount of SSI benefits to which they are entitled or make them completely ineligible for benefits. Resources, including financial accounts, that exceed $2,000 for an individual or $3,000 for a couple make that individual or couple ineligible for the program. To minimize occurrences of over- and underpayments, the program requires clients to promptly report to SSA any fluctuations in their income or assets. As part of the application process, SSI clients are required to disclose all of their income and resources to SSA field staff who process their applications. SSA policy requires that field staff obtain documentation to verify the amount of income and resources that applicants report. It does not require, however, that field staff check for unreported income and resources unless they suspect that applicants are not fully disclosing them. Thus, at the time of application, SSA normally relies on applicants to portray their financial situations accurately. To ensure that newly eligible recipients have accurately portrayed their financial condition and that ongoing recipients continue to do so, SSA uses both financial eligibility reviews, known as redeterminations, and computer matching to verify income and resource levels. During redeterminations, recipients report their income on mail-in questionnaires or in face-to-face or telephone interviews. The method used to contact the client and the frequency of such contacts depend on the likelihood that a client’s financial situation will change. Computer matches, which compare the individual’s SSI record against data obtained from federal and state agencies, enable SSA to detect some types of income and resources that clients have not reported. The computer matching process to detect undisclosed income compares earnings income reported by clients to the earnings information contained on IRS form W-2s, which employers must file annually with SSA. SSA conducts this match annually. The W-2 match is supplemented twice a year with quarterly earnings information provided by 45 states and the District of Columbia. To do this, SSA sends computer tapes or cartridges containing the names of SSI recipients to each state. The states in turn append to the bottom of these tapes any earnings information pertaining to the SSI recipients residing in their states and then mail the tapes back to SSA. Once SSA receives the tapes, it matches them against the agency’s own records to determine if recipients have disclosed all of their earnings to the agency. In order to detect unreported financial accounts, information reported by clients is compared to IRS form 1099s, which are filed annually by financial institutions and contain the amount of interest earned on financial accounts. Because form 1099 data only contain interest accrued on financial accounts, this match can detect only interest-bearing accounts. Each September, SSA conducts this match using data from the previous year, which covers most SSI recipients from that year. A primary cause of SSI overpayments has been that clients do not always disclose their earnings and financial accounts when they apply for benefits or once they are receiving such payments. 
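As a rough illustration of the matching concept described above, the following sketch compares client-reported earnings with employer-reported W-2 totals and flags discrepancies for follow-up. It is not SSA's actual system; the record layouts, field names, and tolerance are illustrative assumptions.

```python
# Minimal sketch of an earnings match: compare client-reported earnings with
# employer-reported W-2 totals and flag discrepancies for follow-up.
# Record layouts and the $10 tolerance are illustrative assumptions.

from typing import Dict, List


def flag_unreported_earnings(
    reported: Dict[str, float],   # SSN -> earnings the client reported to SSA
    w2_totals: Dict[str, float],  # SSN -> total wages on IRS Form W-2s
    tolerance: float = 10.0,
) -> List[dict]:
    """Return cases where W-2 wages exceed reported earnings by more than tolerance."""
    flags = []
    for ssn, w2_amount in w2_totals.items():
        reported_amount = reported.get(ssn, 0.0)
        if w2_amount - reported_amount > tolerance:
            flags.append({
                "ssn": ssn,
                "reported": reported_amount,
                "w2": w2_amount,
                "difference": w2_amount - reported_amount,
            })
    return flags


# Example: one recipient reported no earnings but has $4,200 in W-2 wages.
print(flag_unreported_earnings({"000-00-0001": 0.0}, {"000-00-0001": 4200.0}))
```

Flagged cases would then be investigated by field staff before any change is made to benefits, consistent with the redetermination process described above.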
For example, SSA’s fiscal year 1996 payment accuracy study shows that out of a total of $1.6 billion in overpayments, approximately 40 percent (nearly $647.6 million) was the result of nondisclosed earnings and financial accounts. About $379.5 million of these overpayments occurred because SSI clients did not disclose their earnings and $268.1 million occurred because SSI clients did not disclose their financial accounts. Many of these overpayments could have been prevented or more quickly detected if more timely and comprehensive information on the earnings and financial accounts of SSI clients had been available. SSA conducts a second annual payment accuracy study, which contains more detailed information on the amount of overpayments in the SSI program. However, neither study accurately estimates for the entire SSI population the amount of overpayments made because of nondisclosure at the time of application versus the amount made because of nondisclosure after clients began receiving benefits. Regardless, both studies have consistently shown over the years that hundreds of millions of dollars in overpayments occur at both of these junctures. Such findings, in turn, indicate that the agency needs to address systemwide weaknesses in both the application and post-entitlement procedures it uses to determine program eligibility and payment amount. Of the hundreds of millions of dollars in overpayments that have been made, SSA has gotten little of it back. SSA statistics show that, on average, the agency recovers only about 15 percent of all outstanding overpayments. The older the overpayment, the more difficult it is to recover. Moreover, when an individual is removed from SSI’s rolls—which can happen when an overpayment is the result of a nondisclosed financial account—the overpayment will probably never be recovered because the individual no longer receives a monthly SSI benefit payment from which SSA can withhold funds. SSA’s overpayment recovery rate is low partly because SSI recipients are poor and do not have the funds to repay this debt. SSA’s present data sources and procedures for detecting undisclosed earnings do not provide up-to-date and comprehensive information on the earnings of applicants and recipients. Such information is critical because earnings are a primary factor in determining both initial program eligibility and the amount of benefits recipients should receive each month. SSA could obtain such information by using new data sources on earnings and by enhancing its current computer-matching procedures. SSA uses data that are outdated and do not reflect the current earnings status of SSI clients. When individuals apply for SSI benefits, SSA field staff are required to check the agency’s database that contains IRS form W-2 information to verify that applicants have accurately portrayed their work histories. How current this information is depends on when SSA enters an applicant’s W-2 data and when the applicant comes in to apply for SSI benefits. For example, SSA began entering 1996 W-2 earnings information into its database in February 1997, shortly after it was reported by employers. By April, SSA had entered about 45 percent of the 1996 W-2s into its database, and by September, the agency had entered 98.5 percent of the earnings information. Thus, in April 1997, there was about a 45-percent chance that the 1996 earnings of an SSI applicant would be recorded in SSA’s database. 
Earnings from December 1996 would be 4 months old, and earnings from January 1996 would be 16 months old. If an application were made in April 1998 and the individual’s 1997 W-2 information had not yet been entered, the only earnings information available to SSA field staff would be 1996 W-2 information, which would then be 15 to 27 months out of date. After clients have begun receiving SSI payments, SSA checks for undisclosed earnings in two computer matches: a semiannual match that uses quarterly earnings data that employers file with the states, and another that uses the annual W-2 information. Because of the age of the data used in the match, these matches can detect only undisclosed earnings that were received 6 to 21 months in the past. For example, state quarterly data sent to SSA in March 1998 covers earnings through the quarter ending September 1997. If a client had earnings as of September 1997, the March 1998 match would only detect those earnings 6 months after they were received. Similarly, if the client’s last earnings occurred as early as July 1997, the March 1998 match would not have detected them until 9 months after they were earned. Because state earnings information is not provided by all states, SSA supplements this match by conducting a computer match each September using W-2 information from the previous year. Thus, if a recipient had earnings in December of the previous year, they would not be discovered for 9 months, and if a recipient had earnings in January of the previous year, this match would not detect them for 21 months. Although the state computer matches provide more current information than the W-2 data match, state data are not always as comprehensive as the W-2 data. First, states provide earnings information to SSA only for current SSI recipients, so this information cannot be used to verify the earnings reported by new SSI applicants. Also, only 45 states and the District of Columbia have agreed to provide SSA with this information. Finally, the process of conducting the match can be unwieldy, and SSA is often not able to complete the match for all participating states. To perform matches, SSA must prepare and send computer tapes or cartridges containing the names of SSI recipients in each state to all participating states. The states in turn provide earnings information, if any, on these recipients to SSA. Often, however, the tapes get lost or damaged in the mail, or the state prepares the data in a format that SSA’s computers cannot read. For example, in the first half of 1997, SSA was able to complete the match for only 37 of the 45 states that had agreed to provide the agency with this information. The process of SSA and the states exchanging computer tapes is so cumbersome that even though the states have new information four times a year, SSA only attempts to get it twice a year. New data sources exist that could help improve earnings verification. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) requires that states report the names of newly hired employees as well as all of the quarterly earnings information reported for individuals working in their states to OCSE. OCSE uses this information to identify those parents who could make child support payments. SSA, which helped OCSE develop these databases, is responsible for housing and maintaining them at its National Computer Center and has authority under PRWORA to use the information contained in them. 
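To make the matching timeline described earlier in this section concrete, the following sketch computes the lag between the month earnings are received and the month a match can first surface them, using the example dates from the text. The function is our own, and results shift by about a month depending on whether the month of the earnings itself is counted.

```python
# Sketch: detection lag (in months) between when earnings are received and
# when a computer match can first surface them. Dates follow the examples in
# the text; counting conventions can shift results by about a month.

def months_between(start_year: int, start_month: int,
                   end_year: int, end_month: int) -> int:
    """Whole months from the earnings month to the match month."""
    return (end_year - start_year) * 12 + (end_month - start_month)


# State quarterly match run in March 1998 covers earnings through September 1997.
print(months_between(1997, 9, 1998, 3))   # 6 months for September 1997 earnings

# Annual W-2 match run each September covers the previous calendar year.
print(months_between(1996, 12, 1997, 9))  # 9 months for December 1996 earnings
print(months_between(1996, 1, 1997, 9))   # 20-21 months for January 1996 earnings
```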
One of these data sources, the New Hire Data Base, identifies newly hired employees and their employers within a month of their hiring. The second database, the Quarterly Wage Data Base, will offer quarterly earnings information that is between 4 and 6 months old. The quarterly information that the states will submit for OCSE’s use is the same data that the majority of states now submit to SSA via computer tapes or cartridges. However, the national OCSE database will be a more comprehensive and current information source because all states are required to participate; it will also contain records for all employees—not just for those on the SSI rolls at the time of the match. It will allow SSA to check for earnings anywhere in the country for both applicants and recipients, and SSA will receive data quarterly instead of only semiannually. The New Hire Data Base has been operating since October 1997, and as of mid-February 1998, it has received 13.7 million new-hire records, with all states except one transmitting this information electronically over SSA’s dedicated, secure network. State employees are already submitting queries to their state directories of new hires and are finding that this has helped them to accurately calculate eligibility and payment amounts for state programs. If SSA sets up its own queries to the New Hire Data Base, the agency could use the improved information to reduce the number of overpayments resulting from the nondisclosure of earnings. States began submitting earnings to the Quarterly Wage Data Base on February 1, 1998. It may be possible to set up both of these databases to receive and respond to requests for information so that SSA field staff could check for undisclosed earnings at the time of application. This would prevent many overpayments that are the result of nondisclosure at the application stage, since these databases will contain earnings information that is between 1 and 6 months old. Such data are often current enough to contain earnings information for the same time period in which benefits are received by many newly eligible recipients. This is because it takes, on average, more than 3-1/2 months for a decision on an SSI disability claim, and newly eligible recipients receive benefits retroactively to the date when they first applied. SSA could also reduce the number and duration of earnings overpayments to ongoing recipients by using the new earnings data in a more fully automated computer matching process. According to officials from SSA and OCSE, the agency could develop an electronic interface with the New Hire Data Base that retains a continually updated list of SSI recipients and notifies SSA automatically whenever a new hire record is reported for one of those recipients. SSA field staff could then contact the employer listed in this database to verify the recipient’s employment and the amount of his or her earnings. In addition, quarterly matching of the SSI rolls against the much larger Quarterly Wage Data Base could be used to detect undisclosed earnings. An automated match could be done at SSA’s National Computer Center, eliminating the need for SSA and the states to exchange computer tapes. Under both the current computer matching procedures and these new procedures, field offices would still investigate undisclosed earnings before reducing a recipient’s benefits, declaring an overpayment, or both.
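To make the proposed interface concrete, the sketch below shows, in simplified Python with invented record layouts and identifiers (the report does not specify the New Hire Data Base schema or SSA’s internal systems), how an automated notification of this kind might flag a new-hire record for field-office follow-up rather than automatically changing anyone’s benefits.

```python
# Hypothetical layouts: the actual New Hire Data Base schema and SSA's
# internal identifiers are not described in this report.
ssi_recipients = {"123-45-6789", "987-65-4321"}  # continually updated list of SSI recipients

def screen_new_hire_record(record):
    """Return an alert for field-office follow-up when a new-hire record
    matches someone on the SSI rolls; otherwise return None."""
    if record["ssn"] not in ssi_recipients:
        return None
    return {
        "ssn": record["ssn"],
        "employer": record["employer_name"],
        "hire_date": record["hire_date"],
        "next_step": "contact employer to verify employment and earnings",
    }

alert = screen_new_hire_record(
    {"ssn": "123-45-6789", "employer_name": "Example Manufacturing Co.", "hire_date": "1998-02-10"}
)
print(alert)  # field staff would investigate before any benefit change is made
```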
Recipients, therefore, would still have an opportunity to contest earnings that may not belong to them or that fall under program rules permitting the exclusion of certain income. SSA policy officials acknowledge that more current and comprehensive sources of earnings data exist. According to these officials, SSA is focusing its efforts on developing a comprehensive policy that details which new data sources are the best to use in all of its programs and how they can be used most effectively. Even though the New Hire Data Base is available for immediate use and the Quarterly Wage Data Base will be available in April 1998, SSA is putting minimal effort into incorporating these two databases into its claims handling processes because it will take 1 year for the OCSE databases to contain enough earnings information to be useful for title II programs. However, the OCSE databases would be immediately useful to reduce SSI overpayments. In fiscal year 1996, more than twice as many overpayments were made in the SSI program as in the social security retirement program, even though SSI payments were only about one-tenth the size of the retirement program payments. If SSA does not use the OCSE databases at this time, serious problems with SSI payment accuracy may continue. In the interim, SSA is focusing on developing access to two alternative data sources: a Department of Labor (DOL) network of states’ earnings databases and state-agency maintained databases. The DOL network allows government employees in one state to check on-line for earnings in the earnings databases maintained by any or all of the other states in the network. At the time of our review, 33 states’ earnings databases were linked to the network, and there are plans to add the earnings databases of 7 other states to this network in the near future. SSA is also pursuing direct access to state agency databases on earnings; government benefits; and vital statistics information on births, deaths, and marriages. In an earlier report, we recommended that SSA pursue direct access to state data to improve SSI payment accuracy and program administration. While we continue to recommend direct access to state data as the best approach for obtaining information such as state welfare payments and vital statistics information, especially when national databases do not exist, we consider the OCSE databases better sources for earnings information. SSA’s approach to obtaining earnings information from the DOL network and states has several shortcomings. First, the agency must negotiate and thereafter renegotiate separate data-sharing agreements with each state. According to SSA officials, these tasks are both difficult and time-consuming. In the last several years, SSA has been actively seeking data-sharing agreements with states, but as of November 1997, only nine had agreed to provide direct access to their earnings data. Second, these alternative sources will not necessarily provide SSA with nationwide earnings information, which is essential for detecting the undisclosed earnings of clients who work in one or more states and apply for benefits in yet another. States are not required to participate in the DOL network or to grant SSA employees direct access to their data. Therefore, neither of these two alternative data sources ensures nationwide coverage. Further, of the nine states that have granted direct access to SSA field staff, none have done so for SSA staff located in other states.
In contrast, all states are required by law to provide data to the New Hire Data Base and the Quarterly Wage Data Base. Moreover, because PRWORA gives SSA the authority to use these two databases in the administration of its programs, negotiation and renegotiation of data-sharing agreements with the states will not be necessary. Third, and perhaps most significant, neither the DOL network nor direct access to state earnings data will provide employment information as current as that in the New Hire Data Base. The New Hire Data Base will allow SSA to determine within a period of weeks that an SSI recipient has taken a job, instead of waiting a minimum of 4 months, which generally is the delay for data obtained through on-line access or the DOL network. As is the case with earnings, SSA’s present data sources and procedures for detecting undisclosed financial accounts do not provide up-to-date and comprehensive information on the accounts of applicants and recipients. Because undisclosed financial accounts are a major source of overpayments, obtaining such information is critical to ensuring program integrity. Detection of such accounts, both at the application stage and once recipients are on SSI’s rolls, would prevent many of these overpayments and reduce the number and duration of others. Such detection may be possible because it is now technologically feasible for SSA to electronically obtain account information directly from the financial industry. SSA’s current approach to identifying financial accounts can result in ineligible individuals getting on SSI’s rolls and remaining there for long periods of time. During the application process, SSA policy requires that field staff contact banks to verify the amount of money in the accounts of applicants who state that their accounts exceed $1,250. However, when applicants state that they either do not have accounts or that their accounts are below the $1,250 threshold, such verification is generally not required. Once applicants are placed on SSI’s rolls, SSA checks for both unreported and underreported financial accounts through a computer match using IRS form 1099 data. Computer matches using IRS form 1099 data can take months or even years to detect unreported bank accounts. These matches compare the financial account information reported to the IRS by financial institutions with the information concerning financial accounts reported by SSI recipients. SSA conducts this computer match every September. However, at the time of this match, the IRS 1099 data are between 9 and 21 months old. For example, tax year 1997 data will not be available for use in these matches until September 1998. Thus, if an SSI recipient acquired an account in December 1997 that caused the recipient’s assets to exceed SSI’s resource limit, SSA would not be able to detect it for at least 9 months. If the account was acquired in January 1997, the recipient could have received monthly SSI payments to which he or she was not entitled for 21 months before the IRS 1099 match could detect the overpayment. In addition, if the account did not earn interest, this match would not detect it at all, since 1099 data only pertain to interest-bearing accounts. SSA field staff are required to verify the amount of financial accounts over $1,250 that applicants and recipients disclose as well as those that are detected through the IRS 1099 computer match. 
This is done by submitting to the designated financial institution a paper request for verification of the account balance for all months during which the individual was receiving SSI payments. SSA submits about 1 million requests to financial institutions each year. Financial institution staff, in turn, manually search their records and mail a response, along with an invoice for this service, back to the requesting SSA field staff. Because this is very time-consuming for the financial institutions, they may charge up to $25, and some may not respond at all. The telecommunication network linking the financial industry together nationwide allows financial institutions to transfer funds among themselves and provide customer services such as automated bill payment and automated teller machines (ATM). SSA first began to use these networks to deposit benefit payments directly into the accounts of SSI recipients. Over the past few years, the agency has expanded its use of these networks to more fully automate direct deposits. For example, these networks are now used to notify a specific financial institution of the death of a customer who had a direct-deposit account for SSA program benefits. These networks are also used to set up direct-deposit accounts automatically for newly eligible recipients of SSA program benefits, eliminating the need for the recipient to contact the financial institution to set up an account. According to various experts—such as officials from SSA and the Department of the Treasury, financial industry executives, and network providers—it may be possible to further expand the use of these networks to enable SSA to contact financial institutions electronically and determine whether SSI clients have accounts that they have not disclosed as well as verify the amount in accounts that clients have disclosed. Detecting undisclosed accounts when individuals apply for SSI benefits would prevent ineligible applicants from being placed on SSI’s rolls. In addition to preventing significant overpayments, the agency would save the costs associated with processing invalid claims and determining medical and vocational disability for ineligible applicants. According to financial institution officials we spoke with, handling requests electronically would also be less costly and easier for them than the current paper-based system and would provide the information to SSA more quickly than the current system allows. Many of these experts also pointed out that financial industry networks could be used to verify account information for both SSI applicants and recipients. To identify accounts undisclosed by an applicant, SSA field staff could submit a query to financial institutions with the name, social security number, and other identifying information of the applicant over one or more of the networks. These institutions could then electronically provide applicant account information, including balances, if any. To identify current SSI recipients who have failed to disclose accounts, SSA could use the networks to periodically transmit a file of current SSI recipients to financial institutions selected according to criteria specified in computer profiles. Most financial institutions’ computer systems have the capability to automatically check their files on account holders to see if there are any matches with the SSI recipient list. If matches are found, the system would send an electronic response to SSA containing the recipients’ names and account balances. 
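A minimal sketch of the institution-side matching step described above is shown below, in Python with invented account records and identifiers; the report does not specify any message format or network protocol, and a real exchange would run over the industry’s secure networks and be subject to the privacy and security safeguards discussed later.

```python
# Invented sample data; real account files, identifiers, and message formats
# would be defined by the financial institutions and network providers.
account_records = {
    "123-45-6789": {"holder": "J. Doe", "balance": 2150.00},
    "555-12-3456": {"holder": "A. Smith", "balance": 480.00},
}

def match_ssi_file(ssi_recipient_ssns):
    """Return the SSN and current balance for each SSI client who holds an
    account at this institution; clients with no account here are omitted."""
    matches = []
    for ssn in ssi_recipient_ssns:
        account = account_records.get(ssn)
        if account is not None:
            matches.append({"ssn": ssn, "balance": account["balance"]})
    return matches

# SSA transmits its file of recipients; the institution responds with matches.
print(match_ssi_file(["123-45-6789", "987-65-4321"]))
# [{'ssn': '123-45-6789', 'balance': 2150.0}]
```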
Financial industry data would be much more current than the data used in the IRS form 1099 match because financial institutions maintain up-to-date records of their customers’ accounts. This does not mean that an overpayment could be detected as soon as an undisclosed account came into existence because the earliest point of detection would depend on how frequently SSA conducted matches using these data. It does mean, however, that SSA, working with the financial institution industry, could design a system that optimally balanced how frequently undisclosed accounts were detected with the cost-effectiveness of such a procedure. It also means that SSA could identify undisclosed accounts much earlier than it currently does and thereby prevent many overpayments made as a result of nondisclosure. For SSA to detect undisclosed accounts most effectively, every applicant and recipient would have to be checked against the records of every financial institution in the country. The extent to which complete coverage could be achieved would depend upon technological capabilities. According to executives who manage the financial industry networks, current technology is sufficient to permit very broad-based checks for applicants, with minimal cost and effort. Financial institutions and human services departments in some states are already exploring ways in which technology can more efficiently provide the required information on the financial accounts of welfare clients. This is occurring in part because PRWORA requires financial institutions to report this information to the states for child support enforcement purposes. In its 1997 business plan, SSA acknowledged that it intends to look into expanding its use of these networks to check for undisclosed accounts, but the agency has yet to put together a proposal detailing when and how it will undertake such a study. SSA already has a telecommunication link to the financial industry network and routinely uses that network to transmit information to financial institutions. However, programming would be needed for SSA to transmit requests for information and for financial institutions to notify the agency that it has an account holder who is an SSI applicant or recipient. SSA officials and executives from the financial industry with whom we spoke agreed that using the financial institution network to verify financial accounts is technically feasible but would require effort to implement. Because financial institutions use various types of computer operating systems and software, each institution would have to create, test, and implement programming specific to its system. States and the financial industry share concerns about privacy and security. Privacy concerns center around ensuring that personal information provided by an individual to a government agency or private institution is protected from being disclosed to those who do not have a legal right to it. Concerns about security center around having adequate computer security controls to ensure physical security and prevent inappropriate access. SSA is required by law to take certain steps to ensure the privacy and security of data, whether that information is internal to SSA or is shared with other entities. 
These steps include developing a security plan, audit trails, automated alerts to prevent inappropriate requests for personal information, personal identification numbers and passwords, training, disaster recovery plans, and periodic internal and external evaluations of all privacy and security measures. An assessment of whether to institute additional measures may also be needed. The two OCSE earnings databases, as well as data from the financial institution industry, would provide SSA with information needed to prevent or reduce overpayments resulting from undisclosed earnings and financial accounts. This information would be particularly valuable in processing applications because, for the first time, the agency would be able to verify with more current and comprehensive information the financial allegations of applicants before initiating payments to them. Preventing overpayments or detecting them more quickly would bolster the integrity of the SSI program by more effectively ensuring that clients are receiving only those benefits to which they are entitled. We estimated that approximately $647.6 million of the overpayments that occurred in fiscal year 1996 could have been avoided or more quickly detected if these data had been available for SSA to use both in the application process and at intervals after clients were on SSI’s rolls. SSA has authority to use the OCSE databases for the administration of its programs and is responsible for housing and maintaining them at its National Computer Center. The agency has not, however, directed adequate resources to developing computerized interfaces so that these data could be used in the SSI program. The agency also has authority to verify information on the financial accounts of clients from the financial industry but has not yet investigated the technical and economic feasibility of obtaining this information via computer to make it an effective verification tool. Such a system may be economically feasible, even though it would result in SSA verifying more financial accounts than it currently does. According to financial industry experts, computerized verification requests would cost much less than the financial institutions’ current charge for such requests—which can be as much as $25 per request. Moreover, if SSA were able to obtain financial account information free of charge, as is the case for most states, this system would be even more cost-effective. We recommend that the Commissioner of SSA take the following actions: Develop computerized interfaces necessary to access OCSE’s New Hire Data Base and Quarterly Wage Data Base, and use them in accordance with applicable security and privacy laws and regulations to detect undisclosed earnings during initial and subsequent determinations of eligibility for the SSI program. Study the feasibility of obtaining computerized information from financial institutions to detect financial accounts that SSI clients do not report during the application process and during subsequent determinations of eligibility. Such a study should include a comparison of the cost of obtaining and using such information and the program savings achievable as a result of that use. Security and confidentiality issues should also be addressed. In commenting on a draft of this report, SSA agreed that the two OCSE databases can be useful tools in reducing SSI overpayments and stated that it intends to begin using them by October 1, 1998.
The agency objected, however, to our characterization that it is putting minimal effort into incorporating these databases into the verification process. At the time of our review, SSA was actively developing access to only one of these databases and only doing so to detect the undisclosed earnings of recipients once they are placed on SSI’s rolls. Yet, overpayment prevention is at least as important as overpayment detection because only a small fraction of overpayments that are made are recovered. Field staff could use these databases to prevent overpayments by checking for undisclosed earnings at the time of application. This requires that the agency develop the necessary computer interfaces between SSA field offices and these databases. At the time of our work, the agency had not begun developing these interfaces and did not appear to have any concrete plans to do so. SSA also agrees with our recommendation to study the feasibility of using information from financial institutions to detect undisclosed financial accounts. The agency plans to undertake such a study and issue its first status report no later than September 1998. SSA’s other comments on this report were incorporated where appropriate. The agency’s comments are contained in appendix II. We are sending copies of this report to relevant congressional committees, the Commissioner of Social Security, and other interested parties. If you have any questions about this report, please contact me on (202) 512-7215 or Roland Miller III, Assistant Director, on (202) 512-7246. Other major contributors to this report were Nancy Cosentino, Senior Evaluator, and Jill Yost, Evaluator. Originally, the requester of this work, Congressman E. Clay Shaw, Jr., asked that GAO investigate (1) the type of data that SSA now gets from federal agencies to identify SSI overpayments, (2) whether federal agencies have additional computerized information on the income of SSI clients that SSA is not receiving but would find helpful in reducing overpayments, and (3) whether direct access to the income data maintained by federal agencies is technically and fiscally feasible and would reduce overpayments. The agencies we examined were (1) the IRS, which provides form 1099 information to detect undisclosed financial accounts; (2) the Office of Personnel Management (OPM), which provides information to detect undisclosed federal pensions; (3) the Department of Veterans Affairs (VA), which provides information to detect VA compensation and pensions; and (4) the Department of Defense (DOD), which provides information to detect income from military pensions, military housing, and other incidentals. The report details what we discovered about the manner in which SSA receives IRS form 1099 data and how obtaining these data electronically from the financial industry could prevent or reduce overpayments caused by undisclosed financial accounts. However, we found that the amounts of SSI overpayments that resulted from earnings and assets that clients receive from OPM, VA, and DOD either were not large enough to warrant a detailed study into how SSA could obtain information from these agencies more quickly or involved data that were obtained in such a way as to allow the detection of overpayments within 1 to 2 months after the time they were incurred. In the course of our work, we also discovered that a new data source that could prevent or reduce overpayments caused by undisclosed earnings would soon be available.
Given that the nondisclosure of earnings and financial accounts, unlike federal benefits, are major sources of SSI overpayments, we asked the requester whether he would like to change the objectives of the study. He responded in the affirmative, stating that he would like us to examine (1) the extent to which overpayments occur because SSI applicants and recipients fail to disclose their earnings and financial accounts, (2) whether SSA could obtain more current and comprehensive information than it does now to detect the nondisclosure of earnings, and (3) whether the agency could also obtain more current and comprehensive information on financial accounts. We interviewed executives from four banks and four financial industry network providers. We also interviewed officials from SSA, OMB, IRS, OCSE, the Federal Reserve, and the Department of the Treasury and obtained relevant documentation. From these interviews, we ascertained the feasibility of using financial industry data to verify bank account information supplied by SSI clients and how it could be done. We also interviewed government officials to determine to what extent the earnings information from the OCSE databases would be more current and comprehensive than the data presently used by SSA to verify earnings information reported by SSI clients. We examined (1) the comparative value of these new data sources versus the data sources currently used by SSA, (2) how SSA currently verifies client-supplied information on earnings and financial accounts, (3) how the new data sources could be most effectively used for verification purposes, and (4) the issues involved in implementing the use of the new data sources. Finally, we obtained nationwide aggregate data from SSA studies on the amount of overpayments that occurred in fiscal year 1996. We used these data to determine the amount of overpayments attributable to the nondisclosure of earnings and financial accounts.
Pursuant to a congressional request, GAO conducted a follow-up review on the feasibility of the Social Security Administration (SSA) using new data sources on earnings and financial account information to determine applicants' eligibility for the Supplemental Security Income (SSI) program, focusing on: (1) the extent to which overpayments occur because SSI clients fail to disclose their earnings and financial accounts; (2) whether SSA could obtain more current and comprehensive information to detect undisclosed earnings; and (3) whether the agency could obtain more current and comprehensive information on undisclosed financial accounts. GAO noted that: (1) unreported or underreported earnings and financial accounts continue to result in significant overpayments in the SSI program; (2) according to SSA's overpayment data, the failure of SSI clients to disclose earnings and financial accounts was responsible for approximately 40 percent of the $1.6 billion in overpayments identified for fiscal year 1996; (3) specifically, about $379.5 million in overpayments was the result of SSI clients not fully disclosing their earnings, and $268.1 million was the result of clients not disclosing financial account information; (4) more current and comprehensive information is now available to detect undisclosed earnings; (5) SSA detects overpayments resulting from undisclosed earnings primarily by matching information provided by SSI clients with earnings data used in the administration of other government programs; (6) however, computerized matches, which are not done until individuals are on SSI's rolls, have built-in delays in detecting overpayments that range from 6 to 21 months; (7) two databases developed for use by the Office of Child Support Enforcement (OCSE) could provide SSA with more current and comprehensive earnings information; (8) SSA could check these databases prior to placing applicants on the rolls and thereby prevent overpayments caused by applicants failing to disclose earnings at the time of application; (9) these databases would also allow SSA to detect occurrences of undisclosed earnings to ongoing recipients within 4 to 6 months and thereby reduce the number and duration of the corresponding overpayments; (10) opportunities for improved financial account information also exist; (11) SSA detects undisclosed financial accounts by conducting computer matches once a client's eligibility has been established; (12) this match, however, can only detect undisclosed accounts that existed 9 to 21 months before; (13) SSA could obtain up-to-date information on the financial accounts of SSI clients from financial institutions by accessing the nationwide telecommunication network, which links all financial institutions; (14) such information would help ensure that applicants whose bank accounts would make them ineligible for the program do not gain eligibility; and (15) by eliminating ineligible individuals at the point of application, SSA could avoid the expense of determining medical and vocational disability and could also reduce the number and duration of overpayments to ongoing recipients who are overpaid because of newly acquired financial accounts or increases in existing ones.
Currently, FAA authorizes all domestic military; public (academic institutions, federal, state, and local governments including law enforcement organizations); and civil (private sector entities) UAS operations on a limited basis after conducting a case-by-case safety review. Federal, state, and local government agencies must apply for Certificates of Waiver or Authorization (COA), while civil operators must apply for special airworthiness certificates in the experimental category. Because special airworthiness certificates do not allow commercial operations, there is currently no means for FAA to authorize commercial UAS operations. Since FAA started issuing COAs in January 2007, 1,428 COAs have been issued. At present, under COA or special airworthiness certification, UAS operations are permitted for specific time frames (generally 12 to 24 months); locations; and operations. So, one agency can be issued multiple COAs to operate one UAS for the same purpose. In 2012, FAA issued 391 COAs to 121 federal, state, and local government entities across the United States, including law enforcement entities as well as academic institutions (see fig. 2). According to an industry forecast, the market for government and commercial use of UAS is expected to grow, with small UAS having the greatest growth potential. This forecast estimates that the worldwide UAS market could be potentially worth $89 billion over the next decade. The majority of this estimate is for military-type products (primarily the U.S. military) with the associated research and development for production estimated to be $28.5 billion over the next 10 years. As smaller UAS are expected to continue to improve in technology and decrease in price, their prevalence in the national airspace is expected to increase. The forecast also indicates that the United States could account for 62 percent of the world’s research and development investment for UAS technology over the coming decade. Congress has tasked FAA to lead the effort of safely integrating UAS into the national airspace, but several other federal agencies—such as the Department of Defense (DOD), Department of Homeland Security (DHS), and the National Aeronautics and Space Administration (NASA)—also have a role. While DOD uses UAS for training and operational missions, DHS for border patrol, and NASA for scientific research, each agency provides FAA with safety, reliability, and performance data through the COA process. These agencies also participate in UAS integration forums as discussed later in this section. Table 1 provides an overview of key federal UAS stakeholders and their roles in integrating UAS. FAA has established various mechanisms to facilitate collaboration with its partner agencies, and private sector entities to safely integrate UAS (see table 2). For example, given its unique role in managing partnerships among federal agencies for the Next Generation Air Transportation System (NextGen), FAA’s Joint Planning and Development Office (JPDO) was tasked by the Office of Management and Budget to, in conjunction with partner agencies, develop a strategic interagency UAS Research, Development, and Demonstration Roadmap. This roadmap provides a framework for interagency and private sector coordination on UAS research and development efforts. Several working groups have also been formed, such as the UAS Executive Committee, to facilitate collaboration between agencies. FAA has also entered into memorandums of understanding (MOU) with some of these federal agencies. 
FAA signed MOUs with NASA and DOD regarding research and development and the availability of safety data, respectively. FAA has also involved industry stakeholders and academia through the UAS Aviation Rulemaking Committee and RTCA SC-203. For example, the RTCA SC-203 (a standards-making body) is developing safety, reliability, and performance standards for UAS operations. FAA also has agreements with a range of industry, federal research entities, universities, and international organizations to conduct research. These research and development agreements, known as Cooperative Research and Development Agreements and International Agreements, typically require the agency, organization, or company to perform certain types of research and provide FAA with the data in exchange for funding. For example, in 2009 FAA established an agreement with the European Union to initiate, coordinate, and prioritize the activities necessary for supporting the development of provisions required for the evolution of UAS to full recognition as a legitimate category-of-airspace user. In addition, FAA partners with federally funded research and development centers on UAS integration efforts. Within FAA, steps have also been taken to increase collaboration and provide the organizational leadership needed to safely accelerate UAS integration. FAA recently created the UAS Integration Office under one executive to provide stable leadership and focus on the FAA UAS integration efforts. The office will coordinate all intra-agency collaboration efforts. At this time, some UAS responsibilities are being handled in other offices throughout FAA. For example, some of the research and development efforts and analysis of operation and safety data are being performed by the Air Traffic Office and the Accident Investigation and Prevention Office, respectively. The UAS Integration Office reports directly to the Director of the Flight Standards Service, which provides visibility for the office. At this time, several planning efforts are under way in the office. However, because the reorganization has only recently been implemented, it remains unclear whether the office will provide the support needed to guide a collaborative effort given the complexities of safely integrating UAS into the national airspace. While collaboration mechanisms have been developed to help facilitate UAS integration into the national airspace, continued collaboration among UAS stakeholders will be critical to minimizing duplication of research and addressing implementation obstacles. For example, as we previously reported in our September 2012 report, federal agencies have not yet stepped forward to proactively address the growing concerns regarding the potential security and privacy implications of UAS. We recommended that DOT, DHS, and the Attorney General initiate discussions, prior to the integration of UAS into the national airspace, to explore whether any actions should be taken to guide the collection and use of UAS-acquired data. As we discuss later in this statement, FAA and DOD will need to continue to work together to determine how to leverage DOD's operational and safety data to help develop UAS operations standards, which is a critical step in the integration process. While we did not evaluate the collaboration mechanisms already in place, stakeholders told us that collaboration was occurring, but efforts could be improved. Specifically, stakeholders told us they would like to see additional leadership from FAA.
FAA has several efforts under way to satisfy the 2012 Act's requirements, most of which must be achieved between May 2012 and December 2015. See table 3 for a list of selected requirements and the status of FAA's efforts to meet them. FAA has made progress toward these selected requirements. Of the seven deadlines that had passed, however, FAA had completed two as of January 2013. These requirements can be considered under four categories: (1) developing plans for integrating UAS into the national airspace; (2) changing the COA process; (3) integrating UAS at six test ranges; and (4) developing, revising, or finalizing regulations and policies related to UAS. The following provides additional information on the status of FAA's efforts to meet the requirements under these four categories: Comprehensive plan and roadmap for UAS integration. FAA, with the assistance of JPDO, is developing several planning documents required by the 2012 Act, including a 5-year roadmap and comprehensive plan to outline steps toward safe integration. As of January 2013, FAA officials told us they were in the final stages of reviewing and approving these documents and expected to make them publicly available by the February 14, 2013 deadline. In light of the timeframes and complicated tasks involved in achieving the requirements, in September 2012, we recommended that FAA incorporate mechanisms in its 5-year roadmap and comprehensive plan that allow for regular monitoring to assess progress toward safe and routine access of UAS into the national airspace. Incorporating regular monitoring can help FAA understand what has been achieved and what remains to be done and help keep Congress informed about this significant change to the domestic aviation landscape. While FAA concurred with our recommendation, because these documents were not publicly available as of January 2013, it remains unclear whether they include mechanisms for monitoring progress. Changes to the COA process. FAA has changed the existing COA process in response to the 2012 Act, including taking steps to expedite COAs for public safety entities and developing agreements with government agencies to expedite the COA or waiver process. To help expedite COAs for public safety entities, FAA extended the length of UAS authorization from a 12-month period to a 24-month period so that those entities receiving COAs do not have to reapply as frequently. In addition, FAA made additional changes to simplify the COA application process, including automating the application process through an online form. FAA also worked with the Department of Justice's (DOJ) National Institute of Justice to develop an MOU to meet the operational requirements of law enforcement entities, which are expected to be early adopters of small UAS. Officials from both FAA and DOJ have reached agreement on a draft version of the MOU establishing this process. However, this MOU is still under legal review at FAA and DOJ. Test ranges. FAA has taken steps to develop, but has not yet established, a program to integrate UAS at six test ranges, as required by the 2012 Act. As part of these ranges, FAA must safely designate airspace for integrated manned and unmanned flight operations, develop certification standards and air traffic requirements for UAS, ensure the program is coordinated with NextGen, and verify the safety of UAS and related navigation procedures before integrating them into the national airspace.
FAA expects data obtained from these test ranges will contribute to the continued development of standards for the safe and routine integration of UAS. In March 2012, FAA issued a Request for Comments in the Federal Register and received a number of comments. FAA officials told us they are still working to meet all of the specified requirements for the test ranges and had expected to issue a Screening Information Request to initiate the competitive bid process for selecting the six test ranges in July 2012. However, because of privacy concerns expressed by commenters regarding the collection and use of UAS-acquired data, the internal review process of the Screening Information Request was delayed. FAA officials said they hired a privacy expert to help develop a strategy to address these concerns and are working to incorporate this strategy in the Screening Information Request. As of January 2013, officials noted that FAA expects to release the Screening Information Request in the next 4 to 6 weeks. Rulemaking. While FAA has efforts under way supporting a rulemaking for small UAS, as required by the 2012 Act, it is uncertain whether FAA will meet the August 2014 deadline. In fact, the agency's rulemaking efforts for UAS date back more than 5 years, when it established the small UAS Aviation Rulemaking Committee in 2008. In August 2011, FAA initially provided the Secretary of Transportation with its draft Notice of Proposed Rulemaking (NPRM). FAA officials told us in January 2013 that the FAA is still internally reviewing the draft and working to agree on the NPRM's language. According to the officials, FAA has not determined when it might issue the NPRM. As we reported in 2012, many entities have research and development efforts under way to mitigate obstacles before UAS are allowed to operate safely and routinely in the national airspace. Some of these obstacles and related research include vulnerabilities in UAS operations, such as sense and avoid; command, control, and communications, including lost link, dedicated radio-frequency spectrum, and Global Positioning System (GPS) jamming and spoofing; and human factors. However, these research and development efforts cannot be completed and validated without safety, reliability, and performance standards, which have not yet been developed because of data limitations. To date, no suitable technology has been deployed that would provide UAS with the capability to sense and avoid other aircraft and airborne objects and to comply completely with FAA regulatory requirements of the national airspace. However, research and development efforts by FAA, DOD, NASA, and MITRE, among others, suggest that potential solutions to the sense and avoid obstacle may be available in the near term. Since 2008, FAA and other federal agencies have managed several research activities to support meeting the sense and avoid requirements. DOD officials told us that the Department of the Army is working on a ground-based sense and avoid system that will detect other airborne objects and allow the pilot to direct the UAS to maneuver to a safe location. The Army has successfully tested one such system, but it may not be usable on all types of UAS. Another potential system to address this obstacle is an airborne sense and avoid system, which could equip UAS with the same GPS-based transponder system that will be used in FAA's NextGen air-traffic-management system and with which some manned aircraft are starting to be equipped.
In 2012, NASA researchers at Dryden Flight Research Center successfully tested an automatic dependent surveillance-broadcast (ADS-B) transponder system on its Ikhana UAS. An airborne sense and avoid system could include ADS-B, along with other sensors such as optical/infrared cameras and radar. Ensuring uninterrupted command and control for both small and large UAS remains a key obstacle for safe and routine integration into the national airspace. Since UAS fly based on pre-programmed flight paths and by commands from a pilot-operated ground control station, the ability to maintain the integrity of command and control signals is critically important to ensure that the UAS operates as expected and as intended. In a “lost link” scenario, the command and control link between the UAS and the ground control station is broken because of either environmental or technological issues, which could lead to loss of control of the UAS. To address this type of situation, UAS generally have pre-programmed maneuvers that may direct the UAS to hover or circle in the airspace for a certain period of time to reestablish its radio link. If the link is not reestablished, then the UAS will return to “home,” the location from which it was launched, or execute an intentional flight termination at its current location. It is important that air traffic controllers know where and how all aircraft are operating so they can ensure the safe separation of aircraft in their airspace. FAA and MITRE have been measuring the impacts of lost link on national airspace safety and efficiency, but the standardization of lost link procedures, for both small and large UAS, has not been finalized. Currently, according to FAA, each COA has a specific lost link procedure unique to that particular operation, and air traffic controllers should have a copy for reference at all times. Until procedures for a lost link scenario have been standardized across all types of UAS, air traffic controllers must rely on the lost link procedures established in each COA to know what a particular UAS will do in such a scenario. Progress has been made in obtaining additional dedicated radio-frequency spectrum for UAS operations, but additional dedicated spectrum, including satellite spectrum, is still needed to ensure secure and continuous communications for both small and large UAS operations. The lack of protected radio-frequency spectrum for UAS operations heightens the possibility that a pilot could lose command and control of a UAS. Unlike manned aircraft—which use dedicated, protected radio frequencies—UAS currently use unprotected radio spectrum and, like any other wireless technology, remain vulnerable to unintentional or intentional interference. This remains a key security and safety vulnerability because, in contrast to a manned aircraft in which the pilot has direct physical control of the aircraft, interruption of radio transmissions can sever the UAS’s only means of control. UAS stakeholders are working to develop and validate hardware and standards for communications operating in allocated spectrum. For example, FAA’s UAS Research Management Plan identified 13 activities designed to mitigate command, control, and communication obstacles. One effort focused on characterizing the capacity and performance impact of UAS operations on air-traffic-control communications systems.
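Returning to the lost link scenario described earlier, the pre-programmed behavior amounts to a small contingency routine. The Python sketch below is illustrative only, with an assumed timeout value and placeholder function names; actual lost link logic is defined per aircraft and per COA and, as noted above, has not been standardized.

```python
import time

LOITER_TIMEOUT_SECONDS = 300  # assumed value; real timeouts are set per aircraft and per COA

def lost_link_procedure(link_is_restored, loiter_one_cycle, return_to_home):
    """Loiter while attempting to reestablish the command link; if the loiter
    window expires, fly the pre-programmed contingency (here, return to home).
    The three arguments are placeholder callables standing in for autopilot
    functions on a particular aircraft."""
    deadline = time.monotonic() + LOITER_TIMEOUT_SECONDS
    while time.monotonic() < deadline:
        if link_is_restored():
            return "link reestablished; resuming commanded flight"
        loiter_one_cycle()  # hover or circle for one control cycle
    return_to_home()        # an alternative contingency is intentional flight termination
    return "link not reestablished; executing return-to-home contingency"
```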
In addition, according to NASA, it is developing, in conjunction with Rockwell Collins, a prototype radio for a control and non-payload communications data link that would provide secure communications. The jamming of the GPS signal being transmitted to the UAS could also interrupt the command and control of UAS operations. In a GPS jamming scenario, the UAS could potentially lose its ability to determine its location, altitude, and the direction in which it is traveling. Low-cost devices that jam GPS signals are prevalent. According to one industry expert, GPS jamming would become a larger problem if GPS were the only method for navigating a UAS. This problem can be mitigated by having a second or redundant navigation system onboard the UAS that is not reliant on GPS, which is the case with larger UAS typically operated by DOD and DHS. Encrypting civil GPS signals could make it more difficult to “spoof” or counterfeit a GPS signal that could interfere with the navigation of a UAS. Non-military GPS signals, unlike military GPS signals, are not encrypted, and their transparency and predictability make them vulnerable to being counterfeited, or spoofed. In a GPS-spoofing scenario, the GPS signal being received by the UAS is first counterfeited and then overpowered. Once the authentic (original) GPS signal is overpowered, the UAS is partially under the control of the “spoofer.” This type of scenario was recently demonstrated by researchers at the University of Texas at Austin at the behest of DHS. During the demonstration at the White Sands Missile Range, researchers spoofed one element of the unencrypted GPS signal of a fairly sophisticated small UAS (mini-helicopter) and induced it to plummet toward the desert floor. The research team found that it was straightforward to mount an intermediate-level spoofing attack, such as controlling the altitude of the UAS, but difficult and expensive to mount a more sophisticated attack. The research team recommended that spoof-resistant navigation systems be required on UAS exceeding 18 pounds. UAS stakeholders have been working to develop solutions to human factor issues for both small and large UAS. According to FAA, human factors research examines the interaction between people, machines, and the environment to improve performance and reduce errors. Human factors are important for UAS operations as the pilot and aircraft are not collocated. The separation of pilot and aircraft creates a number of issues, including loss of sensory cues valuable for flight control, delays in control and communications loops, and difficulty in scanning the visual environment surrounding the unmanned aircraft. As part of its UAS Integration in the National Airspace System Project, NASA is working to develop human factor guidelines for ground control stations and plans to share the results with RTCA SC-203 to inform recommended guidelines. In addition, the Department of the Army is working to develop universal ground control stations, which would allow UAS pilots to fly different types of UAS without having to be trained on multiple configurations of a ground control station. The development of standards for UAS operations is a key step in the process of safe integration and supporting research and development efforts.
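One simple way to use the redundant, non-GPS navigation source described earlier is as a consistency check on the GPS solution. The Python sketch below is illustrative only, with an arbitrary divergence threshold; operational systems rely on far more sophisticated filtering and integrity monitoring than this.

```python
import math

DIVERGENCE_LIMIT_METERS = 150.0  # arbitrary threshold chosen for illustration

def gps_solution_is_consistent(gps_position, inertial_position):
    """Treat the GPS fix as suspect (possible jamming or spoofing) when it
    diverges from the independent navigation estimate by more than the limit.
    Positions are (x, y) coordinates in meters in a local frame."""
    dx = gps_position[0] - inertial_position[0]
    dy = gps_position[1] - inertial_position[1]
    return math.hypot(dx, dy) <= DIVERGENCE_LIMIT_METERS

print(gps_solution_is_consistent((1000.0, 2000.0), (1010.0, 1995.0)))  # True
print(gps_solution_is_consistent((1000.0, 2000.0), (1600.0, 2400.0)))  # False: fall back to the redundant system
```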
Setting standards, certification criteria, and procedures for sense and avoid systems as well as protocols to be used for the certification of command, control, and communication systems will guide research and development efforts toward a specifically defined goal. Once the standards are developed, FAA will use the standards in UAS regulations. Currently, UAS continue to operate as exceptions to the regulatory framework rather than being governed by it. Without specific and permanent regulations for safe operation of UAS, federal stakeholders, including DOD and NASA, continue to face challenges and limitations on their UAS operations. The lack of final regulations could hinder the acceleration of safe and routine integration of UAS into the national airspace. Standards-making bodies are currently developing safety, reliability, and operational standards. While progress has been made, the standards development process has been hindered, in part, because of FAA’s inability to use safety, reliability, and performance data from DOD, the need for additional data from other sources, as well as the complexities of UAS issues in general. As we previously reported, while DOD provided FAA with 7 years of data in September 2011, FAA officials told us they have been unable to use this data to develop standards because of differences in definitions and uncertainty about how to analyze these data. To mitigate these challenges FAA has been working with DOD to develop an MOU and better identify what data are needed. Finally, FAA is also working with MITRE to develop a data collection tool that will allow officials to better analyze the data they receive from DOD. The establishment of six test ranges, as previously discussed, and the designation of permanent areas of operation in the Arctic could provide FAA with two potential new sources of safety, reliability, and performance data for UAS. However, it is unclear when the test ranges and Arctic area will be operational. Use of these data will be important in developing safety, reliability, and performance standards, which are needed to guide and validate the supporting research and development efforts. According to an RTCA official, both DOD and NASA are sharing the results of their UAS flight experience and research and development efforts to assist RTCA in the standards development process. The RTCA official suggested that the standards-making process might be accelerated if it could start by producing an initial set of standards for a specific UAS with a clearly defined mission. The committee could then utilize those initial standards, along with the subsequent safety and performance data from those operations, to develop additional standards for increasingly complex UAS functions and missions. FAA and NASA are taking steps to ensure the reliability of both small and large UAS by developing a certification process specific to UAS. Currently, FAA has a process and regulations in place for certifying any new manned aircraft type and allowing it access to the national airspace. FAA’s Research and Development office is working to identify the substantive differences in how to meet the certification standards for manned and unmanned aircraft. According to its 2012 Research Management Plan, the office has six activities under way that support the development of UAS-specific certification and airworthiness standards. In closing, UAS integration is an undertaking of significant breadth and complexity that touches several federal agencies. 
Congress has highlighted the importance of UAS integration by establishing statutory requirements and setting deadlines for FAA. FAA, as the lead agency, faces the daunting task of ensuring that all of the various efforts within its own agency, as well as across agencies and other entities, will align and converge in a timely fashion to achieve UAS integration within these deadlines. Because of concerns about the agency’s ability to meet deadline requirements, we recommended that FAA incorporate regular monitoring of its efforts to assess progress toward fulfilling its requirements outlined in the 2012 Act. Incorporating regular monitoring will help to inform stakeholders and Congress about what has been achieved and what remains to be done and help FAA build stakeholder confidence in its ability to achieve UAS integration in a safe and timely manner. In addition, the various entities’ research and development efforts require continued collaboration to address the critical issues that need to be resolved before UAS are allowed to operate safely and routinely in the national airspace. This collaboration will be important to help align research and development goals across federal agencies and minimize duplication of research or inefficient use of resources. Chairman Broun, Ranking Member Maffei, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include H. Brandon Haller, Assistant Director; Heather Krause, Assistant Director; Cheryl Andrew; Colin Fallon; Rebecca Gambler; Geoffrey Hamilton; Daniel Hoy; Brian Lepore; Sara Ann Moessbauer; Faye Morrison; Jeffrey Phillips; Nalylee Padilla; and Melissa Swearingen. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Unmanned aircraft systems are aircraft and associated equipment that do not carry a pilot aboard, but instead operate on pre-programmed routes or are manually controlled by pilot-operated ground stations. Although current domestic uses of UAS are limited to activities such as law enforcement, forensic photography, border security, and scientific data collection, UAS also have a wide range of other potential commercial uses. According to an industry forecast, the market for UAS is expected to grow and could be potentially worth $89 billion over the next decade. Concerned with the pace of UAS integration into the national airspace, Congress established specific requirements and set deadlines for FAA in the 2012 FAA Modernization and Reform Act (the 2012 Act). This testimony discusses 1) the roles and responsibilities of and coordination among federal agencies and other UAS stakeholders involved in integrating UAS, 2) FAA’s progress in complying with the 2012 Act’s UAS requirements, and 3) research and development efforts by FAA and other entities to address challenges for safely integrating UAS. This testimony is based on a 2012 GAO report. In past work, GAO analyzed FAA’s efforts to integrate UAS into the national airspace, the role of other federal agencies in achieving safe and routine integration, and research and development issues. GAO also conducted selected interviews with officials from FAA and other federal agencies, industry, and academic stakeholders. While Congress has tasked FAA to lead the effort of safely integrating unmanned aircraft systems (UAS) in the national airspace, several federal and other entities also have a role. FAA has established various mechanisms to facilitate collaboration with these entities. For example, FAA has entered into formal agreements with the Department of Defense (DOD) and the National Aeronautics and Space Administration (NASA) on obtaining appropriate safety data and coordinating research and development, respectively. FAA has also involved industry stakeholders and academia in the development of standards and research for UAS operations. FAA recently created the UAS Integration Office, within FAA, to coordinate all intra-agency UAS efforts and provide organizational leadership. Continued collaboration among UAS stakeholders will be critical to minimizing duplication of research and addressing implementation obstacles. While FAA has made progress toward meeting the 2012 Act's requirements, as of January 2013, it has missed several of its deadlines. FAA continues to face challenges, with many of its efforts still in process. For example, the establishment of six test ranges for UAS operations, as required by the 2012 Act, is being delayed due to privacy concerns. Meeting the 2012 Act's requirements moving forward will require continued collaboration and significant work for FAA. In September 2012, GAO recommended that FAA incorporate mechanisms in its planning that allow for regular monitoring to assess its progress. Such mechanisms can help FAA identify what has been achieved and what remains to be done. Research and development efforts are under way to mitigate obstacles to safe and routine integration of UAS into the national airspace. However, these research and development efforts cannot be completed and validated without safety, reliability, and performance standards, which have not yet been developed because of data limitations. 
GAO previously reported that FAA has not used the operational data it already possesses, such as data provided by DOD.
In the 1980s NWS began a nationwide modernization program to upgrade observing systems such as satellites and radars, and design and develop advanced computer workstations for forecasters. The goals of the modernization are to achieve more uniform weather services across the nation, improve forecasting, provide better detection and prediction of severe weather and flooding, permit more cost-effective operations through staff and office reductions, and achieve higher productivity. For example, NWS plans to reorganize its field office structure from 256 offices (52 Weather Service Forecast Offices and 204 Weather Service Offices) to 121. As of February 1999, NWS officials told us that 132 offices had been closed. NWS' system modernization includes four major systems development programs, which are expected to collectively cost about $4.5 billion. I would like to briefly describe each. Next Generation Weather Radar (NEXRAD). This is a program to acquire 166 Doppler radars. Largely deployed, these radars have helped NWS increase the accuracy and timeliness of warnings for severe thunderstorms, tornadoes, and other hazardous weather events. The reported cost of this program is just under $1.5 billion. Next Generation Geostationary Operational Environmental Satellite (GOES-Next). This is a program to acquire, launch, and control five geostationary satellites, GOES-I through GOES-M, which assist in the mission of identifying and tracking severe weather events, such as hurricanes. The first satellite in the current series was launched in 1994 and the fifth is scheduled for launch in 2002. The total cost for these five satellites, including launch services and ground systems, is estimated to be just under $2 billion. Automated Surface Observing System (ASOS). This is a program to automate and enhance methods for collecting, processing, displaying, and transmitting surface weather conditions, such as temperature and precipitation. The system is planned for installation at 314 NWS locations. Estimated costs for the ASOS Program are about $350 million, which includes the NWS units and another 679 units for the Federal Aviation Administration and the Department of Defense. Advanced Weather Interactive Processing System (AWIPS). This program integrates, for the first time, satellite, radar, and other data to support weather forecaster decision-making and communications; it is the linchpin of the NWS modernization. AWIPS, which was originally scheduled to be developed incrementally in a series of six modules, or builds, is currently set to be deployed to 152 locations after the fourth build by the end of June 1999. In 1995 we designated the NWS modernization a high-risk area for the federal government because of its estimated $4.5 billion cost, its complexity, its criticality to NWS' mission of helping to protect life and property through early forecasting and warnings of potentially dangerous weather, and its past problems--documented in several of our reports. Our 1997 high-risk series reported that although the development and deployment of the observing systems associated with the modernization were nearing completion, unresolved issues remained. These concerned the systems' operational effectiveness and efficient maintenance. For example, new radars were not always up and running when severe weather threatened, and ground-based sensors fell short of user expectations, particularly during active weather.
We recommended that NWS correct shortfalls in radar performance, and define and prioritize all ground-based sensor corrections according to user needs. Some of our radar and ground-based sensor performance concerns were addressed, while others remain. We recently reported that a NEXRAD unit in southern California failed to consistently meet NWS’ own NEXRAD availability requirement, and recommended that the Weather Service correct the problem such that the radar meets availability requirements. NWS agreed, and has several activities planned to bring about such improvement. While there have been specific performance problems, NWS reports that the new radars and satellites overall have enabled it to generate better data and greatly improved forecasts and warnings. We continue to view the NWS modernization as a high-risk area, however, for two primary reasons: (1) NWS lacks an overall architecture to guide systems development and (2) the final piece of the modernization--AWIPS (the forecaster workstations that will integrate weather data from NEXRAD, GOES-Next, and ASOS)--has not yet been deployed. At this point I would like to discuss these issues in more detail. A systems architecture is an essential tool for guiding effective and efficient systems development and evolution. We initially reported in 1994 that the NWS modernization needed such an overall technical blueprint; NWS agrees--and is currently working on one. Until such an architecture is developed and enforced, the modernization will continue to be subject to higher costs and reduced performance. This is an important point as component systems continue to evolve to meet additional demands and take advantage of improved technology. The Assistant Administrator for Weather Services shares this view, and said recently that NWS plans to intensify its efforts to develop a systems architecture. Until AWIPS is fully deployed and functioning properly, NWS will not be able to take full advantage of the $4.5 billion total investment it has made in the modernization. Over the past several years, we have reported that AWIPS has encountered delays and cost increases due to design problems and management shortcomings and have made several recommendations to improve management of this critical component of the modernization. NWS has acted on most of our recommendations. I would like to now update you on AWIPS’ cost, schedule, software development, and maintenance. The cost to develop AWIPS was estimated at $350 million in 1985; a decade later, that figure had risen to $525 million. However, in testimony and a report issued in 1996, we pointed out the inaccuracy of this $525 million estimate due to the omission of several cost factors, including known contract increases. The Department of Commerce later committed to a $550 million funding cap. Yet as we testified in April 1997, it would prove extremely difficult for NOAA to develop and deploy AWIPS within the $550 million cap if any problems were encountered. Given the size and complexity of the development--and recognizing that even managed risks can turn into real problems--we testified that such problems were likely to occur and that costs would likely exceed $550 million. In accordance with a recommendation we made in 1996, the department contracted for an independent cost estimate of AWIPS because of the uncertainty about whether it could be delivered within the $550 million cap given the increased software development expenses. 
According to the assessment dated February 2, 1998, the likely cost to complete AWIPS through its final build--build 6--was $618 million. In March 1998, we reported that although AWIPS was planned for full deployment through build 6 in 1999--at 152 locations nationwide--that schedule is now in doubt. The latest schedule calls only for build 4--actually build 4.2--to be completed in June, within the $550 million cap. Also, as we testified last year, completion dates for builds 5 and 6 were uncertain because NWS wanted to ensure that requirements for those modules were not extraneous to mission needs, in order to minimize future cost increases. This reflects a recommendation we made in 1996 for all AWIPS builds. In August 1998, an independent review team reported that build 5 requirements are essential to NWS' core mission and that the cost to complete should range from an additional $20 million to $25 million above the $550 million cap. The team concluded that build 6 requirements should not be pursued, however, because they "resemble capabilities desired, rather than requirements." According to the AWIPS program manager, deployment of build 4.2 will result in improved forecasts and warnings, a reduction of 106 staff, and the decommissioning of the current Automation of Field Operations and Services (AFOS) system. The program manager added that build 5 will be pursued in order to realize expected further improvements in weather forecasts and warnings, a reduction of an additional 69 staff, and the decommissioning of the NEXRAD workstations. Schedules for build 5 have not yet been developed. To help ensure that build 4.2 will be delivered within the cap, the Assistant Administrator for Weather Services has contracted with an independent accounting firm to verify program expenditures. The most critical risk factors underlying questions about AWIPS' future relate to software development. We have frequently reported on this and made several recommendations to improve AWIPS' software development processes. Software quality is governed largely by the quality of the processes used to develop it; however, NWS' efforts to develop AWIPS software have lacked defined development processes. Such processes are all the more essential because of NWS' increased use of software code developed internally at NOAA's Forecast Systems Laboratory (FSL) in Boulder, Colorado--a research and development facility that primarily develops prototype systems. This software code has not been developed according to the rigorous processes commonly used to develop production-quality code. Failure to adhere to these processes may result in unstable software that will continue to cause cost increases and schedule delays. The cost assessment delivered in February 1998 also found risk inherent in the development of builds 4 through 6 because of the transitioning of FSL-developed software to AWIPS and the uncertainty surrounding requirements for these builds. NWS officials have acknowledged these software development process weaknesses, and have told us that they continue to strengthen these processes. For example, NWS reports that all AWIPS software, both that developed by the government and the contractor, is being controlled under a common configuration management process. Another risk area concerns the network control facility, which provides the ability to monitor and maintain AWIPS sites across the country from a single location.
As we testified last year, through build 3, AWIPS was still experiencing difficulty with the central location's ability to detect and respond to problems. We further testified that, because these problems arose while only a limited number of sites were on line, problems could be expected to increase as more sites came on line. NWS officials have acknowledged that the poor performance of the network control facility continues to be a prime concern, have sought the advice of external consultants, and have initiated a number of actions to improve performance of this facility. Finally, a critical risk area is whether the AWIPS builds--and, indeed, all modernization components--will be Year 2000 compliant. AWIPS to date is not Year 2000 compliant. Build 4.2--set for completion this June--is intended to make all AWIPS applications Year 2000 compliant. In the event it is late, NWS has renovated its current system, Automation of Field Operations and Services, to be ready as a potential backup. Yet even if Year 2000 compliance ceases to be an issue with build 4.2, NWS' companion modernization systems will need to be compliant because of the amount of data they exchange. NWS reports that five of the six mission-critical systems that interface with AWIPS are already Year 2000 compliant, on the basis of individual systems tests. The remaining system is scheduled to be compliant by March 31, 1999, according to the Department of Commerce's February 1999 Quarterly Year 2000 Progress Report to the Office of Management and Budget. To ensure that these mission-critical systems can reliably exchange data with other systems and that they are protected from errors that can be introduced by external systems, NWS has begun to perform end-to-end testing. These tests include multiple Weather Service systems working together and critical interfaces with the Department of Defense and the Federal Aviation Administration. NWS plans to continue to conduct this end-to-end testing through March of this year. The final report on the results of these tests is scheduled to be issued this May. We suggest that NWS consider conducting additional end-to-end testing after the final version of AWIPS is delivered, which is currently scheduled for this June. Currently, NWS is using a prior version of AWIPS in its end-to-end testing—a version that continues to be modified as AWIPS' system-level testing progresses. Testing with the final version of AWIPS will help to ensure that the production system that will be running in the year 2000 will work with its interrelated systems. To reduce the risk and potential impact of Year 2000-induced information systems failures on the Weather Service's core business processes, it is critical that NWS have contingency plans in place that will help ensure continuity of operations through the turn of the century. Without such plans, NWS will not have well-defined processes to follow in the event of failures. NWS depends on data provided by other federal agencies as well as on services provided by the public infrastructure (e.g., power, water, voice and data telecommunications). One weak link anywhere in this chain of critical dependencies could cause major disruption to NWS operations. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes.
According to NWS’ Year 2000 program manager, the Weather Service has begun drafting contingency plans for three core business processes: those that (1) observe weather data, (2) produce forecasts and warnings, and (3) disseminate data. It is essential that NWS develop these business continuity and contingency plans expeditiously, and test these plans to ensure that they are capable of providing the level of support needed to allow continued functioning of NWS’ core business processes in the event of failure. As noted in our business continuity and contingency guide, another key element of such a plan is the development of a zero day or day one risk reduction strategy and, more generally, procedures for the period between December 1999 and early January 2000. Key aspects of this strategy can include the implementation of (1) an integrated control center, whose purposes include the internal dissemination of critical data and problem management and (2) a timeline that details the hours in which certain events will occur (such as when backup generators will be started) during the late December and early January rollover period. To date, NWS has no such strategy. We suggest that the development of such a risk reduction strategy be undertaken. In conclusion, NWS has made progress on the development and operational testing of the forecaster workstations and its Year 2000 testing and contingency planning. However, cost, schedule, and technical risks associated with the workstations continue to be concerns. Further, the results of NWS’ Year 2000 end-to-end testing and business continuity and contingency plans are expected to be delivered soon. NOAA has an aging in-house fleet of 15 ships that are used to support its programs in fisheries research, oceanographic research, and hydrographic charting and mapping. Most of NOAA’s ships are past their 30-year life expectancies and many of them are costly and inefficient to operate and maintain and lack the latest state-of-the-art technology. NOAA’s ships are managed and operated by a NOAA Corps of about 240 uniformed service commissioned officers who, like the Public Health Service Corps, perform civilian rather than military functions but are covered by a military-like pay and benefits system. For more than a decade, congressional committees, public and private sector advisory groups, the National Performance Review, the Commerce Office of Inspector General (OIG), and our office have urged NOAA to aggressively pursue cost-effective alternatives to its in-house fleet of ships. We have also reported and testified on issues relating to NOAA’s Commissioned Corps that manages and operates the in-house fleet of ships. We reported on NOAA’s fleet operations and fleet modernization needs in 1986 and again in 1994 and summarized our earlier work, the Commerce OIG’s work, and the Department of Commerce’s corrective actions in summary reports in January 1998 and January 1999. As part of our recent special performance and accountability series of reports, we identified the NOAA fleet as one of four major performance and management issues confronting the Department of Commerce. As early as 1986 we reported that NOAA needed to develop more definitive information on private ships’ availability, capability, and costs before taking any action to deactivate NOAA’s ships. 
In 1994, we reported that NOAA (1) lacked the financial and operational data it needed to adequately assess whether chartered and contracted ships could cost-effectively meet the needs of its programs and (2) had no assurance that its fleet modernization plan represented the most cost-effective means of meeting future program requirements. Consequently, we recommended that NOAA take several actions to ensure that all viable and cost-effective options for accomplishing its program missions are considered in making decisions on future fleet modernization. The Commerce OIG has also reported and testified several times on the NOAA fleet modernization issue, identified the fleet as one of the top 10 management problems facing the Department of Commerce in April 1997, January 1998, and again in December 1998, and continues to believe that NOAA could and should be doing more to pursue cost-effective alternatives to its in-house fleet of ships for acquiring marine data. Following reports by us, the Commerce OIG, and others, the Department of Commerce initially identified the NOAA fleet as a material weakness in its annual Federal Managers' Financial Integrity Act (FMFIA) report for fiscal year 1990. It remains a material weakness today. Since 1990, NOAA has developed several fleet replacement and modernization plans that call for investments of hundreds of millions of dollars to upgrade or replace these ships, and each has been criticized by the Commerce OIG for not pursuing alternative approaches strongly enough. For example, in a 1996 program evaluation report on NOAA's $1 billion 1995 fleet modernization plan, the OIG recommended that NOAA terminate its fleet modernization efforts; cease investing in its ships; immediately begin to decommission, sell, or transfer them; and contract for the required data or ship services. In response to these criticisms, NOAA now says that it has taken steps to improve the cost efficiency of its fleet and significantly increased its outsourcing for these services from about 15 percent in 1990 to over 40 percent today. According to NOAA, for example, it has removed seven ships from service and brought one new and two converted Navy ships into service since 1990, now outsources for about 46 percent of its research and survey needs, and expects to further increase its use of outsourcing to about 50 percent over the next 10 years. Although NOAA apparently has made progress in reducing the costs of its fleet and outsourcing for more of its research and data needs, NOAA continues to rely heavily on its in-house fleet and still plans to replace or upgrade some of these ships. In this regard, the President's budget for fiscal year 2000 proposes $52 million for construction of a new fisheries research ship and indicates that NOAA plans to spend a total of $185 million for four new replacement ships over the 5-year period ending in fiscal year 2004--$52 million in 2000, $51 million in 2001, $40 million in 2002, $40 million in 2003, and $2 million in 2004. We have not had an opportunity to review the latest studies of NOAA's fleet modernization efforts or NOAA's acquisition plan for its fisheries research mission. Thus, we do not know whether or not NOAA's proposed replacement ships are the most cost-effective alternative currently available for meeting these fisheries research needs. In addition to its proposed acquisitions, NOAA also continues to repair and upgrade its aging fleet of existing ships.
Since 1990, it has repaired and upgraded seven of its existing ships and plans to repair and upgrade two more in 1999. According to the President's recent budget requests, NOAA spent $12 million in 1996 and $13 million in 1997 to modernize, convert, and replace its existing ships. Also, it spent $21 million on fleet maintenance and planning in 1998 and expects to spend $13 million in 1999 and $9 million in 2000. The question of the viability of the NOAA fleet is entwined with the issue of the NOAA Corps, which operates the fleet. In 1995, the National Performance Review, noting that the NOAA Corps was the smallest uniformed service and that the fleet it commanded was obsolete, recommended that the NOAA Corps be gradually reduced in numbers and eventually eliminated. We reported in October 1996 that the NOAA Corps generally does not meet the criteria and principles cited by the Department of Defense for a military compensation system. We also noted that other agencies, such as the Navy, the Environmental Protection Agency (EPA), and the Federal Emergency Management Agency (FEMA), use federal civilian employees or contractors to carry out duties similar to the functions that NOAA assigns to the Corps. Commerce developed a plan and legislative proposal to "disestablish" or civilianize the NOAA Corps in 1997, but the Congress did not adopt this proposal. According to NOAA and to the Department of Commerce's annual performance plans for fiscal years 1999 and 2000 under the Results Act, the NOAA Corps has been downsized from over 400 officers in fiscal year 1994 to about 240 at the beginning of fiscal year 1999, achieving gross annual cost savings of at least $6 million. In June 1998, NOAA announced a new restructuring plan for the NOAA Corps. NOAA's plan focused on the need for a NOAA Commissioned Corps of about 240 officers. NOAA's June 1998 restructuring plan also called for a new civilian director of the NOAA Corps and a new recruiting program. However, the Congress had other ideas. The Omnibus Appropriations Act for fiscal year 1999 set the number of NOAA Corps officers at 250. Subsequently, the Governing International Fishery Agreement Act (Public Law 105-384, approved November 13, 1998) made other changes in NOAA's proposed restructuring plan. This act authorized a NOAA Corps of at least 264 but not more than 299 commissioned officers for fiscal years 1999 through 2003, required that a uniformed flag officer be the NOAA Corps' operational chief, and directed the Secretary of Commerce to lift the then-existing recruiting freeze on NOAA Corps officers. According to the NOAA Corps, it expects to have about 250 commissioned officers by the end of fiscal year 1999. In summary, NWS faces significant challenges this year—both in deploying the initial version of AWIPS and in addressing the Year 2000 problem. Longer term, NWS still needs to develop an overall systems architecture and to develop AWIPS' build 5 requirements since they are essential to NWS' core mission. In the NOAA fleet area, continuing congressional oversight of NOAA's budget requests for replacement or upgraded ships is needed to ensure that NOAA is pursuing the most cost-effective alternatives for acquiring marine data. This concludes our statement. We would be happy to respond to any questions that you or other members of the Subcommittee may have at this time.
Pursuant to a congressional request, GAO discussed the: (1) status of the National Weather Service (NWS) systems modernization; and (2) most cost-effective alternatives for acquiring the National Oceanic and Atmospheric Administration's (NOAA) marine data. GAO noted that: (1) although NWS is nearing completion of its systems modernization effort, two significant challenges face it this year: (a) deploying the final system of modernization; and (b) ensuring that all of its mission-critical systems are year 2000 compliant; (2) NWS has made progress on both fronts; (3) in the NOAA fleet area, NOAA now outsources for more of its research and data needs but plans to spend $185 million over the next five years to acquire four new replacement NOAA fisheries research ships; and (4) thus, GAO believes that continued congressional oversight of this area, as well as NOAA's budget requests for replacement or upgraded ships, is needed to ensure that NOAA is pursuing the most cost-effective alternatives for acquiring marine data.
Physician practices that charge membership or retainer fees and provide enhanced services or amenities are referred to as concierge care or retainer-based medicine. The origins of this practice approach are often traced to a medical practice founded in Seattle, Washington, in 1996. Physicians in this practice provide comprehensive primary care to no more than 100 patients each and currently charge annual retainer fees of $13,000 for individuals. These physicians do not bill any form of patient health insurance. As more physicians have begun concierge practices, concierge care has become more diverse, comprising physicians who bill patient insurance, charge lower membership fees, and see more patients than the original Seattle practice. The American Medical Association (AMA) has described concierge care as one of many options that patients and physicians are free to pursue. AMA in 2003 adopted ethics guidelines for physicians who have concierge care contracts—which AMA calls retainer contracts—with their patients. These guidelines specify, for example, that physicians should facilitate the transition to new physicians for patients who choose not to join their concierge practices and that they must observe relevant laws, rules, and contracts. The Medicare program was established by title XVIII of the Social Security Act, which governs how physicians bill for services that the program covers. Limits on what physicians may charge their Medicare patients depend on (1) the relationship between the physician and the Medicare program and (2) the type of service provided. Physicians who provide services to Medicare beneficiaries may choose one of three ways to relate to the program: participating, nonparticipating, or opted out. Participating: Participating physicians agree to accept Medicare’s fee schedule amount as payment in full for all covered services they provide to beneficiaries. In accordance with the Medicare participation agreement, these physicians receive reimbursement directly from the Medicare program and agree to charge beneficiaries only for any applicable deductible or coinsurance. More than 90 percent of the physicians and others who billed Medicare agreed to participate in Medicare in 2004. Nonparticipating: Nonparticipating physicians do not agree to accept the Medicare fee schedule amount paid to participating physicians as payment in full for all covered services they provide to beneficiaries. They are still subject to limits on what they may charge, however, and those limits depend on whether they seek reimbursement directly from Medicare. When a nonparticipating physician files a claim to be reimbursed directly from Medicare, he or she must accept the Medicare fee schedule amount for nonparticipating physicians, which is 95 percent of the fee schedule amount for participating physicians, as payment in full and may charge the beneficiary only for any applicable Medicare coinsurance or deductible. When a nonparticipating physician does not request reimbursement directly from Medicare, he or she may charge the Medicare beneficiary up to 115 percent of the fee schedule amount for nonparticipating physicians. Opted-out: Physicians who opt out of Medicare are not subject to any limits on what they may charge their Medicare beneficiary patients, even for services that Medicare would otherwise cover. Physicians who opt out of Medicare must agree not to submit for 2 years any claims for reimbursement for any of the services they provide to Medicare beneficiaries. 
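As an illustration of how these participation rules translate into dollar limits, the following is a minimal sketch; it is not drawn from the report or from any Medicare billing system, the function name and the $100 fee schedule amount in the comments are hypothetical, and the 95 percent and 115 percent factors are simply the ones described above.

```python
from typing import Optional

# Illustrative sketch only: computes the maximum total charge allowed for a
# Medicare-covered service, given a hypothetical fee schedule amount and the
# physician's relationship to the Medicare program.
def max_charge_for_covered_service(fee_schedule_amount: float,
                                   status: str,
                                   files_claim_with_medicare: bool = True) -> Optional[float]:
    """Return the maximum total charge for a covered service, or None where
    no statutory limit applies (opted-out physicians)."""
    if status == "participating":
        # Participating physicians accept the fee schedule amount as payment in
        # full; the beneficiary owes only any applicable deductible or coinsurance.
        return fee_schedule_amount
    if status == "nonparticipating":
        # The nonparticipating fee schedule amount is 95 percent of the
        # participating amount.
        nonpar_amount = 0.95 * fee_schedule_amount
        if files_claim_with_medicare:
            # When the physician bills Medicare directly, the nonparticipating
            # amount must be accepted as payment in full.
            return nonpar_amount
        # Otherwise the charge may be up to 115 percent of the nonparticipating amount.
        return 1.15 * nonpar_amount
    if status == "opted_out":
        # Opted-out physicians set charges by private contract; no Medicare limit.
        return None
    raise ValueError(f"unknown participation status: {status}")

# With a hypothetical $100 fee schedule amount:
#   participating                    -> 100.00
#   nonparticipating, claim filed    ->  95.00
#   nonparticipating, no claim filed -> 109.25 (1.15 x 95.00)
#   opted out                        -> no statutory limit
```

The sketch simply restates the limits above; opted-out arrangements, discussed next, fall outside these limits entirely.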
Contracts between opted-out physicians and their beneficiary patients allow them to make their own financial arrangements for services that would otherwise be covered by Medicare, effectively taking those services outside the program. These contracts must be in writing and they must clearly state that the beneficiary also agrees not to submit claims to Medicare and assumes financial responsibility for all services provided by that physician. In addition to a physician’s Medicare participation status, the type of service provided also determines whether limits apply to physician charges. Physicians and beneficiaries are free to make private financial arrangements for the provision of services that Medicare does not cover. General standard for Medicare coverage: Medicare law states that, to be covered, services must be reasonable and necessary for the diagnosis or treatment of illness or injury or to improve the functioning of a malformed body member. The scope of coverage and the exact type of service that may be reimbursed depend on the circumstances of each case. This medical necessity standard can result in situations where the same service—for example, a comprehensive office visit—is considered medically necessary and reimbursable by Medicare in some circumstances but not others. Specific inclusion in Medicare coverage: Medicare law also establishes coverage for certain specific services. For example, Medicare covers an initial preventive physical examination for beneficiaries who become eligible for Medicare on or after January 1, 2005. Other examples of specific preventive benefits established by statute include immunizations against pneumonia, hepatitis B, and influenza and periodic screening tests for early detection of certain cancers. Specific exclusion from Medicare coverage: Medicare law specifically excludes certain items or services—for example, personal comfort items, purely cosmetic surgery, hearing aids, and routine physical checkups except for the initial preventive examination for newly eligible beneficiaries. Table 1 summarizes the limits on physician charges depending on their Medicare participation status and the type of service provided. Physicians who impose charges on beneficiaries beyond the Medicare limits may be subject to civil monetary penalties. The Secretary of HHS has delegated enforcement of Medicare limits to two different entities within HHS. CMS, which administers the Medicare program, has enforcement authority over the limits that apply to nonparticipating physicians. HHS OIG has enforcement authority over participating physicians’ compliance with the terms of the participation agreement. The Medicare law’s limits on physician charges protect beneficiaries from additional charges for services they are entitled to receive under Medicare. The law does not, however, provide that a beneficiary has the right to receive services from any particular physician. Physicians are free to choose how they will interact with the Medicare program. They may decide to close their practices to new Medicare patients or decline to treat any Medicare beneficiaries at all. Concierge care is practiced by a small number of physicians, located primarily in urban areas on the East and West Coasts. 
Although nearly all of the concierge physicians who responded to our survey reported practicing primary care, they differed in many of the characteristics of practice design, including the annual membership fee charged, number of patients treated, features offered, whether they billed health insurance, and their relationship to the Medicare program. Concierge physicians are few in number and located primarily in urban areas on the East and West Coasts. Since the first Seattle practice was founded in the mid-1990s, the number of concierge physicians has been rising but remains small. We were able to locate 146 concierge physicians in the United States as of 2004—a small number compared with the more than 470,000 physicians who regularly submitted claims to Medicare in 2003. The 146 concierge physicians we identified practiced in 25 states, with the greatest numbers in metropolitan areas on the East and West Coasts. California had the highest number, with 26 concierge physicians, followed by Florida with 22, Washington with 21, and Massachusetts with 17. We identified 1 to 8 concierge physicians in 21 other states, though most of these other states had 5 or fewer. All but 2 of the concierge physicians we located practiced in metropolitan areas. We found the highest numbers of concierge physicians in the metropolitan statistical areas (MSA) of Seattle (19); Boston (17); and West Palm Beach–Boca Raton, Florida (13). Figure 1 presents the locations of 144 concierge physicians we identified who practiced in MSAs throughout the nation. The number of physicians practicing concierge care has increased in recent years. Among the 112 concierge physicians who responded to our survey, the cumulative total number practicing concierge care has increased by more than 10 times in the past 5 years (see fig. 2). About two-thirds of the responding physicians reported that they began to practice concierge care in 2003 or later. The number of responding physicians starting to practice concierge care rose each year after 2000, except in 2004, although we did not include physicians who began practicing concierge care after October 2004. Nearly all of the physicians who responded to our survey reported practicing primary care and most were not new to medical practice. Physicians reported practicing the primary care disciplines of internal medicine (about three-fourths of respondents) and family practice (about one-fourth of respondents). Survey respondents reported being in various stages in their medical careers, from relatively new to practice to decades of experience. More than two-thirds reported having been in medical practice for 15 years or more. The average length of time in medical practice was 19 years, and about one-fourth of the respondents reported being in practice for 25 years or more. See appendix II for additional information provided by survey respondents. Concierge physicians responding to our survey reported a variety of practice characteristics. These included the amount charged to be a concierge patient, practice size, features offered, whether they billed patient health insurance, and their relationship to the Medicare program. The annual membership fee for an individual to join a concierge practice ranged from $60 to $15,000 among the physicians responding to our survey. As shown in figure 3, more than 80 percent of respondents reported annual fees from $500 to $3,999; the most frequently reported annual fee was $1,500. 
Three-fourths of our respondents reported that they waived the membership fee for some of their concierge patients. About one in eight of these physicians reported waiving the fees for 20 percent or more of their concierge patients. Concierge physicians responding to our survey reported, on average, 491 patients under their care as of October 2004—significantly fewer than the average of 2,716 patients they reported for the year before beginning to practice concierge care. Of the total patients they reported in October 2004, an average of 326 were concierge patients—that is, patients who either paid the membership fee or had the fee waived, and were offered the enhanced services or amenities associated with membership. Nearly two-thirds of responding physicians reported having fewer than 400 concierge patients (see fig. 4). Concierge physicians also reported seeing fewer patients per day: the average number of patients physicians reported seeing on a typical day fell to 10 in October 2004 from 26 in the year before they began practicing concierge care. Many respondents reported that they were still establishing their concierge practices and had set targets for the number of concierge patients in their care. Respondents reported target numbers for concierge patients ranging from 10 to 1,300; the two most frequently reported goals were 300 and 600 concierge patients (reported by 23 and 30 respondents, respectively). About 80 percent of respondents reported that they had not yet reached their target number of concierge patients as of October 2004. About 1 in 2 of the respondents who began concierge care in 2001 or earlier reported having met their goal for the number of concierge patients in their practices, compared with about 1 in 7 of those who reported starting their concierge practices on or after January 1, 2002. Concierge physicians may continue, for various reasons, to treat some nonconcierge patients. Thirty-six, about one-third of survey respondents, reported that their individual practices included some nonconcierge patients, while about two-thirds had practices consisting entirely of concierge patients. Physicians who continued to see nonconcierge patients reported doing so for various reasons: to ensure continuity of care for patients who did not join the concierge practice, to maintain a combined concierge and conventional practice, or to see patients as part of a subspecialty practice. Less frequently reported situations in which respondents reported seeing nonconcierge patients included seeing family members of their concierge patients occasionally as a courtesy or when urgent needs arose, and covering for other doctors who were out of town. The concierge physicians responding to our survey reported offering a variety of features, some of which were offered by nearly all the respondents, others by relatively few (see table 2). The most frequently reported features were same- or next-day appointments for nonurgent care, 24-hour telephone access, and periodic preventive-care physical examinations. When asked to list the most important features of concierge care that were not routinely available to their nonconcierge patients, respondents most frequently cited features related to increased time spent with patients, direct patient access to the physician at any time, same- or next-day appointments, and comprehensive preventive and wellness care. 
Concierge physicians responding to our survey reported different ways of interacting with patient health insurance and the Medicare program. Eighty-five respondents, approximately three-fourths, reported that they billed patient health insurance for covered services. Of these 85 physicians, 79 reported they billed Medicare and 6 reported they did not. About one-fourth of the concierge physicians responding to our survey reported that they did not submit any claims to patient health insurance, including Medicare. About three-fourths of our survey respondents reported that they were Medicare participating physicians, and about one-fifth had opted out of Medicare as of October 2004 (see fig. 5). Nationwide, relatively few physicians—approximately 3,000 in 2004—have opted out of the Medicare program. Two principal aspects of concierge care are of interest to the Medicare program and its beneficiaries: its compliance with Medicare requirements and its effect on beneficiary access to physician services. HHS has established general policy on concierge care and alerted physicians to areas of potential noncompliance. Although concierge physicians have followed various strategies to ensure compliance with Medicare requirements, most physicians responding to our survey indicated more HHS guidance would be helpful. Available measures of access to care as of 2004, while not directly addressing concierge care, indicate that Medicare beneficiary access to physician services has been good. The small number of concierge physicians makes it unlikely that the approach has contributed to widespread access problems. HHS has established general policy on concierge care and has alerted physicians to areas of potential noncompliance. Concierge physicians have expressed the need for additional guidance and have taken various steps—such as structuring their practices in an attempt to avoid associating their membership fees with Medicare-covered services or opting out of Medicare—to avoid compliance problems. CMS outlined its position on concierge care in a March 2002 memorandum to CMS regional offices that CMS officials told us remains current as of June 2005. The memorandum states that physicians may enter into retainer agreements with their patients as long as these agreements do not violate any Medicare requirements. For example, concierge care membership fees may constitute prohibited additional charges if they are for Medicare-covered items or services. If so, a physician who has not opted out of Medicare would be in violation of the limits on what she or he may charge patients who are Medicare beneficiaries. HHS OIG has addressed the consequences of noncompliance with Medicare billing requirements. In March 2004, HHS OIG issued an alert "to remind Medicare participating physicians of the potential liabilities posed by billing Medicare patients for services that are already covered by Medicare." The alert stated that "charging extra fees for already covered services abuses the trust of Medicare patients by making them pay again for services already paid for by Medicare." As an example, the alert referred to a Minnesota physician who paid a settlement and agreed to stop offering personal health care contracts to patients for annual fees of $600. According to HHS OIG, these contracts included at least some services that were already covered and reimbursable by Medicare.
The alert advised participating physicians that they could be subject to civil monetary penalties if they requested payment from Medicare beneficiaries for those services in addition to the relevant deductibles and coinsurance charged for these services. In addition, the alert noted that nonparticipating physicians may also be subject to penalties for overcharging beneficiaries for covered services. Unless a concierge physician opts out of Medicare, the question of Medicare coverage is central to whether a concierge care agreement complies with the program's limits on patient charges. HHS OIG's March 2004 alert provided three examples of services offered by the physician in Minnesota: coordination of care with providers, a comprehensive assessment and plan for optimum health, and extra time spent on patient care. HHS OIG did not indicate which, if any, of those three services were already covered by Medicare. Because it was unclear which features of the Minnesota physician's concierge agreement formed the basis for HHS OIG's allegation that he violated the Medicare program's prohibition against charging beneficiaries more than the applicable deductible and coinsurance, the alert generated concern among some concierge physicians. According to HHS OIG officials, HHS OIG has not issued more detailed guidance on concierge care because its role in this area is to carry out specific delegated enforcement authorities, not to make policy. HHS OIG addresses each situation in its specific context. Physicians with questions about their own concierge care agreements may obtain guidance specific to them from HHS by requesting an advisory opinion. HHS OIG's Industry Guidance Branch issues advisory opinions on matters that fall within its enforcement authority. That authority covers provisions of Medicare law that prohibit knowingly presenting a beneficiary with a request for payment in violation of a physician's participation agreement. Consequently, any participating physician who operates or is considering starting a concierge practice could request an advisory opinion. Advisory opinions are legally binding on HHS and the requesting party as long as the arrangement is consistent with the facts provided. The process involves a written request that meets certain requirements, plus a fee. Advisory opinions are not available for hypothetical situations, "model" situations, or general questions of interpretation. Officials with HHS OIG reported that as of May 2005, the Industry Guidance Branch had received very few inquiries regarding advisory opinions about concierge care agreements, and no opinions had been issued on this subject. Most of the physicians who responded to our survey indicated that more guidance from HHS on how Medicare requirements might affect concierge care is needed. Although about one-fourth of respondents said that the information available from HHS was clear and sufficient, more than half reported that it was not. Of those who reported that the guidance was not clear and sufficient, about one-third stated that information was available from other sources, including private attorneys, the Society for Innovative Medical Practice Design, and concierge care consultants (see table 3). Medicare compliance is an important consideration in how concierge physicians set up their practices. For example, concierge physicians should avoid including services covered by Medicare in their concierge agreements to ensure that no additional charges are associated with those services.
Different strategies have been undertaken to accomplish this. One such strategy emphasizes the convenience and availability of concierge physicians as the primary benefit of membership. Another strategy is to focus on preventive care, linking the membership payment only to screening that Medicare does not cover. Some concierge physicians opt out of Medicare, thus avoiding potential compliance problems; opting out requires physicians to forgo all Medicare reimbursement for 2 years. Most of the concierge physicians responding to our survey reported having patients who were Medicare beneficiaries; however, the numbers of beneficiary patients they reported as part of their concierge and previous nonconcierge practices are very small compared to the more than 40 million Medicare beneficiaries. Surveys and national sources of information on beneficiary access to care do not address the impact of concierge care directly. In the absence of direct measures of the impact of concierge care on Medicare beneficiaries' access to physician services, we reviewed available nationwide data and other indicators about beneficiaries' experiences overall. These sources showed that overall access to physician services has not changed substantially in recent years. Estimates provided by 105 of the respondents indicated that about two-thirds of the estimated 19,400 Medicare beneficiaries who were patients of these physicians in October 2004 were considered concierge patients. The rest were nonconcierge patients who were neither charged a fee nor offered enhanced services. Physicians who continued to see nonconcierge patients reported doing so for various reasons, including to ensure continuity of care for individuals who had not yet found a new physician and to maintain a practice consisting of both concierge and nonconcierge patients. On average, Medicare beneficiaries represented about 35 percent of the total number of patients—concierge and nonconcierge—that responding concierge physicians reported having in their care as of October 2004. Eight of the 105 physicians who provided this information reported having no Medicare beneficiaries in their practices at all; 36 reported treating some, but fewer than 100 Medicare beneficiaries among their patients; and 12 reported having 400 or more Medicare beneficiaries under their care (see fig. 6). Concierge physicians who responded to our survey reported that, on average, Medicare beneficiaries in their previous nonconcierge practices joined their concierge practices in about the same proportion as their patients overall. When physicians begin practicing concierge care, existing patients may choose not to become concierge patients. Patient counts provided by responding physicians indicate that, on average, Medicare and non-Medicare patients who were under their care before they began concierge care chose to join as concierge patients in roughly similar proportions. Table 4 shows the average numbers of Medicare and non-Medicare patients responding physicians reported were in their practices before and after their conversion to concierge care. The numbers of beneficiaries that responding concierge physicians reported in their practices are relatively small—for example, the total number of Medicare beneficiaries that 88 responding physicians reported treating before conversion to concierge care was fewer than 100,000—compared to the nation's more than 40 million Medicare beneficiaries.
Respondents reported engaging in a variety of activities to help Medicare beneficiaries choosing not to join the physician’s concierge practice find new physicians. These activities included designating a staff person to help with transition questions, referring patients to other physicians within a group practice, calling new physicians to discuss a patient’s medical history, and remaining available to treat all patients until they had found a new primary care physician. Additional activities reported include bringing a new physician into the practice to take on the concierge physician’s previous patients and speaking individually with each patient. We did not contact Medicare beneficiary patients of the concierge physicians in our survey to determine how many of them had sought or found new physicians. See appendix II for additional details on actions physicians reported taking to help Medicare patients who did not join their concierge practices to find new physicians. The number of concierge physicians, and the number of Medicare beneficiaries the physicians reported in their previous nonconcierge practices, are relatively small, and therefore national surveys of samples of Medicare beneficiaries are not likely to include many beneficiaries who come into contact with concierge care. In the absence of data to directly assess the impact of concierge care on Medicare beneficiaries’ access, however, national surveys can provide general information about the availability of physicians and beneficiary access to care. Overall, national surveys showed that Medicare beneficiary access to physician services has been good, in some cases better than access for individuals with private health insurance. Surveys targeting both Medicare beneficiaries and physicians revealed that overall access to physician services has not changed substantially in recent years. Most beneficiaries surveyed reported that they have not had a problem finding a primary care physician. Of those who did report a problem, only a small percentage attributed their difficulty to physicians’ refusing to take new Medicare patients. Most beneficiaries attributed problems to transportation barriers or their difficulty finding a physician they liked, not to a shortage of primary care physicians who accepted Medicare. Of physicians surveyed, most reported accepting at least some new Medicare patients. Analysis done by the Medicare Payment Advisory Commission of Medicare claims data also revealed that the number of physicians who treated Medicare patients grew at a more rapid pace than the Medicare beneficiary population from 1999 to 2003. Results from our review of Medicare claims data from April 2000 and April 2002 indicated increases throughout the country in both the percentage of beneficiaries who received physician services and the number of services provided to beneficiaries who were treated. Physician supply data from the Seattle, Boston, and Southeast Florida metropolitan areas, where we found concierge care is relatively prominent, suggested that physicians there were relatively plentiful. The ratio of physicians to overall population in each of these metropolitan areas exceeded the nationwide average for all metropolitan areas in 2001. Because concierge physicians treat fewer patients than do physicians in conventional practices, a community needs other available physicians to take on Medicare beneficiaries who choose not to join a concierge practice. 
Even in communities where the concierge physician population was largest, however, the number of concierge physicians we identified was small compared with the physician population as a whole. CMS officials informed us that CMS has not established a special tracking system for beneficiary complaints about concierge care because the practice is not sufficiently widespread to raise concerns about access to care. Similarly, officials with call centers for 1-800-MEDICARE and CMS contractors handling beneficiary inquiries and complaints reported that they have received a small number of calls from beneficiaries about concierge care. Because of the low volume of calls on this subject, the majority of these call centers do not have tracking codes for responses to calls about concierge care. Of the 15 CMS contractors who process claims for physician services and responded to our inquiry, only 1 reported establishing a code to track concierge care inquiries. This contractor established the tracking code in response to our inquiry about concierge care in February 2005. As of April 2005, none of this contractor's call centers reported receiving any beneficiary calls about concierge care. Because of the relatively high number of concierge physicians in the Seattle metropolitan area, CMS's Seattle regional office has been following concierge care, but so far it has not identified an impact on Medicare beneficiaries' access to care. The Seattle office's efforts are part of an agencywide effort to monitor beneficiary access to care through reports in the media and from the CMS divisions that interact with beneficiaries. According to CMS officials in the agency's Seattle regional office, that office has received a small number of calls about concierge care from physicians and beneficiaries, mainly asking whether concierge care is permitted under Medicare law. Seattle regional office officials said they respond in accordance with CMS guidelines: they do not review specific concierge care agreements but help beneficiaries by providing a list of local physicians who participate in Medicare. The CMS Seattle regional office has not found indications that beneficiaries who choose not to pay their physician's membership fees have had problems locating new primary care physicians. We did not contact Medicare beneficiaries who were patients of physicians who converted to concierge care to determine how many of them had sought or found new physicians. We did, however, contact organizations that Medicare beneficiaries might call with problems or concerns, including AARP and the Medicare Rights Center. Like CMS, officials with these organizations reported receiving a few calls from beneficiaries about concierge care, and none reported complaints from beneficiaries about finding a physician or about access to services because of concierge care. Officials with these groups also reported that they have not developed a formal system to track the issue. According to officials from these organizations, calls from beneficiaries about concierge care are usually requests for help interpreting the letters from their physicians explaining the physicians' conversion to concierge care. Although the number of physicians practicing concierge care has grown in recent years, the total number remains very small. Available measures of Medicare beneficiaries' overall access to care, while not directly addressing concierge care, indicate widespread availability of physicians to treat them.
The small number of concierge physicians at the time of our review, along with information from available measures of access to services, suggests that concierge care does not present a systemic access problem for Medicare beneficiaries at this time. We provided a draft of this report for comment to HHS. In its comments, HHS agreed that concierge care has had a minimal impact on beneficiary access to physician services at this time. HHS noted, however, that the agency is interested in developments in concierge care and will continue to follow this area and to evaluate whether any further steps are indicated. See appendix III for HHS's written comments. HHS also provided technical comments, which we incorporated where appropriate. We also provided a draft to the Society for Innovative Medical Practice Design, formerly the American Society of Concierge Physicians, which had no comments. We are sending copies of this report to the Secretary of HHS, the Inspector General of HHS, the Administrator of CMS, and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7119 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To obtain information on the characteristics of concierge care, we surveyed concierge physicians about their practices and the types of services and financial arrangements they offer. Because no comprehensive directory of concierge physicians was available, we compiled our own list of concierge physicians to survey. We focused our survey on physicians who, as of October 2004, (1) had established a direct financial relationship with patients in the form of a membership or retainer fee and (2) provided enhanced services or amenities, such as same-day appointments or preventive services not covered by patient health insurance. We identified concierge physicians through a variety of methods, including a nationwide literature search, telephone interviews, and referrals from other concierge physicians. With the assistance of a contractor, we compiled an initial list of potential survey participants, contacted them to confirm that they met the criteria for inclusion in our survey, and requested referrals to additional concierge physicians. We used a variety of sources to establish our initial list of potential survey participants, including a nationwide Internet search of articles in newspapers, business journals, and medical publications; attendance at the first annual meeting of the American Society of Concierge Physicians (now known as the Society for Innovative Medical Practice Design); and a list of physicians affiliated with a consulting firm that helps physicians establish and maintain concierge practices. This process yielded a final mailing list of 187 individuals. We mailed the questionnaires in November 2004, after pretesting the questionnaire with concierge physicians and incorporating suggestions from several reviewers familiar with concierge care; we followed up with nonrespondents during December 2004 and January 2005. Two questionnaires were returned as undeliverable; we removed those names from our total count of potential concierge physicians.
The total we used to calculate the response rate for our survey was therefore 185. We received responses to our survey from 129 physicians, yielding an overall response rate of 70 percent. Of the respondents, 112 physicians confirmed that they practiced concierge care—that is, they reported that they charged a retainer or membership fee for enhanced services or amenities—as of October 2004. We analyzed only the information provided by these 112 physicians. Because these 112 respondents were not randomly sampled from a larger population of known concierge physicians, the information they provided cannot be projected to any other concierge physicians. We did not attempt to verify the accuracy of their responses. In addition to the 112 physicians practicing concierge care in October 2004 and responding to our survey, we confirmed—through, for example, telephone interviews conducted by us or our contractor—the concierge status of an additional 34 physicians who did not return our questionnaire. This process yielded a total of 146 confirmed concierge physicians. To analyze the geographic practice locations of these 146 physicians, we assigned the physicians’ zip codes to larger geographic units called metropolitan statistical areas (MSA) or primary metropolitan statistical areas (PMSA), as defined in 1999 by the Office of Management and Budget. To review the aspects of concierge care of interest to the Medicare program and its beneficiaries, we reviewed relevant provisions of Medicare law and documents from the Department of Health and Human Services (HHS), including Centers for Medicare & Medicaid Services (CMS) policy manuals and internal memorandums, information posted on the CMS Web site, an alert published by the HHS Office of Inspector General (OIG), and correspondence between interested parties and HHS officials regarding concierge care. We also interviewed CMS officials at CMS headquarters and in the Seattle regional office, officials with HHS OIG, and concierge physicians and their representatives and, in our survey, asked concierge physicians for their views on the guidance available from HHS on concierge care. To assess what is known about how concierge care might affect Medicare beneficiary access to physician services, we reviewed national surveys and reports on overall Medicare beneficiary access. Because so few physicians and beneficiaries are affected by concierge care, concierge physicians or their patients are unlikely to be randomly chosen to participate in surveys on access to physicians by Medicare beneficiaries. National surveys and analysis on beneficiary access to physician services are also not sufficiently detailed to address concierge care, but they can provide information about physician availability and beneficiary access to care overall. The sources we consulted targeted beneficiaries, physicians, or both and included the following: Bernard, Shulamit, et al. Medicare Fee-for-Service National Implementation Subgroup Analysis. Prepared for the Centers for Medicare & Medicaid Services. Research Triangle Park, N.C.: Research Triangle Institute, 2003. Center for Studying Health System Change. Community Tracking Study (CTS) Section Map. Washington, D.C.: October 2004. http://www.hschange.org/index.cgi?data=10 (downloaded October 2004). Centers for Medicare & Medicaid Services. Medicare Current Beneficiary Survey. Baltimore, Md.: September 2004. http://www.cms.hhs.gov/MCBS/default.asp (downloaded October 2004). GAO. 
Medicare Fee-for-Service Beneficiary Access to Physician Services: Trends in Utilization of Services, 2000 to 2002. GAO-05-145R. Washington, D.C.: January 12, 2005. Lake, Timothy, et al. Results from the 2003 Targeted Beneficiary Survey on Access to Physician Services among Medicare Beneficiaries. Prepared for the Centers for Medicare & Medicaid Services. Cambridge, Mass.: Mathematica Policy Research, Inc., 2004. Medicare Payment Advisory Commission. Report to the Congress: Medicare Payment Policy. Washington, D.C.: 2005. Schoenman, Julie, et al. 2002 Survey of Physicians about the Medicare Program. Prepared for the Medicare Payment Advisory Commission. Bethesda, Md.: Project HOPE Center for Health Affairs, 2003. Because concierge physicians generally treat fewer patients than physicians in conventional practices, we assessed community-level data on physician supply to see if other physicians might be available to take on Medicare beneficiaries who choose not to join a concierge practice. We calculated physician-to-population ratios in communities where we found the highest numbers of concierge physicians and compared them to the average ratio for all metropolitan areas in the United States. To calculate this ratio, we used data from a 2003 HHS Health Resources and Services Administration database known as the Area Resource File. This database included county-level data on active, nonfederal, office-based, patient-care physicians from the 2001 American Medical Association Physician Masterfile database and county-level resident population data from the U.S. Census Bureau for 2001, which we aggregated by MSA and PMSA. We did not contact Medicare beneficiaries who were patients of physicians who converted to concierge practices. We obtained information from organizations likely to receive calls from Medicare beneficiaries to determine whether individual beneficiaries were reporting concerns about concierge care or difficulty finding new physicians. We obtained and analyzed information from officials at CMS, call centers for 1-800-MEDICARE, and 15 of 18 CMS contractors that process Medicare claims for outpatient physician services. We spoke with representatives of AARP, the American Bar Association's Commission on Law and Aging, the Center for Medicare Advocacy, the Health Assistance Partnership of Families USA, and the Medicare Rights Center. We conducted our work in accordance with generally accepted government auditing standards from May 2004 through July 2005. This appendix summarizes the results from questions we asked physicians who practiced concierge care as of October 2004. We sent surveys to 185 physicians with valid addresses whom we had identified as potential concierge physicians. We obtained responses from 129 individuals, for an overall response rate of 70 percent, and analyzed the responses from 112 physicians who practiced concierge care in October 2004. The following tables and figures present information on reported characteristics of the 112 concierge physicians who responded to our survey and their practice settings (table 5), the estimated number of patients in their individual practices (table 6), goals for the total number of concierge patients when physicians' practices are fully established (fig. 7), annual membership fees charged by physicians who did and did not bill insurance (fig.
8), actions concierge physicians reported taking to help Medicare beneficiaries who did not join their concierge practices find new physicians (table 7), concierge physicians’ views on the sufficiency of HHS guidance on concierge care and Medicare (table 8), and concierge physicians’ views on remaining in medical practice and treating Medicare beneficiaries if concierge care were not an option (table 9). In addition to the person named above, key contributors to this report were Kim Yamane, Assistant Director; Ellen W. Chu; Jennifer DeYoung; Linda Y. A. McIver; Perry G. Parsons; Suzanne C. Rubins; Craig Winslow; and Suzanne Worth.
Concierge care is an approach to medical practice in which physicians charge their patients a membership fee in return for enhanced services or amenities. The recent emergence of concierge care has prompted federal concern about how the approach might affect beneficiaries of Medicare, the federal health insurance program for the aged and some disabled individuals. Concerns include the potential that membership fees may constitute additional charges for services that Medicare already pays physicians for and that concierge care may affect Medicare beneficiaries' access to physician services. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 directed GAO to study concierge care and its relationship to Medicare. Using a variety of methods, including a nationwide literature search and telephone interviews, GAO identified 146 concierge physicians and surveyed concierge physicians in fall 2004. GAO analyzed responses from 112 concierge physicians. GAO also reviewed relevant laws, policies, and available data on access to physician services and interviewed officials at the Department of Health and Human Services (HHS) and representatives of Medicare beneficiary advocacy groups. Concierge care is practiced by a small number of physicians located mainly on the East and West Coasts. Nearly all of the 112 concierge physicians responding to GAO's survey reported practicing primary care. Annual patient membership fees ranged from $60 to $15,000 a year, with about half of respondents reporting fees of $1,500 to $1,999. The most often reported features included same- or next-day appointments for nonurgent care, 24-hour telephone access, and periodic preventive care examinations. About three-fourths of respondents reported billing patient health insurance for covered services and, among those, almost all reported billing Medicare for covered services. Two principal aspects of concierge care are of interest to the Medicare program and its beneficiaries: compliance with Medicare requirements and its effect on beneficiary access to physician services. HHS has determined that concierge care arrangements are allowed as long as they do not violate any Medicare requirements; for example, the membership fee must not result in additional charges for items or services that Medicare already reimburses. Some concierge physicians reported to GAO that they would like more HHS guidance. The small number of concierge physicians makes it unlikely that the approach has contributed to widespread access problems. GAO's review of available information on beneficiaries' overall access to physician services suggests that concierge care does not present a systemic access problem among Medicare beneficiaries at this time. In comments on a draft version of this report, HHS agreed with GAO's finding on concierge care's impact on beneficiary access to physician services and indicated it will continue to follow developments in this area.
Internet access became widely available to residential users by the mid-1990s. For a few years, the primary mechanism to access the Internet was a dial-up connection, in which a standard telephone line is used to make an Internet connection. A dial-up connection offers data transmission speeds up to 56 kilobits per second (Kbps). Broadband, or high-speed, Internet access became available by the late 1990s. Broadband differs from a dial-up connection in certain important ways. First, broadband connections offer a higher-speed Internet connection than dial-up—for example, some broadband connections offer speeds exceeding 1 million bits per second (Mbps) both upstream (data transferred from the consumer to the Internet service provider) and downstream (data transferred from the Internet service provider to the consumer). These higher speeds enable consumers to receive information much faster and thus enable certain applications to be used and content to be accessed that might not be possible with a dial-up connection. Second, broadband provides an "always on" connection to the Internet, so users do not need to establish a connection to the Internet service provider each time they want to go online. Consumers can receive a broadband connection to the Internet through a variety of technologies. These technologies include, but are not limited to, the following: Cable modem. Cable television companies first began providing broadband service in the late 1990s over their hybrid-fiber coaxial networks. When provided by a cable company, broadband service is referred to as cable modem service. Cable providers were upgrading their infrastructure at that time to increase their capacity to provide video channels in response to competition from direct broadcast satellite (DBS) providers such as DirecTV® and Dish Network. By also redesigning their networks to provide for two-way data transmission, cable providers were able to use their systems to provide cable modem service. Cable modem service is primarily available in residential areas, and although the speed of service varies with many factors, download speeds of up to 6 Mbps are typical. Cable providers are developing even higher speed services. DSL. Local telephone companies provide digital subscriber line (DSL) service, another form of broadband service, over their telephone networks on capacity unused by traditional voice service. Local telephone companies began to deploy DSL service in the late 1990s—some believe, in part, as a response to the rollout of cable modem service. To provide DSL service, telephone companies must install equipment in their facilities and remove devices on phone lines that may cause interference. While most residential customers receive asymmetric DSL (ADSL) service with download speeds of 1.5 to 3 Mbps, ADSL technology can achieve speeds of up to 8 Mbps over short distances. Newer DSL technologies can support services with much higher download speeds. Satellite. Currently, three providers of satellite service can offer nearly ubiquitous broadband service in the United States. These providers use geosynchronous satellites that orbit in a fixed position above the equator and transmit and receive data directly to and from subscribers. Signals from satellites providing broadband service can be accessed as long as the user's reception dish has a clear view of the southern sky.
Therefore, while the footprint of the providers' transmission covers most of the country, a person living in an apartment with windows only facing north, or a person living in a house in a heavily wooded area, might not be able to receive Internet access via satellite. Earlier Internet services via satellite could only receive Internet traffic downstream—that is, from the satellite to the subscriber—and upstream Internet traffic was transmitted through a standard telephone line connection. Currently, however, satellite companies provide both upstream and downstream connections via satellite, eliminating the need for a telephone line connection and speeding the overall rate of service. Transmission of data via satellite typically adds one-half to three-fourths of a second, causing a slight lag in transmission and rendering this service less well-suited for certain applications over the Internet. While satellite broadband service may be available throughout the country, the price for this service is generally higher than for most other broadband modes; both the equipment necessary for service and recurring monthly fees are generally higher for satellite broadband service, compared with most other broadband transmission modes. Wireless. Land-based, or terrestrial, wireless networks can offer a broadband connection to the Internet from a wide variety of locations and in a variety of ways. Some services are provided over unlicensed spectrum and others over spectrum that has been licensed to particular companies. In licensed bands, some companies are offering fixed wireless broadband throughout cities. Also, mobile telephone carriers—such as the large companies that provide traditional cell phone service—have begun offering broadband mobile wireless Internet service over licensed spectrum—a service that allows subscribers to access the Internet with their mobile phones or laptops as they travel across cities where their provider supports the service. Such services are becoming widely deployed and are increasingly able to offer high-speed services. A variety of broadband access technologies and services are also provided on unlicensed spectrum—that is, spectrum that is not specifically under license for a particular provider's network. For example, wireless Internet service providers generally offer broadband access in particular areas by placing a network of antennae that relay signals throughout the network. Subscribers place necessary reception equipment outside their homes that will transmit and receive signals from the nearest antenna. Also, wireless fidelity (Wi-Fi) networks—which provide broadband service in so-called "hot spots," or areas up to 300 feet—can be found in cafes, hotels, airports, and offices. Some technologies, such as Worldwide Interoperability for Microwave Access (WiMAX), can operate on either licensed or unlicensed bands, and can provide broadband service up to approximately 30 miles in a line-of-sight environment.
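The transmission delay noted above for satellite service is largely a consequence of distance: a geosynchronous satellite sits roughly 35,786 km above the equator, so every signal must cover that distance up and down at the speed of light. The back-of-the-envelope calculation below, which ignores slant range, processing, and queuing delays, shows why the added round-trip time is on the order of half a second.

```python
# Rough propagation-delay estimate for geosynchronous-satellite broadband.
# Assumes straight up-and-down paths at the speed of light; real slant ranges
# and processing add to the total, which is why observed delays run toward
# three-fourths of a second.
SPEED_OF_LIGHT_KM_PER_S = 299_792
GEO_ALTITUDE_KM = 35_786

one_hop = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_PER_S   # ground <-> satellite, one leg
one_way = 2 * one_hop                                  # subscriber -> satellite -> gateway
round_trip = 2 * one_way                               # request out plus response back

print(f"one-way delay:    ~{one_way:.2f} s")           # about 0.24 s
print(f"round-trip delay: ~{round_trip:.2f} s")        # about 0.48 s before other delays
```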
FCC's approach of not treating such services as telecommunications services has important legal implications because a service defined as a telecommunications service could be subject to regulation under Title II of the Communications Act, which imposes substantial common carrier regulations unless the commission chooses to forbear from their enforcement. As part of its responsibilities, FCC periodically issues a report to Congress on the status of advanced telecommunication capability in the United States. To prepare this report, FCC developed a periodic reporting requirement using Form 477. In November 2004, FCC modified its rules regarding the filing of Form 477, and the modified rules went into effect for the companies' second filing in 2005. Specifically, FCC removed existing reporting thresholds, and companies are now required to report their total state subscribership by technology. We found that in 2005, about 30 million American households—or 28 percent—subscribed to broadband, although households in rural areas were less likely to subscribe to broadband service than were households in urban and suburban areas. Households appear to subscribe to cable modem and DSL services—the two primary broadband services—in approximately equal numbers. FCC requires providers of broadband service to report on the geographic areas in which they serve subscribers, but these data are sometimes used to infer the status of deployment of companies' Internet infrastructure. Some stakeholders find FCC data collection efforts useful for comparing broadband adoption across states, but we found that the data may not be as useful for understanding the status of broadband deployment across the country. Based on survey data from 2005, we found that 28 percent of American households subscribe to broadband service. Figure 1 illustrates how American households access the Internet, by various technologies, and also shows the percentage of households that do not own a computer. As shown, 30 percent of American households subscribe to dial-up access, and about 41 percent of American households do not have an Internet connection from home. Of those households that do not access the Internet, more than 75 percent do not have a computer in the home, while the remaining households own a computer but do not have online access. Among online households, we found 50 percent subscribe to dial-up service, and 48 percent subscribe to a broadband service. Additionally, we found that of those households subscribing to a broadband service, roughly half purchase DSL service and half purchase cable modem service. (See fig. 2 for the types of connections purchased by online households.) Finally, we found that households residing in rural areas were less likely to subscribe to broadband service than were households residing in suburban and urban areas. Seventeen percent of rural households subscribe to broadband service, while 28 percent of suburban and 29 percent of urban households do. (See fig. 3 for the percentage of urban, suburban, and rural households purchasing broadband service.) We also found that rural households were slightly less likely to connect to the Internet, compared with their counterparts in suburban areas. In order to fulfill its responsibility under section 706 of the Telecommunications Act, FCC collects data on companies' broadband operations.
In early 2004, FCC initiated a proceeding to examine whether it should collect more detailed information for its broadband data gathering program than had previously been collected. Specifically, FCC asked whether it should do several things to enhance the broadband data, including (1) requiring providers to report the speeds of their broadband services, (2) eliminating the reporting threshold such that all providers of broadband—no matter how small—must report to FCC on their services, and (3) requiring that providers report the number of connections by zip code. In late 2004, FCC released an order in which it decided to require all broadband providers—no matter how small—to report in the Form 477 data collection effort and also required providers to report information about their services within speed tier categories. The commission decided not to require providers to report the number of connections (or subscribers) that they serve within each zip code or the number of connections in speed tiers or by technology within each zip code, finding that such a requirement would impose a large burden on filers (particularly smaller entities) and would require significant time to implement. In particular, several providers commented in the 2004 proceeding that it would be costly and burdensome to develop the software and systems to generate the detailed zip-code-level data and that the cost and burden of further reporting requirements would likely outweigh the benefits of more substantial information on broadband deployment in the United States. On the other hand, three state utility commissions asked FCC to gather more information within zip codes or by some other Census boundary because such information is, in their view, important for tracking broadband availability. Under the modified filing requirements, FCC collects, through its Form 477 filings, information on several aspects of each company's provision of broadband at the state level, such as the total number of subscribers served, the breakdown of how those subscribers are served by technology, and estimates within each technology of the percentage of subscribers that are residential. For each technology identified in the state reporting, providers also submit a list of the zip codes in which they serve at least one customer. As discussed above, companies do not report the number of subscribers served or whether subscribers are business or residential within each zip code; they also do not report information on the locations within the zip code that they can serve. In July 2005, FCC found that 99 percent of the country's population lives in the 95 percent of zip codes where at least one provider reported to FCC that it serves at least one high-speed subscriber as of December 31, 2004. FCC noted that in 83 percent of the nation's zip codes, subscribers are served by more than one provider, and that for roughly 40 percent of zip codes in the United States, there are five or more providers reporting high-speed lines in service. Although these data indicate that broadband availability is extensive, we found that FCC's 477 data may not be useful for assessing broadband deployment at the local level.
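The zip-code statistics just cited can be derived directly from presence-only filings of this kind. The sketch below uses a few invented filing records to show the roll-up; because a provider is counted in a zip code if it reports even one subscriber there, the resulting counts say nothing about how much of the zip code is actually served, which is the limitation discussed next.

```python
# Sketch of how presence-only, Form 477-style filings roll up into zip-code
# provider counts like those cited in the text. The records are invented; a
# real filing lists, for each provider and technology, the zip codes with at
# least one subscriber, without subscriber counts or served locations.
from collections import defaultdict

filings = [
    ("Provider A", "cable modem", ["40601", "40602", "40711"]),
    ("Provider B", "DSL",         ["40601", "40711"]),
    ("Provider C", "satellite",   ["40601", "40602", "40711", "41008"]),
]

providers_by_zip = defaultdict(set)
for provider, technology, zip_codes in filings:
    for zip_code in zip_codes:
        providers_by_zip[zip_code].add(provider)

reporting_zips = len(providers_by_zip)  # zips with at least one reporting provider
multi_provider = sum(1 for p in providers_by_zip.values() if len(p) > 1)
five_or_more = sum(1 for p in providers_by_zip.values() if len(p) >= 5)

print(f"zip codes with at least one reporting provider: {reporting_zips}")
print(f"share of those zips with more than one provider: {multi_provider / reporting_zips:.0%}")
print(f"share of those zips with five or more providers: {five_or_more / reporting_zips:.0%}")
# A provider appears in a zip code even if it serves a single business customer
# there, so these counts can overstate residential availability and competition.
```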
While FCC, in general, notes that the 477 zip-code data are not meant to measure deployment of broadband, in its July 2005 report, the commission states that in order to be able to evaluate deployment, the commission “instituted a formal data collection program to gather standardized information about subscribership to high speed services. . . .” (Emphasis added. ) Based on our analysis, we found that collecting data about where companies have subscribers may not provide a clear depiction of their deployment, particularly in the context of understanding the availability of broadband for residential users. One quandary in analyzing broadband deployment is how to consider the availability of satellite broadband services. Even though broadband over satellite may not be seen by some as highly substitutable for other broadband technologies because of certain technical characteristics or because of its higher cost, satellite broadband service is deployed: Three companies have infrastructure in place to provide service to most of the country. The actual purchase of satellite broadband is scattered throughout the country and shows up in FCC’s 477 zip-code data only where someone actually purchases the service. It is not clear how satellite service should be judged in terms of deployment. Since it is available throughout the entire country, one view could be that broadband is near fully deployed. Alternatively, it could be viewed that satellite broadband— while available in most areas—does not reflect localized deployment of broadband infrastructure and should therefore not be counted as a deployed broadband option at all. In either case, FCC’s zip-code data on satellite broadband—which is based on the pattern of scattered subscribership to this service—does not seem to be an appropriate indicator of its availability. Aside from the question of how to view satellite deployment, other issues arise in using subscribership indicators for wire or wireless land-based providers at the zip-code level as an indicator of deployment. These issues include the following: Because a company will report service in a zip code if it serves just one person or one institution in that zip code, stakeholders told us that this method may overstate deployment in the sense that it can be taken to imply that there is deployment throughout the zip code even if deployment is very localized. We were told this issue might particularly occur in rural areas where zip codes generally cover a large geographic area. Based on our own analysis, we found, for example, that in some zip codes more than one of the large established cable companies reported service. Because such providers rarely have overlapping service territories, this likely indicates that their deployment was not zip-code-wide and that the number of providers reported in the zip code overstates the level of competition to individual households. We also found that a nontrivial percentage of households lie beyond the 3-mile radius of their telephone central office, indicating that DSL service was unlikely to be available to these homes. Companies report service in a zip code even if they only serve businesses. One academic expert we interviewed expressed a concern about this issue. Based on our own analysis, we found that many of the companies filing 477 data indicating service in particular zip codes only served business customers. 
As such, the number of providers reported as serving many zip codes is likely overstated in terms of the availability of broadband to residences. FCC requires that companies providing broadband using unbundled network elements (UNE) report their broadband service in the zip code data. When providers serve customers using UNEs, they purchase or lease underlying telecommunications facilities from other providers—usually incumbent telephone companies—to serve their customers. Having these providers report their subscribers at the state level is important to ensure that accurate totals of broadband subscribers are obtained. However, while UNE providers may make investments in infrastructure, such as in collocation equipment, they do not generally own or provide last-mile connectivity for Internet access. Thus, counting these providers in the zip-code-level data may overstate the extent of local infrastructure deployment in the sense that several reporting providers could be relying on the same infrastructure, owned by the incumbent telephone company, to provide broadband access. Based on our analysis, we believe that the use of subscriber indicators at the zip-code level to imply availability, or deployment, may overstate terrestrially based deployment. We were able to check these findings for one state—Kentucky—where ConnectKentucky, a state alliance on broadband, had done an extensive analysis of its broadband deployment. ConnectKentucky officials shared data with us indicating that approximately 77 percent of households in the state had broadband access available as of mid-2005. In contrast, we used population data within all zip codes in Kentucky, along with FCC's 477 zip-code data for that state, and determined that, according to FCC's data, 96 percent of households in Kentucky live in zip codes with broadband service at the end of 2004. Thus, based on the experience in Kentucky, it appears that FCC's data may overstate the availability and competitive deployment of nonsatellite broadband. Additionally, to prepare our econometric models, we relied on FCC's 477 data to assess the number of providers serving the households responding to Knowledge Networks/SRI's survey. Based on FCC's data, we found that the median number of providers reporting that they serve zip codes where the households were located was 8; in 30 percent of these zip codes, 10 or more providers report that they provide service. Only 1 percent of respondents lived in zip codes for which no broadband providers reported serving at least one subscriber, according to FCC's data. To better reflect the actual number of providers that each of the survey respondents had available at their residence, we made a number of adjustments to FCC's provider count based on our analysis of the providers, certain geographic considerations, and information provided by the survey respondents. After making these adjustments, the median number of providers for the respondents fell to just 2, and we found that 9 percent of respondents likely had no providers of broadband at all. Despite these concerns about FCC's 477 data, several stakeholders, including a state regulatory office and a state industry association, said they found FCC's data useful. An official at a state governor's office also noted that analysis of FCC data allowed them to draw conclusions about the extent of deployment in their state.
Similarly, an official in another governor's office said that they use FCC's data to benchmark the accessibility of broadband in their state because it is the only data available. A few academic experts also told us that they use FCC's data. Several market characteristics appear to influence providers' broadband deployment decisions. In particular, factors related to the cost of deploying and providing broadband services, as well as factors related to consumer demand, were critical to companies' decisions about whether to deploy broadband infrastructure. At the same time, certain technical factors related to specific modes of providing broadband service influence how and where this service can be provided. Finally, a variety of federal and state government activities, as well as access to resources at the local level, have influenced the deployment of broadband infrastructure. As companies weigh investment decisions, they consider the likely profitability of their investments. In particular, because broadband deployment requires substantial investment, potential providers evaluate the cost to build and operate the infrastructure, as well as the likely demand—that is, the expected number of customers who will purchase broadband service at a given price—for their service. Based on our interviews, we found that several cost and demand factors influence providers' deployment decisions. The most frequently cited cost factor affecting broadband deployment was the population density of a market. Many stakeholders, including broadband providers, state regulators, and state legislators, said population density—which is the population per square mile—was a critical determinant of companies' deployment decisions. In particular, we were told that the cost of building a broadband infrastructure in areas where people live farther apart is much higher than the cost of building infrastructure to serve the same number of people in a more urban setting. As such, some stakeholders noted that highly rural areas—which generally have low population density—can be costly to serve. Results from our econometric model confirm the views of these stakeholders. We found that densely populated and more urbanized locations were more likely to receive broadband service than were less densely populated and rural locations. For example, we found that urban areas were 9 percentage points more likely to have broadband service available than were rural areas. Terrain was also frequently cited as a factor affecting broadband deployment decisions. In particular, we were told that infrastructure build-out can be difficult in mountainous and forested areas because these areas may be difficult to reach or difficult on which to deploy the required equipment. Conversely, we were told that flat terrain constitutes good geography for telecommunications deployment. For wireless providers, we were told that terrain concerns can present particular challenges because some wireless technologies require "line-of-sight," meaning that radio signals transmitted from towers or antennas need an unobstructed pathway—with no mountains, trees, or buildings—from the transmission site to the reception devices at users' premises. Terrain can also affect satellite broadband availability in rural areas that have rolling hills or many trees that can obstruct a satellite's signal. Some stakeholders also said costs for what is known as "backhaul" are higher for rural areas and can affect the deployment of broadband networks in these areas.
Backhaul refers to the transmission of information—or data—from any of a company's aggregation points to an Internet backbone provider that will then transmit that data to any point on the Internet. This is also sometimes referred to as the "middle mile." Internet traffic originating from rural areas may need to travel a long distance to a larger city to connect to a major Internet backbone provider. Because the cost of transmitting over this distance—that is, the backhaul—can be high, one stakeholder noted that backhaul costs are another barrier to deployment in rural areas. However, using our econometric model, we did not find that the distance to a metropolitan area with a population of 250,000 or more—our proxy for backhaul—was associated with a lower likelihood of broadband deployment. In Alaska, backhaul from rural villages requires the use of satellite. This type of backhaul is costly because of the need for terrestrial infrastructure to send and receive signals from satellites, as well as the need to either own or lease a satellite transmitter. The high cost can affect whether providers deploy broadband service in villages. To help defray this cost, providers often look to serve an "anchor tenant" in a village, such as a school or health clinic that receives federal funding. Based on our interviews with stakeholders, we found that certain demand factors influence providers' deployment decisions. In particular, because stakeholders noted that potential providers seek to deploy in markets where demand for their service will be sufficient to yield substantial revenues, certain elements of markets were identified as affecting the demand for broadband: Ability to aggregate demand. For rural locations, officials we spoke with stressed the importance of aggregating sufficient demand. For example, officials in one state told us that to justify the cost of deployment in rural areas where population density is low, telecommunications providers need to be able to aggregate all of the possible demand to make a business case. We were also told that aggregation is furthered by ensuring that a large "anchor tenant" will subscribe to the service. Possible anchor-tenant customers described to us included large businesses, government agencies, health-care facilities, and schools. Because the revenues from such customers will be significant, two stakeholders noted that the anchor tenant alone will help to cover a substantial portion of the providers' expenses. Degree of competition. We found that the degree of existing broadband competition in a local market can inhibit or encourage deployment, depending on the circumstances. Some new entrants—companies not already providing a telecommunications service in an area—reported that they avoid entering markets with several existing providers and seek out markets where incumbent telephone and cable companies do not provide broadband service. The lack of existing service gives the entrant the potential to capture many customers. At the same time, stakeholders told us that deployment by a new entrant often spurred incumbent telephone or cable providers to upgrade their infrastructures so as to compete with the entrant in the broadband market. Technological expertise. A few stakeholders noted that demand will be greater in areas where potential customers are familiar with computers and broadband, as these individuals are more likely to purchase broadband service. Stakeholders we spoke with rarely mentioned the per-capita income of a service area as a factor determining deployment.
In fact, a few stakeholders credited cable franchising requirements with ensuring deployment to low-income areas; in some cases, cable franchise agreements require cable providers to build out to all parts of the service territory. However, a 2004 study did find that areas with higher median incomes were more likely to have competitive broadband systems. Similarly, results from our econometric analysis indicate that areas with higher per-capita income are more likely to receive broadband service than are areas with lower per-capita income. Using our econometric model, we did not find that taxation of Internet access by state governments influenced the deployment of broadband service. Taxes can raise consumer prices, reduce revenues, and impose costs on providers, thereby possibly reducing the incentive for companies to deliver a product or service. To assess the impact of Internet taxes on broadband deployment, we contacted officials in 48 states and the District of Columbia to determine whether the state, or local governments in the state, imposed taxes on Internet access. To incorporate this analysis into our model, we used a binary variable to indicate the presence of the tax; that is, each state was placed into one of two groups, states with a tax and states without a tax. As such, this binary variable could potentially capture the influence of other characteristics of the states, in addition to the influence of the tax. While the parameter estimate in our model had the expected sign—indicating that the imposition of taxes may reduce the likelihood of broadband deployment—it was not statistically significant. Many stakeholders we spoke with commented on issues related to technical characteristics of networks that provide broadband. In particular, many noted that certain technical characteristics of methods used to deliver broadband influence the extent of its availability. In terms of issues discussed for established modes of broadband delivery, we were told the following: DSL service can generally be provided over telephone companies' copper plant to residences and businesses that are within approximately 3 miles of the telephone company's facility, known as a central office. However, if the quality of the telephone line is not good, the distance limit can be reduced—that is, it may only be possible to provide DSL for locations within some lesser distance—perhaps 2 miles—from a central office. We were told, for example, that in some rural areas, DSL is more limited by distance because the telephone lines may be older. While the distance limit of DSL can be addressed by deploying certain additional equipment that extends fiber further into the network, this process can be expensive and time consuming. Across spectrum bands used to provide terrestrial wireless broadband service, technical characteristics vary: In some cases, signals may travel only a short distance, and in other cases, they may travel across an entire city; in some cases there may be a need for line-of-sight from the transmission tower to the user, but in other cases, the signals may be able to travel through walls and trees. Some stakeholders mentioned that wireless methods hold great promise for supporting broadband service. Satellite technology can provide a high-speed Internet service throughout most of the United States.
However, the most economical package of satellite broadband service generally offers, at this time, upstream speeds of less than 200 kilobits per second, and therefore this service does not necessarily meet FCC’s definition of advanced telecommunications services, while it does meet FCC’s definition of high-speed service. Despite the near universal coverage of satellite service, consumers need a clear view of the southern sky to be able to receive transmissions from the satellites. Additionally, transmission via satellite introduces a slight delay, which causes certain applications, such as VoIP (i.e., telephone service over the Internet), and certain computer gaming to be ill-suited for use over satellite broadband. Some emerging or expanding broadband technologies do not currently have significant subscribership, but have the potential to be important means of broadband service in the coming years. These technologies include deep fiber deployment (e.g., fiber to the home), WiMAX, broadband over power lines (BPL), and third-generation (3G) cellular. Each of these technologies has technical considerations that are influencing the nature of deployment. See appendix IV for a discussion of these technologies. We found that government involvement in several venues, and access to resources at the local level, have affected the deployment of broadband networks throughout the nation. In particular, we found that (1) certain federal programs have provided funding for broadband networks; (2) some state programs have assisted deployment; (3) state and local government policies, as well as access to resources at the local level, can influence broadband deployment; and (4) broadband deployment—particularly in more rural settings—is often influenced by the extent of involvement and leadership exercised by local government and community officials. We found that several federal programs have provided significant financial assistance for broadband infrastructure. The Universal Service Fund (USF) has programs to support improved telecommunications services. The high-cost program of the USF provides eligible local telephone companies with funds to serve customers in remote or rural areas where the cost of providing service is higher than the cost of service in more urbanized areas. The high-cost funds are distributed to providers according to formulas based on several factors, such as the cost of providing service, with funds distributed to small rural incumbent local exchange carriers (ILEC) and larger ILECs serving rural areas based on different formulas. Competitive local exchange carriers can also qualify to receive high-cost funds. While high-cost funds are not specifically targeted to support the deployment of broadband infrastructure, these funds do support telecommunications infrastructure that is also used to provide broadband services. We were told by some stakeholders in certain states that high-cost support has been very important for the upgrade of telecommunications networks and the provision of broadband services. In particular, some stakeholders in Alaska, Ohio, and North Dakota told us that high-cost support has been critical to small telephone companies’ ability to upgrade networks and provide broadband services. Additionally, the e-rate program of the USF has provided billions of dollars in support of Internet connectivity for schools and libraries. Another USF program, the Rural Health Care Program, provides assistance for rural health facilities’ telecommunications services. 
Some programs of the U.S. Department of Agriculture's Rural Utilities Service (RUS) provide grants and loans to improve rural infrastructure used to provide broadband service. The Community Connect Program provides grants to deploy transmission infrastructures to provide broadband service in communities where no broadband services exist, and requires grantees to wire specific community facilities and provide free access to broadband services in those facilities for at least 2 years. Grants can be awarded to entities that want to serve a rural area of fewer than 20,000 residents. Approximately $9 million was appropriated for this purpose in each of 2004 and 2005. RUS's Rural Broadband Access Loan and Loan Guarantee program provides loans to eligible applicants to deploy infrastructures that provide broadband service in rural communities that meet the program's eligibility requirements. A wide variety of entities are eligible to obtain loans to serve small rural communities. To obtain a 4 percent loan, the applicant must plan on serving a community with no previously available broadband service, but loans at the Treasury interest rate do not have such a requirement. The Appalachian Regional Commission's Information Age Appalachia program focuses on assisting in the development and use of telecommunications infrastructure. The program also provides funding to assist in education and training, e-commerce readiness, and technology-sector job creation. We were told that in Kentucky, funding from the commission assisted the development and operations of ConnectKentucky, a state alliance that focuses on broadband deployment and adoption. The Appalachian Regional Commission also provided some funding for projects in Ohio and Virginia. A number of states we visited have had programs to assist the deployment of broadband services, including the following: The Texas Telecommunications Infrastructure Fund began in 1996 and, according to an official of the Texas Public Utility Commission, committed to spending $1 billion on telecommunications infrastructure in Texas. Public libraries, schools, nonprofit medical facilities, and higher education institutions received grants for infrastructure and connectivity to advanced communications technology. The program is no longer operational. ConnectKentucky is an alliance of technology-focused businesses, government entities, and universities that work together to accelerate broadband deployment in the state. ConnectKentucky focuses on three goals: (1) raising public awareness of broadband services, (2) creating market-driven strategies to increase demand, particularly in rural areas, and (3) initiating policy to reduce regulatory barriers to broadband deployment. The Virginia Tobacco Indemnification and Community Revitalization Commission partially funded Virginia's Regional Backbone Initiative. The backbone initiative is designed to stimulate economic development opportunities by encouraging the creation of new technology-based business and industry in southern Virginia, which has historically relied heavily on tobacco production. The ability of a company to access local rights-of-way, telephone and electric poles, and wireless-tower sites can influence the deployment of broadband service. In particular, a few stakeholders we spoke with said difficulty in gaining access to these resources can serve as a barrier to the rapid deployment of broadband service because accessing these resources can be a time-consuming and expensive process.
Companies often require access to rights-of-way—such as areas along public roads—in order to install infrastructure for broadband service. In some instances, companies can face challenges gaining access to rights-of-way, which can hinder broadband deployment. For example, we were told that in one California community, providers had difficulty bringing wires across a highway, which limited their ability to provide service in all areas of the community. Some companies also require access to telephone and electric poles to install their broadband infrastructure. Depending on the entity owning the pole, we were told that acquiring access to poles could be costly and time consuming. For example, one BPL provider told us that it encountered difficulty accessing poles owned by the telephone company. Finally, wireless companies need access to towers or sites on which they can install facilities for their broadband infrastructure. A few stakeholders we spoke with told us that gaining this access can be a difficult process. For example, one company said providers are often challenged by the need to learn each town’s tower-siting rules. While some stakeholders identified problems gaining access to these resources, other stakeholders did not identify access to rights-of-way, poles, and other resources as issues in deploying broadband services. We found that the video-franchising process can also influence the deployment of broadband service because companies may be building infrastructure to simultaneously provide both video and broadband services. To provide video service, such as cable television, companies usually must obtain a franchise agreement from a local government. Some stakeholders assert that the video-franchising process can delay the deployment of broadband service because providers must negotiate with a large number of local jurisdictions. Further, these negotiations can be time consuming and costly. As a result, these stakeholders believe that local franchising can hinder their ability to deploy broadband infrastructures. Alternatively, some stakeholders believe that the video-franchising process is important because it helps promote deployment of broadband service to all areas of a community. For example, some jurisdictions have ubiquity requirements mandating deployment to all areas of a community, including those that are less affluent. These stakeholders argue that without the local ubiquity requirement, service providers could “cherry pick” and exclusively provide broadband services to more economically desirable areas. In some instances, municipal governments provide broadband infrastructure and service. For example, we spoke with officials in five municipal governments that provide wire-based broadband service, often in conjunction with the government’s electric utility. We also spoke with one municipal government that provided wireless broadband service. A few of these municipal government officials told us that their municipality had undertaken this deployment because they believe that their communities either do not have, or do not have adequate, private broadband service. A significant number of stakeholders we interviewed support a municipality’s right to provide broadband services and believe that broadband service is a public utility, such as water and sewer. 
Some support municipal deployment of broadband, regardless of whether other providers are available in that area, while other stakeholders support a municipality's right to deploy broadband service only if there are no other broadband providers serving the area. However, other stakeholders we spoke with oppose municipal government deployment of broadband service. These stakeholders believe that municipal governments are not prepared to be in the business of providing broadband and that municipal deployment can hinder private-sector deployment. We found that the cost of serving rural areas presents a challenge to the nationwide goal of universal access to broadband. One of the ways that some communities have addressed the lack of market entry into rural areas has been through initiatives wherein community leaders have worked to enhance the likely market success of private providers' entry into rural broadband markets. For example, some community leaders have worked to aggregate demand—that is, to coordinate the Internet needs of various users so that a potential entrant would be able to support a business plan. We were told that this leadership—sometimes by key government officials, sometimes through partnerships—was seen as critical in helping to spur the market in some unserved areas. The following examples illustrate this point: In Massachusetts, several regional coalitions that have been called "connect" projects focus on demand aggregation as a tool to encourage further deployment of telecommunications backbone and broadband networks in more rural parts of the state that were not well served by broadband providers. In particular, three such regional groups said their demand aggregation model is designed to maximize the purchase of broadband services in their region by working with local hospitals, schools, home businesses, small businesses, and residents to demonstrate the full extent of the demand for broadband and thus encourage private investment in infrastructure. For the one project that was the most developed, a few stakeholders told us that the group had been critical in helping to spur infrastructure development in the area, and that leadership by state government was important to the development of the initiative. ConnectKentucky, as discussed earlier, is an example of a state coalition taking a leadership role to develop information on state deployment levels, educate citizens about the benefits of broadband service, and advocate broadband-friendly policies with the state legislature. Throughout our meetings in Kentucky, the work of ConnectKentucky was described as instrumental in the development of a common understanding of the state of broadband deployment and adoption as well as in instigating new initiatives to advance the market. The key element of ConnectKentucky that was cited as crucial to its success was leadership from state government, in particular from the governor's office. In Alaska, we found that in one remote area—Kotzebue, a community 26 miles above the Arctic Circle—strong local leadership was important to the development of a public-private partnership that provides improved medical care to the region. The local leadership from the health cooperative brought together parties in the community and worked with them to develop a plan to provide enhanced health service throughout the community's villages.
The Maniilaq Health Center uses a wireless "telecart" with a video camera that can send high-quality, real-time sound and video between the center and Anchorage. The center's physicians are able to perform procedures under the guidance of experts in Anchorage who can "remotely" look over the physicians' shoulders. In addition, there are village clinics staffed by trained village health aides. These village clinics are connected to the main health center via a broadband link that allows them to share records and diagnoses via the telecart. We developed an econometric model to assess the many factors that might influence whether a household purchases broadband service. The model examined two types of factors: the tax status of states in which respondents live, and the characteristics of households. We also discussed these issues, as well as the influence of characteristics and uses of broadband service, with stakeholders. Based on our model and interviews with stakeholders, we identified several characteristics of households that influence broadband adoption. First, our model indicated that high-income households are 39 percentage points more likely to purchase broadband service than are low-income households. Similarly, some stakeholders we spoke with stated that adoption of broadband service is more widespread in communities with high income levels. A key underlying factor may be that computer ownership is substantially higher among higher-income households, according to a survey conducted by the Census Bureau. Second, our model results showed that households with a college graduate are 12 percentage points more likely to subscribe to broadband services compared with households without a college graduate. In fact, when discussing the effects of education on the demand for broadband, we were told that some college graduates see broadband as a necessity and would be less likely to choose to live in a rural area that did not have adequate broadband facilities. Third, we found that households headed by young adults are more likely to purchase broadband than are households headed by a person 50 or older. Similarly, a few stakeholders we spoke with said that older adults are less likely to purchase broadband. This may be the case because older Americans generally have lower levels of computer ownership and computer familiarity. We also were told that households with children in school are more likely to have broadband service. Figure 4 provides some descriptive statistics to illustrate the relationship between several demographic characteristics and the adoption of broadband. We also examined whether households residing in rural areas were less likely to purchase broadband service than those living in urban areas. As noted earlier, we found that only 17 percent of rural households subscribe to broadband service. Our model indicated, however, that when the availability of broadband to households and demographic characteristics are taken into account, rural households no longer appear less likely than urban households to subscribe to broadband. That is, the difference in the subscribership to broadband among urban and rural households appears to be related to the difference in availability of the service across these areas, and not to a lower propensity of rural households to purchase the service. 
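The percentage-point differences cited above can be read as differences in predicted probabilities from the probit adoption model described in appendix III. Schematically, for a single binary characteristic such as high versus low income—a simplified illustration, in which Φ is the standard normal distribution function, x stands for the household's other characteristics, and β and γ are estimated coefficients—the comparison takes the form:

```latex
\Delta P \;=\; \Phi\!\left(x'\beta + \gamma\right) \;-\; \Phi\!\left(x'\beta\right) \;\approx\; 0.39 ,
```

where the 0.39 corresponds to the roughly 39-percentage-point gap estimated between the highest- and lowest-income households, with other characteristics held constant.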
In addition to household characteristics, we also found that characteristics and uses of broadband service available to consumers can influence the extent to which households purchase broadband service. Some stakeholders we spoke with mentioned that the price of broadband service is an important factor affecting a household's decision to purchase this service. Some stakeholders mentioned, for example, that one of the key reasons for the recent surge in DSL subscribership is recent price declines for the service: Some providers are now offering DSL for less than $15 per month. Conversely, because satellite broadband service is expensive and also requires an upfront purchase of the equipment needed to receive the satellite signal, several of those we spoke with said that this expense deters its purchase. In fact, a recent study suggests that areas served by multiple providers, where prices may tend to be lower, may have higher rates of broadband adoption. However, because we lacked data on the price of broadband service, we were unable to include this variable in our econometric model. We did not find that the number of companies providing broadband service affected the likelihood that a household would purchase broadband service. Some stakeholders also told us that the availability of applications and content not easily accessible through dial-up, as well as the degree to which consumers are aware of and value this availability, contribute to a household's decision to adopt broadband. For example, some functions, applications, and content—such as gaming, VoIP, and music and video downloads—either require broadband service or function much more effectively with it than with dial-up service and, as such, make broadband a major attraction for households that value these types of services and content. Alternatively, some applications, such as e-mail, function adequately with dial-up service, and for households that primarily use the Internet for e-mail, there may be little need to upgrade to broadband service. Several of those we spoke with noted that a "killer application"—one that nearly everyone would view as essential and might entice more American households to adopt broadband—has not yet emerged. We also examined whether the tax status of the state in which each survey respondent lived influenced the respondent's likelihood of adopting broadband service. As mentioned earlier, we used a binary variable to represent the presence of Internet taxation. As such, the variable may capture the influence of other characteristics of the states in which the households resided, in addition to the influence of the tax. Further, lacking a variable for the price of broadband service, we cannot assess how the imposition of the tax influenced the price of the service. Using our model, we found that the parameter estimate had the expected sign—indicating that the imposition of the tax may have reduced the likelihood that a household would purchase broadband service. While the estimate was not statistically significant at the 5 percent level, it was statistically significant at the 10 percent level, perhaps suggesting that it is a weakly significant factor. However, given the nature of our model, it is unclear whether this finding is related to the tax or other characteristics of the states in which households resided. Stakeholders we spoke with identified several options to facilitate greater broadband service in unserved areas; however, each option poses special challenges. 
RUS broadband programs provide a possible means for targeted assistance to unserved areas, but stakeholders raised concerns about the effectiveness of the loan program and its eligibility criteria. USF programs have indirectly facilitated broadband deployment in rural areas, but it is unclear whether the program should be expanded to directly support broadband service. Finally, wireless technologies could help overcome some of the cost and technological limitations to providing service in remote locations, but congestion and the management of the spectrum remain possible barriers. As mentioned earlier, RUS provides support through grants and loans to improve rural infrastructures providing broadband service. The Community Connect Broadband grant program provides funding for communities where no broadband service currently exists. One loan program, which provides loans at 4 percent, also requires that no existing broadband providers be present in a community, but loans at the Treasury interest rate are available to entities that plan to serve communities with existing broadband service. Several stakeholders with whom we spoke, as well as the findings of a recent report by the Inspector General (IG) of the Department of Agriculture, raised concerns about these programs: Effectiveness of loans. It is not clear whether a loan program—such as the RUS loan program—is effective for helping rural areas gain access to broadband services. RUS requires applicants to submit an economically viable business plan—that is, applicants must show that their business will be sufficiently successful such that the applicant will be capable of repaying the loan. But developing a viable broadband business plan can be difficult in rural areas, which have a limited number of potential subscribers. As a result, RUS has rejected many applications because the applicant could not show that the business plan demonstrated a commercially viable and sustainable business. In fact, the agency has been unable to spend all of its loan program funds. Since the inception of the program in 2002, the agency has fallen far short of obligating the available funding in this program. For example, RUS officials told us that in 2004, they estimated that the appropriations for the broadband loan program could support approximately $2.1 billion in loans, but only 28 percent of this amount—or $603 million—was awarded for broadband projects. RUS officials also told us that the agency's 2005 appropriations could support just over $2 billion in loans, but only 5 percent—or $112 million—was awarded to broadband projects. One stakeholder we spoke with suggested that a greater portion of RUS funds should be shifted from loans to grants in order to provide a more significant level of assistance for rural broadband deployment. RUS officials noted that they are currently evaluating the program and recognize that the program criteria limit the agency's ability to use its full loan funding. Competitive environment requirements. During our interviews, some stakeholders expressed concerns about how the presence of existing broadband deployment was considered in evaluating RUS grant and loan applications. In the case of the grant program, RUS approves applications only for communities that have no existing broadband service. 
Some local government officials and a company we spoke with noted that this "unserved" requirement for RUS grants can disqualify certain rural communities that have very limited Internet access—perhaps in only one small part of a community. Alternatively, regarding the Treasury rate loan program, a few providers and the IG's report criticized the program for supporting the building of new infrastructure where infrastructure already existed. In particular, we learned that loans were being made for deployment in areas that already had at least one provider and in some cases had several providers. As such, it is not clear whether these funds are being provided to communities most in need. RUS officials noted, however, that the statute specifically allows such loans. Additionally, the issue of how the status of existing service is gauged was a concern for one provider we spoke with. RUS obtains information about existing providers from applicants, and agency officials told us that agency field representatives review the veracity of information provided by applicants during field visits. However, RUS officials told us that FCC zip-code data are not granular enough for their needs in evaluating the extent of broadband deployment in rural areas. Community eligibility. A few local officials we spoke with criticized the community size and income eligibility requirements for the grant and loan programs. In Massachusetts, one stakeholder said that most small towns in part of that state exceed RUS's population requirements and thus do not qualify for grants or loans. The grant and loan programs also have per-capita personal income requirements. One service provider in Alaska said that the grant program income eligibility requirements can exclude Alaskan communities, while failing to take into account the high cost of living in rural Alaska. Technological neutrality. Satellite companies we spoke with said RUS's broadband loan program requirements are not readily compatible with their business model or technology. Once a company launches a satellite, the equipment that individual consumers must purchase is the remaining infrastructure expense. Because the agency requires collateral for loans, the program is more suited for situations where the providers, rather than individual consumers, own the equipment being purchased through the loan. Yet, when consumers purchase satellite broadband, it is common for them to purchase the equipment needed to receive the satellite signal, such as the reception dish. Additionally, broadband service must be provided at a speed of at least 200 kilobits per second in both directions—which is not necessarily the case for satellite broadband—for it to qualify for RUS loans. Moreover, RUS officials noted that for satellite broadband providers to be able to access RUS loans, they would have to demonstrate that each customer lives in a community that meets the community size eligibility requirement. As such, this program may not be easily utilized by satellite broadband providers. Yet for some places, satellite could be a cost-effective mechanism for bringing broadband service to rural areas. For example, in 2005, the RUS Community Connect program provided grants to 19 communities that average 554 residents and 194 households. The total cost of these grants was roughly $9 million. 
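Taken together, these figures imply a cost on the order of a few thousand dollars per household covered. A rough check of the arithmetic—treating the "roughly $9 million" total as exact and applying the 194-household average across all 19 communities—runs as follows; the $2,443 figure cited in the next paragraph reflects the program's exact totals rather than these rounded inputs.

```latex
\frac{\$9{,}000{,}000}{19 \times 194 \ \text{households}}
\;=\; \frac{\$9{,}000{,}000}{3{,}686 \ \text{households}}
\;\approx\; \$2{,}440 \ \text{per covered household}
```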
Thus, RUS spent an average of $2,443 per covered household, but the cost per household that adopted broadband would be even higher since only a subset of these households would choose to subscribe to broadband service. By contrast, two satellite providers we spoke with estimate that their consumer equipment and installation costs are roughly $600 per subscribing household. These figures might not fully reflect the nature of the services provided through the grant program and those available via satellite; for example, grantees of the RUS program are required to provide free Internet service to community centers. While the USF program does not directly fund broadband service, the funding provided to support telecommunications networks indirectly supports the development of infrastructure that can provide many communications services, including broadband. USF's high-cost program helps maintain and upgrade telecommunications networks in rural areas. Three stakeholders we spoke with in Alaska, Ohio, and North Dakota attributed the relative success of broadband deployment in rural areas to the USF program. Additionally, the Schools and Libraries Program and the Rural Health Care Program help facilitate broadband service to specific locations; according to two providers in Alaska, these programs have been very beneficial in bringing some form of broadband service to rural Alaskan villages that might have received no service without these government programs. However, stakeholders we spoke with identified several concerns about the USF program: Large ILECs serving rural areas and rural ILECs receive high-cost fund support under different formulas. The two types of ILECs have different eligibility criteria under which they can qualify to receive high-cost support, and more support is provided to rural companies than to nonrural companies serving rural areas. Two stakeholders we spoke with suggested that the eligibility criteria should be modified, such that the criteria better reflect the cost to provide service in particular areas, rather than the type of company providing the service. Alternatively, two stakeholders we spoke with favor the current eligibility criteria and funding mechanism. Two stakeholders we spoke with expressed concerns about a lack of coordination across USF funding sources, which could lead to inefficient use of funds and inadequate leveraging of funds. For example, in Alaska, two stakeholders noted that governments and providers receive "silos" of funding for schools, libraries, and rural health centers. Because the programs are narrowly defined, multiple entities might be recipients of funding for broadband service, which could lead to multiple broadband connections in relatively small rural communities. One stakeholder noted that because each entity might use only a fraction of its available broadband capacity, spare capacity could be available for other uses or users, but funding recipients are sometimes not allowed to share this capacity, either with other entities or with residents in the community. Thus, communities may be unable to leverage the available funding for other uses. While two stakeholders we spoke with suggested expanding the USF program to include broadband service, we found little support for this overall. Some stakeholders we spoke with expressed concern about funding the USF program at current levels of support. 
These stakeholders fear that expanding the USF program to include broadband service, which would increase program expenditures and thus require additional funding, could undermine support for the entire USF program. As mentioned previously, certain wireless technologies hold the potential for supporting broadband service in difficult-to-serve rural areas. In less densely populated areas, installing wire-based facilities for cable modem and DSL service represents a significant cost factor. Therefore, certain wireless technologies may be a lower-cost way to serve rural areas than wireline technologies. While wireless technologies hold the promise of expanding the availability of broadband, some stakeholders we spoke with expressed concern about the degree of congestion in certain bands as well as the management of spectrum. For example, in some geographic areas, we heard that congestion in certain unlicensed spectrum bands makes providing wireless broadband Internet access more difficult, and a few stakeholders said that with more unlicensed spectrum, wireless providers could support greater broadband deployment. Additionally, wireless providers we spoke with expressed concern about the management of spectrum, particularly the quality of certain bands and the quantity of spectrum available for wireless broadband service. Two stakeholders mentioned that the spectrum allocated to wireless broadband service is susceptible to having communications obstructed by trees and buildings. In a 2005 report, we noted that experts agreed that the government should evaluate its allocation of spectrum between licensed and unlicensed uses. But we also noted that these experts did not agree on whether FCC should dedicate more or less spectrum to unlicensed uses. In June 2006, FCC will conduct an auction of spectrum dedicated to advanced wireless services, which will make available 90 MHz of spectrum for wireless broadband services. FCC staff also noted that the commission has other efforts underway to increase available spectrum for wireless broadband services. In the past several years, the importance of broadband for Americans and for the American economy has been articulated by interested stakeholders, as well as by the President, Congress, and the last several FCC chairmen. Universal availability of broadband has been set forth as a policy goal for the near term—2007. And progress toward this goal has been substantial. The availability of broadband to residential consumers has grown from its nascent beginnings in the latter part of the 1990s to broad coverage throughout the country. In the last 10 years, providers in traditional communications industry segments—telephone and cable—have upgraded and redesigned miles of their networks in order to offer broadband services. The provision of broadband through various wireless means, as well as over the existing electricity infrastructure, has also been developed, and for many, if not most, Americans, the burgeoning broadband marketplace is characterized by competitive choice in broadband access and creative and ever-expanding applications and content. Many would consider the rollout of broadband infrastructure a success story of entrepreneurial initiative. But not all places or people have experienced the full benefits of this rapid rollout of broadband services. As with many other technologies, the costs of bringing broadband infrastructure to rural America can be high. 
For private providers who must weigh the costs and returns of their investments, serving the most rural parts of the country may not fit within a reasonable business model. While there are federal support mechanisms for rural broadband, it is not clear how much impact these programs are having or whether their design suggests a broad consideration of the most effective means of addressing the problem. One of the difficulties of assessing the gaps in deployment and where to target any federal support is that it is hard to know exactly where broadband infrastructure has not been deployed. FCC does collect data on the geographic extent of providers' service, but these data are not structured in a way that accurately illustrates the extent of deployment to residential users. Without accurate, reliable data to aid in analysis of the existing deployment gaps, it will be difficult to develop policy responses to gaps in broadband availability. This could hinder our country's attainment of universally available broadband. And as the industry moves quickly to even higher-bandwidth broadband technologies, we risk leaving some of the most rural places in America behind. In a draft of this report provided to FCC for review and comment, GAO recommended that FCC identify and evaluate strategies for improving the 477 data such that the data provide a more accurate depiction of residential broadband deployment throughout the country. In oral comments regarding this recommendation, FCC staff acknowledged that the 477 data have some limitations in detailing broadband deployment, but also noted that there had recently been a proceeding examining the commission's broadband data collection efforts and that some changes to the data collection had been implemented. In that proceeding, the commission also determined that it would be costly and could impose large burdens on filers—particularly small entities—to require any more detailed filings on broadband deployment. Although FCC staff told us that analysis of potential costs had been conducted, exact estimates of these costs and burdens have not yet been determined. Moreover, many have expressed concern about ensuring that all Americans—especially those in rural areas—have access to broadband technologies. Policymakers concerned about full deployment of broadband throughout the country will have difficulty targeting any assistance to that end without accurate and reliable data on localized deployment. As such, we recommend that FCC develop information regarding the degree of cost and burden that would be associated with various options for improving the information available on broadband deployment and provide that information to the Senate Committee on Commerce, Science, and Transportation and the House Energy and Commerce Committee to help them determine what actions, if any, to take going forward. We provided a draft of this report to the Department of Agriculture, the Department of Commerce, and the Federal Communications Commission for their review and comment. The Department of Agriculture provided no comments. The Department of Commerce and FCC provided technical comments that we incorporated, as appropriate. FCC did not comment on the final recommendation contained in this report. We also provided a draft of this report to several associations representing industry trade groups and state and local government entities for their review and comment. 
Specifically, the following associations came to GAO headquarters to review the draft: Cellular Telecommunications and Internet Association (CTIA), National Association of Regulatory Utility Commissioners (NARUC), National Association of Telecommunications Officers and Advisors (NATOA), National Cable and Telecommunications Association (NCTA), National Telecommunications Cooperative Association (NTCA), Satellite Industry Association (SIA), US Internet Industry Association (USIIA), United States Telecom Association (USTA), and Wireless Internet Service Providers Association (WISPA). Officials from CTIA, NARUC, and NTCA did not provide comments. Officials from NATOA, NCTA, SIA, and USIIA provided technical comments that were incorporated, as appropriate. USTA and WISPA provided comments that are discussed in appendix V. We are sending copies of this report to the appropriate congressional committees and to the Secretary of Agriculture, the Secretary of Commerce, and the Chairman of the Federal Communications Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or heckerj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix VI. The objectives of the report were to provide information on (1) the current status of broadband deployment and adoption, (2) the factors that influence the deployment of broadband networks, (3) the factors that influence the adoption of broadband service by households, and (4) the options that have been suggested to spur greater broadband deployment and adoption. To respond to the four objectives, we used a variety of approaches. To gather opinions for all four objectives, we employed a case-study approach. This approach allowed us to identify issues at the state and local level that would not be apparent in nationwide data. We selected eight states for our case studies: Alaska, California, Kentucky, Massachusetts, North Dakota, Ohio, Texas, and Virginia. We selected these states based on Census Bureau data on statewide income, urbanization, population density, and percentage of households using the Internet. We also considered whether each state taxed Internet access. We sought to include states in diverse categories of each of our selection criteria. In each state, we interviewed state and local officials (including local franchising authorities, state public utility regulators, and representatives from state governors' offices); associations; private cable and telephone providers; wireless Internet service providers; and municipal and cooperative telecommunications providers. We also spoke with a variety of individuals and organizations knowledgeable about broadband services. In particular, we spoke with industry providers, trade associations, and academic experts. We also spoke with representatives from the Federal Communications Commission (FCC), the National Telecommunications and Information Administration of the Department of Commerce, and the Rural Utilities Service of the Department of Agriculture. To assess the factors influencing the deployment and adoption of broadband, we used survey data from Knowledge Networks/SRI's The Home Technology Monitor™: Spring 2005 Ownership and Trend Report. 
Knowledge Networks/SRI is a survey research firm that conducted a survey on household ownership and use of consumer electronics and media. Knowledge Networks/SRI interviewed approximately 1,500 randomly sampled telephone households, asking questions about the household’s purchase of computers and Internet access. All percentage estimates from the Knowledge Networks/SRI survey have margins of error of plus or minus 7 percentage points or less, unless otherwise noted. See appendix II for a discussion of the steps we took to evaluate the reliability of Knowledge Networks/SRI’s data. Using the data from Knowledge Networks/SRI, we estimated two econometric models. One model examined the factors affecting broadband deployment. We also developed a model to examine the factors affecting a household’s adoption of broadband services. See appendix III for a more detailed explanation of, and results from, our deployment and adoption models. To assess the status of broadband deployment, we used FCC’s Form 477 data that identified companies providing broadband service by zip code. We used FCC’s data to identify the companies reporting to provide broadband service in the zip codes where respondents to Knowledge Networks/SRI’s survey resided. To assess the reliability of FCC’s Form 477 data, we reviewed documentation, interviewed knowledgeable officials, and performed electronic testing of the data elements used in our analyses. We made several adjustments to these data, such as excluding satellite companies and companies only providing service to businesses. See appendix III for more on our methodology concerning adjustment to FCC’s 477 data. With these adjustments to the data, we determined that they were sufficiently reliable for the purposes of this report. We conducted our work from April 2005 through February 2006 in accordance with generally accepted government auditing standards. To obtain information on the types of Internet access purchased, or adopted, by U.S. households, we purchased existing survey data from Knowledge Networks Statistical Research (Knowledge Networks/SRI). Their survey was completed with 1,501 of the estimated 3,127 eligible sampled households for a response rate of 48 percent. The survey was conducted between February 22 and April 15, 2005. The study procedures yielded a sample of members of telephone households in the continental United States using a national random-digit dialing method. Survey Sampling Inc. (SSI) provided the sample of telephone numbers, which included both listed and unlisted numbers and excluded blocks of telephone numbers determined to be nonworking or business-only. At least five calls were made to each telephone number in the sample to attempt to interview a responsible person in the household. Special attempts were made to contact refusals and convert them into interviews; refusals were sent a letter explaining the purpose of the study and an incentive. Data were obtained from telephone households and are weighted to the total number of households in the 2005 Current Population Survey adjusted for multiple phone lines. As with all sample surveys, this survey is subject to both sampling and nonsampling errors. The effect of sampling errors due to the selection of a sample from a larger population can be expressed as a confidence interval based on statistical theory. The effects of nonsampling errors, such as nonresponse and errors in measurement, may be of greater or lesser significance but cannot be quantified on the basis of available data. 
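As a point of reference, the confidence intervals discussed below follow the standard large-sample formula for an estimated proportion p̂ from a sample of size n—a simplification that ignores the survey's weighting and any design effect, which widen the reported margins somewhat:

```latex
\hat{p} \;\pm\; 1.96\,\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
```

For the full sample of roughly 1,500 households, this simple formula yields a half-width of no more than about 2.5 percentage points; the plus or minus 7 percentage point bound reported for the survey also covers estimates for subgroups with fewer respondents.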
Sampling errors arise because of the use of a sample of individuals to draw conclusions about a much larger population. The study's sample of telephone numbers is based on a probability selection procedure. As a result, the sample was only one of a large number of samples that might have been drawn from telephone exchanges throughout the country. If a different sample had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. We are 95 percent confident that, when only sampling errors are considered, each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the survey have margins of error of plus or minus 7 percentage points or less, unless otherwise noted. The 95 percent confidence interval for the estimate of the total number of U.S. households that subscribed to broadband service in 2005 is 28.5 million to 33.7 million households. In addition to the reported sampling errors, the practical difficulties of conducting any survey introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, some types of people may be more likely to be excluded from the study, errors could be made in recording the questionnaire responses into the computer-assisted telephone interview software, and the respondents' answers may differ from those of people who did not respond. Knowledge Networks/SRI has been fielding versions of this survey for over 20 years. In addition, to reduce measurement error, Knowledge Networks/SRI employs interviewer training, supervision, and monitoring, as well as computer-assisted interviewing to reduce error in following skip patterns. For this survey, the 48 percent response rate is a potential source of nonsampling error; we do not know if the respondents' answers are different from the 52 percent who did not respond. Knowledge Networks/SRI took steps to maximize the response rate—the questionnaire was carefully designed and tested through deployments over many years, at least five telephone calls were made at varied time periods to try to contact each telephone number, the interview period extended over about 8 weeks, and attempts were made to contact refusals and convert them into interviews. Because we did not have information on those contacted who chose not to participate in the survey, we could not estimate the impact of the nonresponse on our results. Our findings will be biased to the extent that the people at the 52 percent of the telephone numbers that did not yield an interview have different experiences with Internet access than did the 48 percent of our sample who responded. However, distributions of selected household characteristics (including presence of children, race, and household income) for the sample and the U.S. Census estimate of households show a similar pattern. To assess the reliability of these survey data, we relied on a prior GAO report that made use of the Knowledge Networks/SRI 2004 survey for a similar purpose. In that prior assessment, we determined that the data were sufficiently reliable for our purposes. For this report, we reviewed Knowledge Networks/SRI's documentation of survey procedures for 2005 and compared them to the procedures used in their 2004 survey. 
We determined that their survey methodology was substantively unchanged. Additionally, we performed electronic testing of the 2005 survey data elements used in this report. We determined that the data were sufficiently reliable for the purposes of this report. This appendix describes our models of broadband deployment and adoption. Specifically, we discuss (1) the design of our models, (2) the data sources, (3) our methodology for assessing broadband deployment, and (4) the estimation methodology and results. A company will deploy broadband service in an area only if the company believes that such a deployment will be profitable. Similarly, a household will purchase, or adopt, broadband service only if the value, or utility, to members of the household exceeds the price the household must pay to receive the service. In this section, we explain the two models we developed to examine the factors that influence the deployment and adoption of broadband service. A company will deploy broadband service in an area only if the company believes that such a deployment will be profitable. Based on conversations with industry stakeholders, including companies deploying broadband service, we identified a number of factors that influence a company's decision to deploy broadband service. In particular, the following factors may influence the decision to deploy broadband service: population density, terrain, backhaul costs, existing or potential competition, the technical expertise of the population, the income of the population, and regulatory policies (such as rights-of-way policies). We also reviewed relevant studies, and noted the same and additional factors that may influence the deployment of broadband service. Some of these factors, such as the population density and backhaul, will influence the cost of providing broadband service, while other factors, such as the income of the population, will influence the potential revenues that a company may hope to generate. Together, these revenue and cost factors will influence the potential profitability of providing broadband service, and ultimately the decision to deploy broadband service. To empirically test these hypotheses, we estimated the following econometric model; because not all of the variables identified above were available, we were unable to include some of them—such as terrain—in our model. The decision to deploy broadband service is a function of the population in the area; the population density in the area; the percentage of the population residing in an urban area; the per-capita income in the area; the educational attainment of the population in the area; the population teleworking in the area; the age of the population in the area; the distance to a metropolitan area with a population of 250,000 or more; and whether the state in which the area is located imposed a tax on Internet access in 2005. Households will purchase, or adopt, broadband service only if the value, or utility, that members of the household receive from the service exceeds the price of the service. In conversations with industry stakeholders, we were told that several characteristics of households influence the extent to which households purchase broadband service; we also reviewed other studies, and noted characteristics of households that these studies associated with the purchase of broadband service. 
In particular, the following characteristics of households may influence the decision to purchase broadband service: income, education, age of household members, presence of children in the household, and the technological knowledge of members of the household. These characteristics may be associated with the extent to which a household would benefit from, and therefore value, broadband service, such as using broadband to telework, conduct research for school, and play games. Industry stakeholders also noted that price influences a household's decision to purchase broadband service. To empirically test these hypotheses, we estimated the following econometric model; because we lacked data on the price of broadband service, we were unable to include this variable in the model. The decision to purchase, or adopt, broadband service is a function of the income of the household; the educational attainment of the heads of the household; the age of the heads of the household; the presence of children in the household; the racial composition of the household; the occupation of the heads of the household; the number of people in the household; whether the household resides in an urban, suburban, or rural location; the number of companies providing broadband service in the area; and whether the state in which the household resides imposes a tax on Internet access. We required several data elements to build the data set used to estimate our deployment and adoption models. The following is a list of our primary data sources. In addition, we list all of the variables, definitions, and sources for the deployment model in table 1 and the adoption model in table 2. We obtained data on a sample of households in the United States from Knowledge Networks/SRI, using Knowledge Networks/SRI's product The Home Technology Monitor™: Spring 2005 Ownership and Trend Report. From February through April 2005, Knowledge Networks/SRI interviewed a random sample of 1,501 households in the United States. Knowledge Networks/SRI asked participating households a variety of questions about their use of technology, including questions such as whether the household purchased broadband service, and about the household's demographic characteristics. From the Federal Communications Commission (FCC), we obtained information on the companies providing broadband service in zip codes throughout the United States in December 2004. For each zip code, FCC provided the names of companies reporting, through the agency's Form 477, that they provided broadband service to at least one residential or small business customer and the type of company providing the service (e.g., cable or satellite). We used the most recent information from the U.S. Census Bureau to obtain demographic information for the areas where the households responding to Knowledge Networks/SRI's survey were located. FCC's Form 477 data include information on companies providing broadband service to at least one residential or business customer in zip codes throughout the United States in December 2004. However, since zip codes can represent large geographic areas, companies providing broadband service in a zip code might not have facilities in place to serve all households in the zip code. Thus, while a household might reside in a zip code in which FCC's Form 477 indicates that broadband service is available, that service might not be available to the household. Additionally, as we note in the text, we identified other concerns with FCC's data. 
Therefore, we took additional steps to assess whether broadband service was available to households included in Knowledge Networks/SRI's survey. In particular, we took the following steps for each observation in our data set: removed firms providing only satellite service; removed firms that provided broadband service only to business customers, since residential households were the focus of our study; removed large incumbent local exchange carriers when the company was identified as providing service in areas that lay outside of its local exchange area, since these firms typically provide service only to business customers outside of their local exchange areas; removed firms when 2 or more of the 10 largest cable operators reported providing broadband service, since large cable operators rarely have overlapping service territories; removed cable operators if the responding household indicated that cable service did not pass the residence; and removed companies providing telephone-based broadband service if the household's residence was greater than 2.5 miles from the central office facility, since DSL service is distance-limited. For both the deployment model and adoption model, we are estimating a reduced-form, binary-choice model. That is, broadband service is either deployed in the area or it is not, and the household either purchases broadband service or it does not. Given the binary-choice nature of the models, we employed the probit method to estimate the deployment and adoption equations. In this section, we present descriptive statistics and estimation results for the two equations and discuss the results. In table 3, we provide basic statistical information on all of the variables included in the deployment model, and in table 4, we provide the results from the probit estimation of the deployment model. Of the 1,501 respondents to Knowledge Networks/SRI's survey, we used 1,402 observations in the deployment model; we were unable to match the zip+4 code for all 1,501 observations with publicly available data, which was necessary to assess whether the residence was within 2.5 miles of the serving central office facility. Results from our model indicate that several factors related to the cost of providing broadband service and the demand for broadband service influence the likelihood that service will be available in a particular area. Regarding the cost factors, we found that urban areas and areas with greater population density are more likely to receive broadband service. For example, urban areas are about 9 percentage points more likely to receive broadband service than are similar rural areas. These results are consistent with broadband service being less costly to deploy in densely populated, more urban environments, where a similar investment in facilities can serve a greater number of subscribers than is possible in rural areas. Regarding demand for broadband service, we found that areas with greater per-capita incomes are more likely to receive broadband service. Additionally, we found that areas with a greater number of people working from home are less likely to have broadband service and that areas with a greater percentage of people age 65 or older are more likely to have broadband service. We did not find that taxation of Internet access by state governments influenced the deployment of broadband service. 
Taxes can raise consumer prices, reduce provider revenues, and impose costs on providers, thereby possibly reducing the incentive for companies to deliver a product or service. Since we used a binary variable to indicate the presence of taxes, this variable could also potentially capture the influence of other characteristics of the states, in addition to the influence of the tax. Results from our model indicate that Internet access taxes do not affect the likelihood that companies will deploy broadband service; while the parameter estimate has the expected sign, the estimate is not statistically significant. In table 5, we provide basic statistical information on all of the variables included in the adoption model, and in table 6, we provide the results from the probit estimation of the adoption model. Since households can choose to purchase, or adopt, broadband service only where it is deployed, we include only households from Knowledge Networks/SRI's survey where we assessed that broadband service was available; based on our analysis, 133 respondents did not have broadband service available. Further, 355 respondents to Knowledge Networks/SRI's survey did not answer one or more demographic questions and 29 did not answer, or did not know, what type of Internet connection their household purchased. Therefore, we excluded these respondents. Thus, we used 901 observations in the adoption model. Our model results indicate that four characteristics influence whether households purchase, or adopt, broadband service. First, we found that households with greater incomes are more likely to purchase broadband service than are lower-income households. For example, the 25 percent of households with the highest income levels were about 39 percentage points more likely to purchase broadband service than the 25 percent of households with the lowest income levels. Second, households with a college graduate are about 12 percentage points more likely to purchase broadband service than are households without a college graduate. We also found that white households are more likely to purchase broadband service than households of other races. Finally, older households are less likely to purchase broadband service than are younger households. As with the deployment model, we did not find that taxation of Internet access by state governments influenced the adoption of broadband service. As mentioned earlier, we used a binary variable to represent the presence of Internet taxation. As such, the variable may capture the influence of other characteristics of the states in which the households resided, in addition to the influence of the tax. Further, lacking a variable for the price of broadband service, we cannot assess how the imposition of the tax influenced the price of the service and thus the household's adoption decision. Using our model, we found that the parameter estimate had the expected sign—indicating that the imposition of the tax may have reduced the likelihood that a household would purchase broadband service. While the estimate was not statistically significant at the 5 percent level, it was statistically significant at the 10 percent level, perhaps suggesting that it is a weakly significant factor. However, given the nature of our model, it is unclear whether this finding is related to the tax or other characteristics of the states in which households resided. 
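The estimation approach described in this appendix—a binary-choice probit model whose results are reported as percentage-point (marginal) effects—can be illustrated with a minimal sketch in Python using the statsmodels package. The file name and variable names below are hypothetical stand-ins for the covariates listed in tables 1 through 6, and the input data are assumed to already reflect the screening of the FCC Form 477 records described above; the sketch illustrates the general technique rather than reproducing our estimation code.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis file: one row per surveyed household, already limited
# to households assessed to have broadband service available (the adoption sample).
df = pd.read_csv("household_analysis_file.csv")

# Outcome: 1 if the household subscribes to broadband, 0 otherwise.
y = df["adopted_broadband"]

# Illustrative subset of adoption-model covariates (income, education, age of
# household head, children present, race, location type, provider count, and
# a state Internet-tax indicator).
covariates = [
    "income_top_quartile", "college_graduate", "head_age_50_plus",
    "children_present", "white_household", "rural", "suburban",
    "num_broadband_providers", "state_internet_tax",
]
X = sm.add_constant(df[covariates])  # add an intercept term

# Probit estimation of the binary-choice adoption equation.
result = sm.Probit(y, X).fit()
print(result.summary())

# Average marginal effects, read as the percentage-point change in the
# probability of adoption associated with each covariate.
print(result.get_margeff(at="overall").summary())
```

The deployment equation has the same structure, with an indicator of broadband availability in the area as the outcome and the area-level covariates in table 1 as regressors.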
Based on our conversations with stakeholders, and our own research, we identified several emerging technologies that could further the deployment of broadband service. Broadband over power lines. Broadband over power lines (BPL) is an emerging competitive source of broadband to the home. BPL transmits broadband by using existing electric distribution networks, such as the wires that deliver electricity to consumers. Although there are a few commercial deployments, most BPL efforts are currently at the trial stage. Trials and commercial deployments range across the urban-rural landscape, from Cullman County, Alabama, to Cincinnati. Currently, BPL can provide upstream and downstream speeds of 3 million bits per second (Mbps), and next generation equipment is being developed to provide speeds of 100 Mbps. Industry stakeholders have identified several concerns with BPL service. First, while traveling across the electric network, BPL can emit signals that interfere with other users of the spectrum, such as amateur radio and public safety. The Federal Communications Commission (FCC) has taken steps to document, mitigate, and alleviate this potential problem. Second, some stakeholders also expressed concern that, due to the age or condition of the electric network, providers in some areas would be unable to transmit Internet data at high speeds. Finally, some stakeholders expressed varied opinions about the feasibility of BPL to bring broadband service to rural areas. Some stakeholders were optimistic about BPL’s ability to serve these communities, while others expressed skepticism, pointing out that overcoming BPL’s distance limitations would require more equipment and additional costs. Wireless fidelity (Wi-Fi). Wi-Fi-enabled wireless devices, such as laptop computers, can send and receive data from any location within signal reach—about 300 feet—of a Wi-Fi-equipped access point. Wi-Fi provides data transmission rates, based on the current transmission standard, of up to a maximum of 54 Mbps, which is shared by multiple users. Wi-Fi equipment and services are based on the 802.11 series standards developed by the Institute of Electrical and Electronics Engineers (IEEE) and operate on an unlicensed basis in the 2.4 and 5 GHz spectrum bands. Several stakeholders we spoke with said that Wi-Fi service complemented, rather than substituted for, other broadband services. The number of areas that can access Wi-Fi service, known as “hot spots,” has grown dramatically and, according to one equipment manufacturer, may exceed 37,000. Wi-Fi hot spots include such diverse entities as airports, colleges, retail establishments, and even entire towns. Increasingly, municipalities are planning or deploying larger area or citywide hot spots; some municipalities considering or deploying a Wi-Fi network include Atlanta, Philadelphia, San Francisco, and Tempe, Arizona. While Wi-Fi service is widely deployed in urban and suburban areas, some stakeholders identified a few problems with the service. Because Wi-Fi hot spots operate in unlicensed spectrum, interference can be a problem. Several stakeholders we spoke with mentioned congestion or limited distance capability in Wi-Fi as a potential limitation of the service. Worldwide Interoperability for Microwave Access (WiMAX). With WiMAX service, the distance covered and data transmission speeds can exceed those found with Wi-Fi service. 
WiMAX can provide data transmission speeds of 75 Mbps with non-line-of-sight service—that is, the signal can pass through buildings, trees, or other obstructions—or up to 155 Mbps with line-of-sight service. In a non-line-of-sight environment, WiMAX can provide service in an area with a radius of 3 miles or more; in a line-of-sight environment, WiMAX can provide service up to approximately 30 miles. WiMAX equipment and services are based on the IEEE 802.16 series of standards and operate in unlicensed and licensed spectrum. WiMAX networks are being deployed on a trial commercial basis, but some challenges remain for further deployment. More than 150 pilot and commercial deployments of WiMAX networks are currently in use. Because of its greater capabilities in terms of distance and speed, WiMAX can extend wireless broadband to less densely populated communities, where wired solutions may be more expensive to deploy. Stakeholders we spoke with serving smaller, less densely populated areas indicated that they were testing or interested in WiMAX to serve their communities. However, concerns have been raised about spectrum availability, interference, and the ability of different manufacturers' equipment to support the same level of broadband applications. FCC has several initiatives under way to increase the availability of spectrum for WiMAX services. While the WiMAX Forum Certification Lab certifies WiMAX equipment, the standard allows equipment manufacturers various options, such as different levels of security protocols, and thus not all equipment may support the same level of service, such as carrying voice over the Internet (VoIP) or providing the same security features. Third generation (3G) cellular broadband. Recently, several major commercial wireless companies have introduced broadband service based on advances in cellular technology and data protocols. Focused primarily on the business customer and more expensive than cable modem and DSL services, 3G services permit consumers to receive broadband service while mobile. 3G services typically provide data transmission speeds of 400 to 700 kilobits per second (Kbps). There are two competing technologies: EV-DO service, introduced by Verizon and Sprint, and HSDPA, introduced by Cingular. Currently, Verizon Wireless reports that its service is available nationally in 181 major metropolitan markets, covering approximately 150 million people. Sprint reports providing EV-DO service in major airports and business districts in 212 markets, covering approximately 140 million people. For HSDPA service, Cingular reports that its service is available to nearly 35 million people in 52 communities. Industry stakeholders expressed concerns about the ubiquity of service, data transmission speeds, and the monthly costs associated with 3G service. Opinions varied as to whether cellular broadband services would be a competitive threat, or a complementary service, for consumers of other broadband services. Fiber to the home (FTTH). FTTH provides a high-speed, wire-based alternative to traditional cable and telephone networks. According to the FTTH Council, as of September 2005, 2.7 million homes were passed by fiber and over 300,000 homes were connected to fiber in 652 communities in 46 states. Stakeholders expressed concerns about the high cost associated with deploying FTTH, and also that FTTH deployment was concentrated in urban and suburban communities, or in newly developed communities (known as "greenfields"). 
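To put the nominal data rates cited in this appendix into rough perspective, the short calculation below compares how long a 10-megabyte transfer would take at several of the rates mentioned in this report, plus the conventional 56 kilobits per second rate for a dial-up modem. These are peak advertised or threshold rates; actual throughput is typically lower and, for shared media such as Wi-Fi, is divided among users.

```python
# Approximate time to transfer a 10-megabyte (80-megabit) file at nominal rates
# cited in this report (the dial-up figure is the conventional 56 Kbps modem rate).
# Illustrative only; real-world throughput is lower than these peak figures.
FILE_MEGABITS = 10 * 8  # 10 megabytes expressed in megabits

nominal_rates_mbps = {
    "Dial-up modem (0.056 Mbps)": 0.056,
    "RUS loan-eligibility minimum (0.2 Mbps each way)": 0.2,
    "3G cellular (about 0.5 Mbps)": 0.5,
    "BPL (3 Mbps)": 3.0,
    "Wi-Fi, shared (54 Mbps)": 54.0,
    "WiMAX non-line-of-sight (75 Mbps)": 75.0,
}

for label, rate_mbps in nominal_rates_mbps.items():
    seconds = FILE_MEGABITS / rate_mbps
    print(f"{label}: about {seconds:,.1f} seconds")
```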
We provided a draft of this report to several associations representing industry trade groups and state and local government entities for their review and comment. The following associations came to GAO headquarters to review the draft: Cellular Telecommunications and Internet Association (CTIA), National Association of Regulatory Utility Commissioners (NARUC), National Association of Telecommunications Officers and Advisors (NATOA), National Cable and Telecommunications Association (NCTA), National Telecommunications Cooperative Association (NTCA), Satellite Industry Association (SIA), US Internet Industry Association (USIIA), United States Telecom Association (USTA), and Wireless Internet Service Providers Association (WISPA). Officials from CTIA, NARUC, and NTCA did not provide comments. Officials from NATOA, NCTA, SIA, and USIIA provided technical comments that were incorporated, as appropriate. USTA officials noted that our discussion of the effects of local franchising on deployment implies that franchise agreements have helped to ensure broad deployment of broadband, but that, in the view of USTA, franchise buildout requirements can deter entry and thus reduce deployment. WISPA officials expressed concern about our findings regarding the taxation of Internet access and noted that it is important, in their view, that wireless Internet access provided by small providers not be taxed; in fact, WISPA officials noted that small providers should be provided a tax incentive to encourage investment and expansion in underserved areas. Additionally, these officials expressed concern about the presentation of data on how households currently access the Internet from their homes. WISPA stated that these data understate the importance that wireless access will have in achieving the goal of universal broadband coverage both within and outside of users' homes. WISPA stated that the report accurately depicts that wireless Internet service providers (WISPs) currently hold a minority market share, and WISPA officials note that without certain government policies to foster growth in the wireless industry, WISPs will be at a competitive disadvantage. WISPA officials also expressed concern that the report understates factors that are hindering the growth of the wireless Internet industry—most notably, the need for additional spectrum under 1 GHz, such as the TV white spaces. Further, WISPA noted that the data showing broadband penetration rates in urban, rural, and suburban areas should not be interpreted as indicating that access to broadband is lower in only rural areas. They suggested that differences in broadband penetration rates across these types of locations are not that large and that pockets with no access exist in many areas. As such, WISPA suggests that policy responses regarding spectrum availability, USF funding, and Rural Utilities Service programs be focused on engaging smaller providers that can bring broadband to areas not currently served by the larger incumbent providers. Individuals making key contributions to this report include Amy Abramowitz (Assistant Director), Eli Albagli, Stephen Brown, Michael Clements, Sandra DePaulis, Nina Horowitz, Eric Hudson, Bert Japikse, John Mingus, Sara Ann Moessbauer, Karen O'Conor, Lindsay Welter, and Duffy Winters.
Both Congress and the President have indicated that access to broadband for all Americans is critically important. Broadband is seen as a critical economic engine, a vehicle for enhanced learning and medicine, and a central component of 21st century news and entertainment. As part of our response to a mandate included in the Internet Tax Nondiscrimination Act of 2004, this report examines the factors that affect the deployment and the adoption of broadband services. In particular, this report provides information on (1) the current status of broadband deployment and adoption; (2) the factors that influence the deployment of broadband networks; (3) the factors that influence the adoption, or purchase, of broadband service by households; and (4) the options that have been suggested to spur greater broadband deployment and adoption. About 30 million American households have adopted broadband service, but the Federal Communications Commission's (FCC) data indicating the availability of broadband networks has some weaknesses. FCC conducts an extensive data collection effort using its Form 477 to assess the status of advanced telecommunications service in the United States. For its zip-code level data, FCC collects data based on where subscribers are served, not where providers have deployed broadband infrastructure. Although it is clear that the deployment of broadband networks is extensive, the data may not provide a highly accurate depiction of local deployment of broadband infrastructures for residential service, especially in rural areas. A variety of market and technical factors, government efforts, and access to resources at the local level have influenced the deployment of broadband infrastructure. Areas with low population density and rugged terrain, as well as areas removed from cities, are generally more costly to serve than are densely populated areas and areas with flat terrain. As such, deployment tends to be less developed in more rural parts of the country. Technical factors can also affect deployment. GAO also found that a variety of federal and state efforts, and access to resources at the local level, have influenced the deployment of broadband infrastructure. A variety of characteristics related to households and services influence whether consumers adopt broadband service. GAO found that consumers with high incomes and college degrees are significantly more likely to adopt broadband. The price of broadband service remains a barrier to adoption for some consumers, although prices have been declining recently. The availability of applications and services that function much more effectively with broadband, such as computer gaming and file sharing, also influences whether consumers purchase broadband service. Stakeholders identified several options to address the lack of broadband in certain areas. Although the deployment of broadband is widespread, some areas are not served, and it can be costly to serve highly rural areas. Targeted assistance might help facilitate broadband deployment in these areas. GAO found that stakeholders have some concerns about the structure of the Rural Utilities Service's broadband loan program. GAO was also told that modifications to spectrum management might address the lack of broadband infrastructure in rural areas. 
Also, because the cost of building land-based infrastructure is so high in some rural areas, satellite industry stakeholders noted that satellite broadband technology may be the best option for addressing the lack of broadband in those regions. While several options such as these were suggested to GAO, each poses some implementation challenges. Also, a key difficulty in analyzing and targeting federal aid for broadband is the lack of reliable data on the deployment of networks.
STEM includes many fields of study and occupations. Based on the National Science Foundation's categorization of STEM fields, we developed STEM fields of study from NCES's National Postsecondary Student Aid Study (NPSAS) and Integrated Postsecondary Education Data System (IPEDS), and identified occupations from BLS's Current Population Survey (CPS). Using these data sources, we developed nine STEM fields for students, eight STEM fields for graduates, and four broad STEM fields for occupations. Table 2 lists these STEM fields and occupations and examples of subfields. Additional information on STEM occupations is provided in appendix I. Many of the STEM fields require completion of advanced courses in mathematics or science, subjects that are introduced and developed at the kindergarten through 12th grade level, and the federal government has taken steps to help improve achievement in these and other subjects. Enacted in 2002, the No Child Left Behind Act (NCLBA) seeks to improve the academic achievement of all of the nation's school-aged children. NCLBA requires that states develop and implement academic content and achievement standards in mathematics, science, and reading or language arts. All students are required to participate in statewide assessments during their elementary and secondary school years. Improving teacher quality is another goal of NCLBA and a strategy for raising student academic achievement. Specifically, all teachers teaching core academic subjects must be highly qualified by the end of the 2005-2006 school year. NCLBA generally defines highly qualified teachers as those who have (1) a bachelor's degree, (2) state certification, and (3) subject area knowledge for each academic subject they teach. The federal government also plays a role in coordinating federal science and technology issues. The National Science and Technology Council (NSTC) was established in 1993 and is the principal means for the Administration to coordinate science and technology across the diverse parts of the federal research and development enterprise. One objective of NSTC is to establish clear national goals for federal science and technology investments in areas ranging from information technologies and health research to improving transportation systems and strengthening fundamental research. NSTC is responsible for preparing research and development strategies that are coordinated across federal agencies in order to accomplish these multiple national goals. In addition, the federal government, universities and colleges, and others have developed programs to provide opportunities for all students to pursue STEM education and occupations. Additional steps have been taken to increase the numbers of women, minorities, and students with disadvantaged backgrounds in the STEM fields, such as providing additional academic and research opportunities. According to the 2000 Census, 52 percent of the total U.S. population 18 and over were women; in 2003, individual racial or ethnic groups constituted from 0.5 percent to 12.6 percent of the civilian labor force (CLF), as shown in table 3. In addition to domestic students, international students have pursued STEM degrees and worked in STEM occupations in the United States. To do so, international students and scholars must obtain visas. International students who wish to study in the United States must first apply to a Student and Exchange Visitor Information System (SEVIS)-certified school. In order to enroll students from other nations, U.S.
colleges and universities must be certified by the Student and Exchange Visitor Program within the Department of Homeland Security's Immigration and Customs Enforcement organization. As of February 2004, nearly 9,000 technical schools, colleges, and universities had been certified. SEVIS is an Internet-based system that maintains data on international students and exchange visitors before and during their stay in the United States. Upon admitting a student, the school enters the student's name and other information into the SEVIS database. At that point, the student may apply for a student visa. In some cases, a Security Advisory Opinion (SAO) from the Department of State (State) may be needed to determine whether or not to issue a visa to the student. SAOs are required for a number of reasons, including concerns that a visa applicant may engage in the illegal transfer of sensitive technology. An SAO based on technology transfer concerns is known as Visas Mantis and, according to State officials, is the most common type of SAO applied to science applicants. In April 2004, the Congressional Research Service reported that State maintains a technology alert list that includes 16 sensitive areas of study. The list was produced in an effort to help the United States prevent the illegal transfer of controlled technology and includes chemical and biotechnology engineering, missile technology, nuclear technology, robotics, and advanced computer technology. Many foreign workers enter the United States annually through the H-1B visa program, which assists U.S. employers in temporarily filling specialty occupations. Employed workers may stay in the United States on an H-1B visa for up to 6 years. The current cap on the number of H-1B visas that can be granted is 65,000. The law exempts certain workers from this cap, however, including those who are employed or have accepted employment in specified positions. Moreover, up to 20,000 exemptions are allowed for those holding a master's degree or higher. Officials from 13 federal civilian agencies reported having 207 education programs funded in fiscal year 2004 that were specifically established to increase the numbers of students and graduates pursuing STEM degrees and occupations, or improve educational programs in STEM fields, but they reported little about the effectiveness of these programs. These 13 federal agencies reported spending about $2.8 billion for their STEM education programs. Taken together, NIH and NSF sponsored nearly half of the programs and spent about 71 percent of the funds. In addition, agencies reported that most of the programs had multiple goals, and many were targeted to multiple groups. Although evaluations have been done or were under way for about half of the programs, little is known about the extent to which most STEM programs are achieving their desired results. Coordination among the federal STEM education programs has been limited. However, in 2003, the National Science and Technology Council formed a subcommittee to address STEM education and workforce policy issues across federal agencies. Officials from 13 federal civilian agencies provided information on 207 STEM education programs funded in fiscal year 2004. The number of programs per agency ranged from 1 to 51, with two agencies, NIH and NSF, sponsoring nearly half of the programs—99 of 207.
Table 4 provides a summary of the numbers of programs by agency, and appendix II contains a list of the 207 STEM education programs and funding levels for fiscal year 2004 by agency. Federal civilian agencies reported that approximately $2.8 billion was spent on STEM education programs in fiscal year 2004. The funding levels for STEM education programs among the agencies ranged from about $4.7 million to about $998 million. NIH and NSF accounted for about 71 percent of the total—about $2 billion of the approximate $2.8 billion. NIH spent about $998 million in fiscal year 2004, about 3.6 percent of its $28 billion appropriation, and NSF spent about $997 million, which represented 18 percent of its appropriation. Four other agencies, some with only a few programs, spent about 23 percent of the total: $636 million. For example, the National Aeronautics and Space Administration (NASA) spent about $231 million on 5 programs and the Department of Education (Education) spent about $221 million on 4 programs during fiscal year 2004. Figure 1 shows the 6 federal civilian agencies that used the most funds for STEM education programs and the funds used by the remaining 7 agencies. The funding reported for individual STEM education programs varied significantly, and many of the programs have been funded for more than 10 years. The funding ranged from $4,000 for a USDA-sponsored program that offered scholarships to U.S. citizens seeking bachelor's degrees at Hispanic-serving institutions, to about $547 million for an NIH grant program that is designed to develop and enhance research training opportunities for individuals in biomedical, behavioral, and clinical research by supporting training programs at institutions of higher education. As shown in table 5, most programs were funded at $5 million or less, and 13 programs were funded at more than $50 million in fiscal year 2004. About half of the STEM education programs were first funded after 1998. The oldest program began in 1936, and 72 programs are over 10 years old. Appendix III describes the STEM education programs that received funding of $10 million or more during fiscal year 2004 or 2005. Agencies reported that most of the STEM education programs had multiple goals. Survey respondents reported that 80 percent (165 of 207) of the education programs had multiple goals, with about half of these identifying four or more goals for individual programs. Moreover, according to the survey respondents, few programs had a single goal. For example, 2 programs were identified as having one goal of attracting and preparing students at any education level to pursue coursework in the STEM areas, while 112 programs identified this as one of multiple goals. Table 6 shows the program goals and numbers of STEM programs aligned with them. The STEM education programs provided financial assistance to students, educators, and institutions. According to the survey responses, 131 programs provided financial support for students or scholars, and 84 programs provided assistance for teacher and faculty development. Many of the programs provided financial assistance to multiple beneficiaries, as shown in table 7. Most of the programs were not targeted to a specific group but aimed to serve a wide range of students, educators, and institutions. Of the 207 programs, 54 were targeted to 1 group and 151 had multiple target groups. In addition, many programs were targeted to the same group.
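The agency funding shares reported above follow directly from the dollar figures in this section. The short Python sketch below reproduces them; the inputs are the rounded amounts reported here, so the results match the cited percentages only approximately.

```python
# Reproduce the fiscal year 2004 funding shares cited above from the rounded
# figures in this section (amounts in millions of dollars).
TOTAL_STEM_SPENDING = 2_800      # ~$2.8 billion reported across 13 agencies

agency_spending = {
    "NIH": 998,                  # reported as ~3.6 percent of its ~$28 billion appropriation
    "NSF": 997,                  # reported as ~18 percent of its appropriation
    "NASA": 231,
    "Education": 221,
}

nih_nsf_share = (agency_spending["NIH"] + agency_spending["NSF"]) / TOTAL_STEM_SPENDING
print(f"NIH + NSF share of STEM education spending: {nih_nsf_share:.0%}")        # ~71%

NIH_APPROPRIATION = 28_000
print(f"NIH STEM spending as a share of its appropriation: "
      f"{agency_spending['NIH'] / NIH_APPROPRIATION:.1%}")                        # ~3.6%
```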
To illustrate the overlap in target groups, while 12 programs were aimed solely at graduate students, 88 other programs had graduate students as one of multiple target groups. Fewer programs were targeted to elementary and secondary teachers and kindergarten through 12th grade students than to other target groups. Table 8 summarizes the numbers of STEM programs targeted to one group and multiple groups. Some programs limited participation to certain groups. According to survey respondents, U.S. citizenship was required to be eligible for 53 programs, and an additional 75 programs were open only to U.S. citizens or permanent residents. About one-fourth of the programs had no citizenship requirement, and 24 programs allowed noncitizens or permanent residents to participate in some cases. According to an NSF official, students receiving scholarships or fellowships through NSF programs must be U.S. citizens or permanent residents. In commenting on a draft of this report, NSF reported that these restrictions are considered to be an effective strategy to support its goal of creating a diverse, competitive, and globally-engaged U.S. workforce of scientists, engineers, technologists, and well-prepared citizens. Officials at two universities said that some research programs are not open to non-citizens. Such restrictions may reflect concerns about access to sensitive areas. In addition to these restrictions, some programs are designed to increase minority representation in STEM fields. For example, NSF sponsors a program called Opportunities for Enhancing Diversity in the Geosciences to increase participation by African Americans, Hispanic Americans, Native Americans (American Indians and Alaskan Natives), Native Pacific Islanders (Polynesians or Micronesians), and persons with disabilities. Evaluations had been completed or were under way for about half of the STEM education programs. Agency officials responded that evaluations were completed for 55 of the 207 programs and that evaluations were under way for 49 programs at the time we conducted our survey. Agency officials provided us documentation for evaluations of 43 programs, and most of the completed evaluations we reviewed reported that the programs met their objectives or goals. For example, a March 2004 report on the outcomes and impacts of NSF's Minority Postdoctoral Research Fellowships program concluded that there was strong qualitative and quantitative evidence that this program is meeting its broad goal of preparing scientists from those ethnic groups that are significantly underrepresented in tenured U.S. science and engineering professorships and for positions of leadership in industry and government. However, evaluations had not been done for 103 programs, some of which have been operating for many years. Of these, it may have been too soon to expect evaluations for about 32 programs that were initially funded in fiscal year 2002 or later. However, of the remaining 71 programs, 17 have been operating for over 15 years and have not been evaluated. In commenting on a draft of this report, NSF noted that all of its programs undergo evaluation and that it uses a variety of mechanisms for program evaluation. We reported in 2003 that several agencies used various strategies to develop and improve evaluations. Evaluations play an important role in improving program operations and ensuring an efficient use of federal resources.
Although some of the STEM education programs are small in terms of their funding levels, evaluations can be designed to consider the size of the program and the costs associated with measuring outcomes and collecting data. Coordination of federal STEM education programs has been limited. In January 2003 the National Science and Technology Council (NSTC), Committee on Science (COS), established a subcommittee on education and workforce development. The purpose of the subcommittee is to advise and assist COS and NSTC on policies, procedures, and programs relating to STEM education and workforce development. According to its charter, the subcommittee will address education and workforce policy issues and research and development efforts that focus on STEM education issues at all levels, as well as current and projected STEM workforce needs, trends, and issues. The members include representatives from 20 agencies and offices—the 13 agencies that responded to our survey as well as the Departments of Defense, State, and Justice, and the Office of Science and Technology Policy, the Office of Management and Budget, the Domestic Policy Council, and the National Economic Council. The subcommittee has working groups on (1) human capacity in STEM areas, (2) minority programs, (3) effective practices for assessing federal efforts, and (4) issues affecting graduate and postdoctoral researchers. The Human Capacity in STEM working group is focused on three strategic initiatives: defining and assessing national STEM needs, including programs and research projects; identifying and analyzing the available data regarding the STEM workforce; and creating and implementing a comprehensive national response that enhances STEM workforce development. NSTC reported that as of June 2005 the subcommittee had a number of accomplishments and projects under way that related to attracting students to STEM fields. For example, it has (1) surveyed federal agency education programs designed to increase the participation of women and underrepresented minorities in STEM studies; (2) inventoried federal fellowship programs for graduate students and postdoctoral fellows; and (3) coordinated the Excellence in Science, Technology, Engineering, and Mathematics Education Week activities, which provide an opportunity for the nation’s schools to focus on improving mathematics and science education. In addition, the subcommittee is developing a Web site for federal educational resources in STEM fields and a set of principles that agencies would use in setting levels of support for graduate and postdoctoral fellowships and traineeships. While the total numbers of students, graduates, and employees have increased in STEM fields, percentage changes for women, minorities, and international students varied during the periods reviewed. The increase in the percentage of students in STEM fields was greater than the increase in non-STEM fields, but the change in percentage of graduates in STEM fields was less than the percentage change in non-STEM fields. Moreover, employment increased more in STEM fields than in non-STEM fields. Further, changes in the percentages of minority students varied by race or ethnic group, international graduates continued to earn about a third or more of the advanced degrees in three STEM fields, and there was no statistically significant change in the percentage of women employees. Figure 2 summarizes key changes in the students, graduates, and employees in STEM fields. 
Total enrollments of students in STEM fields have increased, and the percentage change was greater for STEM fields than non-STEM fields, but the percentage of students in STEM fields remained about the same. From the 1995-1996 academic year to the 2003-2004 academic year, total enrollments in STEM fields increased 21 percent—more than the 11 percent enrollment increase in non-STEM fields. The number of students enrolled in STEM fields represented 23 percent of all students enrolled during the 2003-2004 academic year, a modest increase from the 21 percent these students constituted in the 1995-1996 academic year. Table 9 summarizes the changes in overall enrollment across all education levels from the 1995-1996 academic year to the 2003-2004 academic year. The increase in the numbers of students in STEM fields is mostly a result of increases at the bachelor's and master's levels. Of the total increase of about 865,000 students in STEM fields, about 740,000 occurred at the bachelor's and master's levels. See table 23 in appendix IV for additional information on the estimated numbers of students in STEM fields in academic years 1995-1996 and 2003-2004. The percentage of students in STEM fields who are women increased from the 1995-1996 academic year to the 2003-2004 academic year, and in the 2003-2004 academic year women students constituted at least 50 percent of the students in 3 STEM fields—biological sciences, psychology, and social sciences. However, in the 2003-2004 academic year, men students continued to outnumber women students in STEM fields, and men constituted an estimated 54 percent of the STEM students overall. In addition, men constituted at least 76 percent of the students enrolled in computer sciences, engineering, and technology. See tables 24 and 25 in appendix IV for additional information on changes in the numbers and percentages of women students in the STEM fields for academic years 1995-1996 and 2003-2004. While the numbers of domestic minority students in STEM fields also increased, changes in the percentages of minority students varied by racial or ethnic group. For example, the number of Hispanic students increased 33 percent from the 1995-1996 academic year to the 2003-2004 academic year. In comparison, the number of African American students increased about 69 percent. African American students increased from 9 to 12 percent of all students in STEM fields, while Asian/Pacific Islander students continued to constitute about 7 percent. Table 10 shows the numbers and percentages of minority students in STEM fields for the 1995-1996 academic year and the 2003-2004 academic year. From the 1995-1996 academic year to the 2003-2004 academic year, the number of international students in STEM fields increased by about 57 percent solely because of an increase at the bachelor's level. The numbers of international students in STEM fields at the master's and doctoral levels declined, with the largest decline occurring at the doctoral level. Table 11 shows the numbers and percentage changes in international students from the 1995-1996 academic year to the 2003-2004 academic year. According to the Institute of International Education, from the 2002-2003 academic year to the 2003-2004 academic year, the number of international students declined for the first time in over 30 years, and that was the second such decline since the 1954-1955 academic year, when the institute began collecting and reporting data on international students.
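The enrollment figures reported above are internally consistent, as a quick arithmetic check shows. The Python sketch below is illustrative only; it uses the rounded percentages and counts cited in this section, so its outputs differ slightly from the report's tabulated values, and the implied base-year enrollment it prints is derived rather than taken from table 9 or table 23.

```python
# Quick consistency check of the STEM enrollment figures reported above.
# All inputs are the rounded values cited in this section.
STEM_SHARE_1995 = 0.21        # STEM students as a share of all students, 1995-96
STEM_GROWTH = 0.21            # reported 21 percent increase in STEM enrollment
NON_STEM_GROWTH = 0.11        # reported 11 percent increase in non-STEM enrollment
STEM_INCREASE = 865_000       # reported increase in STEM students
BACHELORS_MASTERS_INCREASE = 740_000

# Implied 2003-04 STEM share, given the 1995-96 share and the two growth rates.
stem_weight = STEM_SHARE_1995 * (1 + STEM_GROWTH)
non_stem_weight = (1 - STEM_SHARE_1995) * (1 + NON_STEM_GROWTH)
print(f"Implied 2003-04 STEM share: {stem_weight / (stem_weight + non_stem_weight):.1%}")

# Rough base-year enrollment implied by the ~865,000-student increase.
print(f"Implied 1995-96 STEM enrollment: ~{STEM_INCREASE / STEM_GROWTH:,.0f} students")

# Share of the increase that occurred at the bachelor's and master's levels.
print(f"Bachelor's/master's share of the increase: {BACHELORS_MASTERS_INCREASE / STEM_INCREASE:.0%}")
```

The small gap between the implied share of about 22.5 percent and the reported 23 percent reflects rounding in the published figures.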
Moreover, in November 2004, the Council of Graduate Schools (CGS) reported a 6 percent decline in first-time international graduate student enrollment from 2003 to 2004. Following a decade of steady growth, CGS also reported that the number of first-time international students studying in the United States decreased between 6 percent and 10 percent for 3 consecutive years. The number of graduates with degrees in STEM fields increased by 8 percent from the 1994-1995 academic year to the 2002-2003 academic year. However, during this same period the number of graduates with degrees in non-STEM fields increased by 30 percent. From academic year 1994-1995 to academic year 2002-2003, the percentage of graduates with STEM degrees decreased from 32 percent to 28 percent of total graduates. Table 12 provides data on the changes in the numbers and percentages of graduates in STEM and non-STEM fields. Decreases in the numbers of graduates occurred in some STEM fields at each education level, but particularly at the doctoral level. The numbers of graduates with bachelor’s degrees decreased in four of eight STEM fields, the numbers with master’s degrees decreased in five of eight fields, and the numbers with doctoral degrees decreased in six of eight STEM fields. At the doctoral level, these declines ranged from 14 percent in mathematics/computer sciences to 74 percent in technology. Figure 3 shows the percentage change in graduates with degrees in STEM fields from the 1994-1995 academic year to the 2002-2003 academic year. From the 1994-1995 academic year to the 2002-2003 academic year, the total number of women graduates increased in four of the eight fields, and the percentages of women earning degrees in STEM fields increased in six of the eight fields at all three educational levels. Conversely, the total number of men graduates decreased, and the percentages of men graduates declined in six of the eight fields at all three levels from the 1994-1995 academic year to the 2002-2003 academic year. However, men continued to constitute over 50 percent of the graduates in five of eight fields at all three education levels. Table 13 summarizes the numbers of graduates by gender, level, and field. Table 26 in appendix IV provides additional data on the percentages of men and women graduates by STEM field and education level. The total numbers of domestic minority graduates in STEM fields increased, although the percentage of minority graduates with STEM degrees at the master’s or doctoral level did not change from the 1994-1995 academic year to the 2002-2003 academic year. For example, while the number of Native American graduates increased 37 percent, Native American graduates remained less than 1 percent of all STEM graduates at the master’s and doctoral levels. Table 14 shows the percentages and numbers of domestic minority graduates for the 1994-1995 academic year and the 2002-2003 academic year. International students earned about one-third or more of the degrees at both the master’s and doctoral levels in several fields in the 1994-1995 and the 2002-2003 academic years. For example, in academic year 2002-2003, international students earned between 45 percent and 57 percent of all degrees in engineering and mathematics/computer sciences at the master’s and doctoral levels. However, at each level there were changes in the numbers and percentages of international graduates. 
At the master's level, the total number of international graduates increased by about 31 percent from the 1994-1995 academic year to the 2002-2003 academic year, while the number of international graduates decreased in four of the fields and the percentages of international graduates declined in three fields. At the doctoral level, the total number of international graduates decreased by 12 percent, while the percentage of international graduates increased or remained the same in all fields. Table 15 shows the numbers and percentages of international graduates in STEM fields. While the total number of STEM employees increased, this increase varied across STEM fields. Employment increased by 23 percent in STEM fields as compared to 17 percent in non-STEM fields from calendar year 1994 to calendar year 2003. Employment increased by 78 percent in the mathematics/computer sciences field and by 20 percent in the science field over this period. The changes in the number of employees in the engineering and technology fields were not statistically significant. Employment estimates from 1994 to 2003 in the STEM fields are shown in figure 4. From calendar years 1994 to 2003, the estimated number of women employees in STEM fields increased from about 2.7 million to about 3.5 million. Overall, there was not a statistically significant change in the percentage of women employees in the STEM fields. Table 16 shows the numbers and percentages of men and women employed in the STEM fields for calendar years 1994 and 2003. In addition, the estimated number of minorities employed in the STEM fields, as well as the percentage of total STEM employees they constituted, increased, but African American and Hispanic employees remain underrepresented relative to their percentages in the civilian labor force. Between 1994 and 2003, the estimated number of African American employees increased by about 44 percent, and the estimated number of Hispanic employees increased by 90 percent, as did the estimated numbers of other minorities employed in STEM fields. In calendar year 2003, African Americans comprised about 8.7 percent of STEM employees, compared to about 10.7 percent of the CLF. Similarly, Hispanic employees comprised about 10 percent of STEM employees in calendar year 2003, compared to about 12.6 percent of the CLF. Table 17 shows the estimated percentages of STEM employees by selected racial or ethnic groups in 1994 and 2003. International employees have filled hundreds of thousands of positions, many in STEM fields, through the H-1B visa program. However, the numbers and types of occupations have changed over the years. We reported that while the limit for the H-1B program was 115,000 in 1999, the number of visas approved exceeded the limit by more than 20,000 because of problems with the system used to track the data. Available data show that in 1999, the majority of the approved occupations were in STEM fields. Specifically, an estimated 60 percent of the positions approved in fiscal year 1999 were related to information technology and 5 percent were for electrical/electronics engineering. By 2002, the limit for the H-1B program had increased to 195,000, but the number approved, 79,000, did not reach this limit. In 2003, we reported that the number of approved H-1B petitions in certain occupations had declined. For example, the number of approvals for systems analysis/programming positions declined by 106,671 from 2001 to 2002.
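One way to read the 2003 workforce percentages above is as a gap between each group's share of STEM employment and its share of the civilian labor force. The following is a minimal sketch using only the percentages reported in this section.

```python
# Representation gap in calendar year 2003: each group's share of STEM
# employment minus its share of the civilian labor force (CLF), using the
# percentages reported above. A negative gap indicates underrepresentation.
shares_2003 = {
    # group: (share of STEM employees, share of the CLF), in percent
    "African American": (8.7, 10.7),
    "Hispanic": (10.0, 12.6),
}

for group, (stem_share, clf_share) in shares_2003.items():
    gap = stem_share - clf_share
    print(f"{group:17s} STEM {stem_share:4.1f}% vs CLF {clf_share:4.1f}% "
          f"-> gap {gap:+.1f} percentage points")
```

By this measure, both groups trailed their civilian labor force shares by roughly 2 to 3 percentage points in 2003.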
Although the estimated total number of employees in STEM fields increased from 1994 to 2003, according to an NSF report, many with STEM degrees were not employed in these occupations. In 2004, NSF reported that about 67 percent of employees with degrees in science or engineering were employed in fields somewhat or not at all related to their degree. Specifically, 70 percent of employees with bachelor’s degrees, 51 percent with master’s degrees, and 54 percent with doctoral degrees reported that their employment was somewhat or not at all related to their degree in science or engineering. In addition to increases in the numbers of employees in STEM fields, inflation-adjusted median annual wages and salaries increased in all four STEM fields over the 10-year period (1994 to 2003). These increases ranged from 6 percent in science to 15 percent in engineering. Figure 5 shows trends in median annual wages and salaries for STEM fields. University officials, researchers, and students identified several factors that influenced students’ decisions about pursuing STEM degrees and occupations, and they suggested some ways to encourage more participation in STEM fields. Specifically, university officials said and researchers reported that the quality of teachers in kindergarten through 12th grades and the levels of mathematics and science courses completed during high school affected students’ success in and decisions about STEM fields. In addition, several sources noted that mentoring played a key role in the participation of women and minorities in STEM fields. Current students from five universities we visited generally agreed with these observations, and several said that having good mathematics and science instruction was important to their overall educational success. International students’ decisions about participating in STEM education and occupations were affected by opportunities outside the United States and the visa process. To encourage more student participation in the STEM fields, university officials, researchers, and others have made several suggestions, and four were made repeatedly. These suggestions focused on teacher quality, high school students’ math and science preparation, outreach activities, and the federal role in STEM education. University officials frequently cited teacher quality as a key factor that affected domestic students’ interest in and decisions about pursuing STEM degrees and occupations. Officials at all eight universities we visited expressed the view that a student’s experience from kindergarten through the 12th grades played a large role in influencing whether the student pursued a STEM degree. Officials at one university we visited said that students pursuing STEM degrees have associated their interests with teachers who taught them good skills in mathematics or excited them about science. On the other hand, officials at many of the universities we visited told us that some teachers were unqualified and unable to impart the subject matter, causing students to lose interest in mathematics and science. For example, officials at one university we visited said that some elementary and secondary teachers do not have sufficient training to effectively teach students in the STEM fields and that this has an adverse effect on what students learn in these fields and reduces the interest and enthusiasm students express in pursuing coursework in high school, degree programs in college, or careers in these areas. 
Teacher quality issues, in general, have been cited in past reports by Education. In 2002, Education reported that in the 1999-2000 school year, 14 to 22 percent of middle-grade students taking English, mathematics, and science were in classes led by teachers without a major, minor, or certification in these subjects—commonly referred to as "out-of-field" teachers. Also, approximately 30 to 40 percent of the middle-grade students in biology/life science, physical science, or English as a second language/bilingual education classes had teachers lacking these credentials. At the high school level, 17 percent of students enrolled in physics and 36 percent of those enrolled in geology/earth/space science were in classes instructed by out-of-field teachers. The percentages of students taught by out-of-field teachers were significantly higher when the criteria used were teacher certification and a major in the subject taught. For example, 45 percent of the high school students enrolled in biology/life science and approximately 30 percent of those enrolled in mathematics, English, and social science classes had out-of-field teachers. During the 2002-2003 school year, Education reported that the number and distribution of teachers on waivers—which allowed prospective teachers in classrooms while they completed their formal training—were problematic. Also, states reported that the problem of underprepared teachers was worse on average in districts that serve large proportions of high-poverty children—the percentage of teachers on waivers was larger in high-poverty school districts than in all other school districts in 39 states. Moreover, in 2004, Education reported that 48 of the 50 states granted waivers. In addition to teacher quality, students' high school preparation in mathematics and science was cited by university officials and others as affecting students' success in college-level courses and their decisions about pursuing STEM degrees and occupations. University officials at six of the eight universities we visited cited students' ability to opt out of mathematics and science courses during high school as a factor that influenced whether they would participate and succeed in the STEM fields during undergraduate and graduate school. University officials said, for example, that because many students had not taken higher-level mathematics and science courses such as calculus and physics in high school, they were immediately behind other students who were better prepared. In July 2005, on the basis of findings from the 2004 National Assessment of Educational Progress, the National Center for Education Statistics reported that 17 percent of 17-year-olds reported that they had taken calculus, the highest percentage reported in any assessment year. In a study that solicited the views of several hundred students who had left the STEM fields, researchers found that the effects of inadequate high school preparation contributed to college students' decisions to leave the science fields. These researchers found that approximately 40 percent of those college students who left the science fields reported some problems related to high school science preparation. The underpreparation was often linked to problems such as not understanding calculus, lack of laboratory experience or exposure to computers, and no introduction to theoretical material or analytic modes of thought.
Further, 12 current students we interviewed said they were not adequately prepared for college mathematics or science. For example, one student stated that her high school courses had been limited because she attended an all-girls school where the curriculum catered to students who were not interested in STEM, and so it had been difficult to take the courses that interested her. Several other factors were mentioned during our interviews with university officials, students, and others as influencing decisions about participation in STEM fields. These factors included relatively low pay in STEM fields, additional tuition costs to obtain STEM degrees, lack of commitment on the part of some students to meet the rigorous academic demands, and the inability of some professors in STEM fields to effectively impart their knowledge to students in the classroom. For example, officials from five universities said that low pay in STEM fields relative to other fields such as law and business dissuaded students from pursuing STEM degrees in some areas. Also, in a study that solicited the views of college students who left the STEM fields as well as those who continued to pursue STEM degrees, researchers found that students experienced greater financial difficulties in obtaining their degrees because of the extra time needed to obtain degrees in certain STEM fields. Researchers also noted that poor teaching at the university level was the most common complaint among students who left as well as those who remained in STEM fields. Students reported that faculty do not like to teach, do not value teaching as a professional activity, and therefore lack any incentive to learn to teach effectively. Finally, 11 of the students we interviewed commented on the need for professors in STEM fields to alter their methods and to show more interest in teaching to retain students' attention. University officials and students said that mentoring is important for all students but plays a vital role in the academic experiences of women and minorities in the STEM fields. Officials at seven of the eight universities discussed the important role that mentors play, especially for women and minorities in STEM fields. For example, one professor said that mentors helped students by advising them on the best track to follow for obtaining their degrees and achieving professional goals. Also, four students we interviewed—three women and one man—expressed the importance of mentors. Specifically, while all four students identified mentoring as critical to academic success in the STEM fields, the two students who had mentors expressed their satisfaction, while the other two said that it would have been helpful to have had someone who could have been a mentor or role model. Studies have also reported that mentors play a significant role in the success of women and minorities in the STEM fields. In 2004, some of the women students and faculty with whom we talked reported that a strong mentor was a crucial part of the academic training of some of the women participating in the sciences and that some women had pursued advanced degrees because of the encouragement and support of mentors. In September 2000, a congressional commission reported that women were adversely affected throughout the STEM education pipeline and career path by a lack of role models and mentors.
For example, the report found that girls' rejection of mathematics and science may be partially driven by teachers, parents, and peers when they subtly, and not so subtly, steer girls away from the informal technical pastimes (such as working on cars, fixing bicycles, and changing hardware on computers) and science activities (such as science fairs and clubs) that too often were still thought of as the province of boys. In addition, the commission reported that a greater proportion of women switched out of STEM majors than men, relative to their representation in the STEM major population. Reasons cited for the higher attrition rate among women students included lack of role models, distaste for the competitive nature of science and engineering education, and inability to obtain adequate academic guidance or advice. Further, according to the report, women's retention and graduation in STEM graduate programs were affected by their interaction with faculty, integration into the department (versus isolation), and other factors, including whether there were role models, mentors, and women faculty. Officials at seven of the eight universities visited, along with education policy experts, told us that competition from other countries for top international students, as well as educational or work opportunities elsewhere, affected international students' decisions about studying in the United States. They told us that other countries, including Canada, Australia, New Zealand, and the United Kingdom, had seized the opportunity since September 11 to compete against the United States for international students who were among the best students in the world, especially in the STEM fields. Also, university officials told us that students from several countries, including China and India, were being recruited to attend universities and get jobs in their own countries. In addition, education organizations and associations have reported that global competition for the best science and engineering students and scholars is under way. One organization, NAFSA: Association of International Educators, reported that the international student market has become highly competitive, and the United States is not competing as well as other countries. According to university officials, international students' decisions about pursuing STEM degrees and occupations in the United States were also influenced by the perceived unwelcoming attitude of Americans and the visa process. Officials from three of the universities said that the perceived unwelcoming attitude of Americans had affected the recruitment of international students to the United States. Also, officials at six of the eight universities visited expressed their concern about the impact of the tightened visa procedures and/or increased security measures since September 11 on international graduate school enrollments. For example, officials at one university stated that because of the time needed to process visas, a few students had missed their class start dates. Officials from one university told us that they were being more proactive in helping new international students navigate the visa system, to the extent possible. While some university officials acknowledged that visa processing had significantly improved, since 2003 several education associations have requested further changes in U.S. visa policies because of the lengthy procedures and time needed to obtain approval to enter the country.
We have reported on various aspects of the visa process, made several recommendations, and noted that some improvements have been made. In October 2002 we cited the need for a clear policy on how to balance national security concerns with the desire to facilitate legitimate travel when issuing visas and we made several recommendations to help improve the visa process. In 2003, we reported that the Departments of State, Homeland Security, and Justice could more effectively manage the visa function if they had clear and comprehensive policies and procedures and increased agency coordination and information sharing. In February 2004 and February 2005, we reported on the State Department’s efforts to improve the program for issuing visas to international science students and scholars. In 2004 we found that the time to adjudicate a visa depended largely on whether an applicant had to undergo a security check known as Visas Mantis, which is designed to protect against sensitive technology transfers. Based on a random sample of Visas Mantis cases for science students and scholars, it took State an average of 67 days to complete the process. In 2005, we reported a significant decline in Visas Mantis processing times and in the number of cases pending more than 60 days. We also reported that, in some cases, science students and scholars can obtain a visa within 24 hours. We have also issued several reports on SEVIS operations. In June 2004 we noted that when SEVIS began operating, significant problems were reported. For example, colleges and universities and exchange programs had trouble gaining access to the system, and when access was obtained, these users’ sessions would “time out” before they could complete their tasks. In that report we also noted that SEVIS performance had improved, but that several key system performance requirements were not being measured. In March 2005, we reported that the Department of Homeland Security (DHS) had taken steps to address our recommendations and that educational organizations generally agreed that SEVIS performance had continued to improve. However, educational organizations continued to cite problems, which they believe created hardships for students and exchange visitors. To increase the number of students entering STEM fields, officials from seven universities and others stated that teacher quality needs to improve. Officials of one university said that kindergarten through 12th grade classrooms need teachers who are knowledgeable in the mathematics and science content areas. As previously noted, Education has reported on the extent to which classes have been taught by teachers with little or no content knowledge in the STEM fields. The Congressional Commission on the Advancement of Women and Minorities reported that teacher effectiveness is the most important element in a good education. The commission also suggested that boosting teacher effectiveness can do more to improve education than any other single factor. States are taking action to meet NCLBA’s requirement of having all teachers of core academic subjects be highly qualified by the end of the 2005-2006 school year. University officials and some students suggested that better preparation and mandatory courses in mathematics and science were needed for students during their kindergarten through 12th grade school years. 
Officials from five universities suggested that mandatory mathematics and science courses, especially in high school, may lead to increased student interest and preparation in the STEM fields. With a greater interest and depth of knowledge, students would be better prepared and more inclined to pursue STEM degrees in college. Further, nearly half of the students who replied to this question suggested that students needed additional mathematics and science training prior to college. However, adding mathematics and science classes has resource implications, since more teachers in these subjects would be needed. Also this change could require curriculum policy changes that would take time to implement. More outreach, especially to women and minorities from kindergarten through the 12th grade, was suggested by university officials, students, and other organizations. Officials from six of the universities we visited suggested that increased outreach activities are needed to help create more interest in mathematics and science for younger students. For example, at one university we visited, officials told us that through inviting students to their campuses or visiting local schools, they have provided some students with opportunities to engage in science laboratories and hands-on activities that foster interest and excitement for students and can make these fields more relevant in their lives. Officials from another university told us that these experiences were especially important for women and minorities who might not have otherwise had these opportunities. The current students we interviewed also suggested more outreach activities. Specifically, two students said that outreach was needed to further stimulate students’ interest in the STEM fields. One organization, Building Engineering and Science Talent (BEST), suggested that research universities increase their presence in prekindergarten through 12th grade mathematics and science education in order to strengthen domestic students’ interests and abilities. BEST reported that one model producing results entailed universities adopting students from low-income school districts from 7th through 12th grades and providing them advanced instruction in algebra, chemistry, physics, and trigonometry. However, officials at one university told us that because of limited resources, their efforts were constrained and only a few students would benefit from this type of outreach. Furthermore, university officials from the eight schools and other education organizations made suggestions regarding the role of the federal government. University officials suggested that the federal government could enhance its role in STEM education by providing more effective leadership through developing and implementing a national agenda for STEM education and increasing federal funding for academic research. Officials at six universities suggested that the federal government undertake a new initiative modeled after the National Defense Education Act of 1958, enacted in response to the former Soviet Union’s achievement in its space program, which provided new funding for mathematics and science education and training at all education levels. In June 2005, CGS called for a renewed commitment to graduate education by the federal government through actions such as providing funds to support students trained at the doctoral level in the sciences, technology, engineering, and mathematics; expanding U.S. 
citizen participation in doctoral study in selected fields through graduate support awarded competitively to universities across the country; requiring recruitment, outreach, and mentoring activities that promote greater participation and success, especially for underrepresented groups; and fostering interdisciplinary research preparation. In August 2003, the National Science Board recommended that the federal government direct substantial new support to students and institutions in order to improve success in science and engineering studies by domestic undergraduate students from all demographic groups. According to this report, such support could include scholarships and other forms of financial assistance to students, incentives to institutions to expand and improve the quality of their science and engineering programs in areas in which degree attainment is insufficient, financial support to community colleges to increase the success of students in transferring to 4-year science and engineering programs, and expanded funding for programs that best succeed in graduating underrepresented minorities and women in science and engineering. BEST also suggested that the federal government allocate additional resources to expand the mathematics and science education opportunities for underrepresented groups. However, little is known about how well federal resources have been used in the past. Changes that would require additional federal funds would likely have an impact on other federal programs, given the nation’s limited resources and growing fiscal imbalance, and changing the federal role could take several years. While the total numbers of STEM graduates have increased, some fields have experienced declines, especially at the master’s and doctoral levels. Given the trends in the numbers and percentages of students pursuing STEM degrees, particularly advanced degrees, and recent developments that have influenced international students’ decisions about pursuing degrees in the United States, it is uncertain whether the number of STEM graduates will be sufficient to meet future academic and employment needs and help the country maintain its technological competitive advantage. Moreover, it is too early to tell if the declines in international graduate student enrollments will continue in the future. In terms of employment, despite some gains, the percentage of women in the STEM workforce has not changed significantly, minority employees remain underrepresented, and many with degrees in STEM fields are not employed in STEM occupations. To help improve the trends in the numbers of students, graduates, and employees in STEM fields, university officials and others made several suggestions, such as increasing the federal commitment to STEM education programs. However, before making changes, it is important to know the extent to which existing STEM education programs are appropriately targeted and making the best use of available federal resources. Additionally, in an era of limited financial resources and growing federal deficits, information about the effectiveness of these programs can help guide policy makers and program managers. We received written comments on a draft of this report from Commerce, the Department of Health and Human Services (HHS), NSF, and NSTC. These comments are reprinted in appendixes VII, VIII, IX, and X, respectively. 
We also received technical comments from the Departments of Commerce, Health and Human Services, Homeland Security, Labor, and Transportation, and from the Environmental Protection Agency and the National Aeronautics and Space Administration, which we incorporated when appropriate. In commenting on a draft of this report, Commerce, HHS, and NSTC commended GAO for this work. Commerce explicitly concurred with several findings and agreed with our overall conclusion. However, Commerce suggested that we revise the conclusion to point out that despite overall increases in STEM students, the numbers of graduates in certain fields have declined. We modified the concluding observations to make this point. HHS agreed with our conclusion that it is important to evaluate ongoing programs to determine the extent to which they are achieving their desired results. The comments from NSTC cited improvements made to help ensure that international students, exchange visitors, and scientists are able to apply for and receive visas in a timely manner. We did not make any changes to the report since we had cited another GAO product that discussed such improvements in the visa process. NSF commented on several of our findings. NSF stated that our finding on program evaluations may be misleading, largely because the type of information GAO requested and accepted from agencies was limited to program-level evaluations and did not include evaluations of individual underlying projects. NSF suggested that we include information on the range of approaches used to assure program effectiveness. Our finding is based on agency officials' responses to a survey question that did not limit or stipulate the types of evaluations that could have been included. Nonetheless, we modified the report to acknowledge that NSF uses various approaches to evaluate its programs. NSF criticized the methodology we used to support our finding on the factors that influence decisions about pursuing STEM fields and suggested that we make it clearer in the body of the report that the findings are based on interviews with educators and administrators from 8 colleges and universities and responses from 31 students. Also, NSF suggested that we improve the report by including corroborating information from reports and studies. Our finding was not limited to interviews at the 8 colleges and universities and responses from 31 current students but was also based on interviews with numerous representatives and policy experts from various organizations as well as findings from research and reports—which are cited in the body of the report. Using this approach, we were able to corroborate the testimonial evidence with data from reports and research as well as to determine whether information in the reports and research remained accurate by seeking the views of those currently teaching or studying in STEM fields. As NSF noted, this approach yielded reasonable observations. Additional information about our methodology is listed in appendix I, and we added a bibliography that identifies the reports and research used during the course of this review. NSF also commented that the report mentions the NSTC efforts for interagency collaboration but does not mention other collaboration efforts such as the Federal Interagency Committee on Education and the Federal Interagency Coordinating Council. NSF also pointed out that interagency collaboration occurs at the program level. We did not modify the report in response to this comment.
In conducting our work, we determined that the NSTC effort was the primary mechanism for interagency collaboration focused on STEM programs. The coordinating groups cited by NSF are focused on different issues. The Federal Interagency Committee on Education was established to coordinate the federal programs, policies, and practices affecting education broadly, and the Federal Interagency Coordinating Council was established to minimize duplication of programs and activities relating to children with disabilities. In addition, NSF provided information to clarify examples related to their programs that we cited in the report, stated that some data categories were not clear, and commented on the graduate level enrollment data we used in the report. NSF pointed out that while its program called Opportunities for Enhancing Diversity in the Geosciences is designed to increase participation by minorities, it does not limit eligibility to minorities. Also, NSF noted that while the draft report correctly indicated that students receiving scholarships or fellowships from NSF must be U.S. citizens or permanent residents, the reason given for limiting participation in these programs in the draft report was not accurate. According to NSF, these restrictions are considered to be an effective strategy to support its goal of creating a diverse, competitive and globally engaged U.S. workforce of scientists, engineers, technologists and well prepared citizens. We revised the report to reflect these changes. Further, NSF commented that the data categories were not clear, particularly the technology degrees and occupations, and that the data did not include associate degrees. We added information that lists all of the occupations included in the analysis, and we added footnotes to clarify which data included associate degrees and which ones did not. In addition, NSF commented that the graduate level enrollment data for international students based on NPSAS data are questionable in comparison with other available data and that this may be because the NPSAS data include a relatively small sample for graduate education. We considered using NPSAS and other data but decided to use the NPSAS data for two reasons: NPSAS data were more comprehensive and more current. Specifically, the NPSAS data were available through the 2003-2004 academic year and included numbers and characteristics of students enrolled for all degree fields—STEM and non-STEM—for all education levels, and citizenship information. Copies of this report are being sent to the Secretaries of Agriculture, Commerce, Education, Energy, Health and Human Services, Interior, Homeland Security, Labor, and Transportation; the Administrators for the Environmental Protection Agency and the National Aeronautics and Space Administration; and the Directors of the National Science Foundation and the National Science and Technology Council; appropriate congressional committees; and interested parties. Copies will be made available to others upon request. The report is also available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-7215 or ashbyc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
The objectives of our study were to determine (1) the number of federal civilian education programs funded in fiscal year 2004 that were specifically designed to increase the number of students and graduates pursuing science, technology, engineering, and mathematics (STEM) degrees and occupations, or improve educational programs in STEM fields, and what agencies report about their effectiveness; (2) how the numbers, percentages, and characteristics of students, graduates, and employees in STEM fields have changed over the years; and (3) factors cited by educators and others as influencing people’s decisions about pursuing STEM degrees and occupations, and suggestions to encourage greater participation in STEM fields. In conducting our review, we used multiple methodologies. We (1) conducted a survey of federal departments and agencies that sponsored education programs specifically designed to increase the number of students and graduates pursuing STEM degrees and occupations or improve educational programs in STEM fields; (2) obtained and analyzed data, including the most recent data available, on students, graduates, and employees in STEM fields and occupations; (3) visited eight colleges and universities; (4) reviewed reports and studies; and (5) interviewed agency officials, representatives and policy experts from various organizations, and current students. We conducted our work between October 2004 and October 2005 in accordance with generally accepted government auditing standards. To provide Congress with a better understanding of what programs federal agencies were supporting to increase the nation’s pool of scientists, technologists, engineers, and mathematicians, we designed a survey to determine (1) the number of federal education programs (prekindergarten through postdoctorate) designed to increase the quantity of students and graduates pursuing STEM degrees and occupations or improve the educational programs in STEM fields and (2) what agencies reported about the effectiveness of these programs. The survey asked the officials to describe the goals, target population, and funding levels for fiscal years 2003, 2004, and 2005 of such programs. In addition, the officials were asked when the programs began and if the programs had been or were being evaluated. We identified the agencies likely to support STEM education programs by reviewing the Catalog of Federal Domestic Assistance and the Department of Education’s Eisenhower National Clearinghouse, Guidebook of Federal Resources for K-12 Mathematics and Science, 2004-05. Using these resources, we identified 15 agencies with STEM education programs. The survey was conducted via e-mail using an ActiveX enabled MSWord attachment. A contact point was designated for each agency, and questionnaires were sent to that individual. One questionnaire was completed for each program the agency sponsored. Agency officials were asked to provide confirming documentation for their responses whenever possible. The questionnaire was forwarded to agencies on February 15, 2005, and responses were received through early May 2005. We received 244 completed surveys and determined that 207 of them met the criteria for STEM programs. The following agencies participated in our survey: the Departments of Agriculture, Commerce, Education, Energy, Homeland Security, Interior, Labor, and Transportation. 
In addition, the Health Resources and Services Administration, Indian Health Service, and National Institutes of Health, all part of Health and Human Services, took part in the survey. Also participating were the U.S. Environmental Protection Agency; the National Aeronautics and Space Administration; and the National Science Foundation. Labor’s programs did not meet our criteria for 2004 and the Department of Defense (DOD) did not submit a survey. According to DOD officials, DOD needed 3 months to complete the survey and therefore could not provide responses within the time frames of our work. We obtained varied amounts of documentation from 13 civilian agencies for the 207 STEM education programs funded in 2004 and information about the effectiveness of some programs. Because we administered the survey to all of the known federal agencies sponsoring STEM education programs, our results are not subject to sampling error. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents in answering a question, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in the development of the survey, the collection of data, and the editing and analysis of data for the purpose of minimizing such nonsampling errors. To reduce nonsampling error, the questionnaire was reviewed by survey specialists and pretested in person with three officials from agencies familiar with STEM education programs to develop a questionnaire that was relevant, easy to comprehend, unambiguous, and unbiased. We made changes to the content and format of the questionnaire based on the specialists’ reviews and the results of the pretests. To further reduce nonsampling error, data for this study returned electronically were entered directly into the instrument by the respondents and converted into a database for analysis. Completed questionnaires returned as hard copy were keypunched, and a sample of these records was verified by comparing them with their corresponding questionnaires, and any errors were corrected. When the data were analyzed, a second, independent analyst checked all computer programs. Finally, to assess the reliability of key data obtained from our survey about some of the programs, we compared the responses with the documentation provided, or we independently researched the information from other publicly available sources. To determine how the numbers and characteristics of students, graduates, and employees in STEM fields have changed, we obtained and analyzed data from the Department of Education (Education) and the Department of Labor. Specifically, we analyzed the National Postsecondary Student Aid Study (NPSAS) data and the Integrated Postsecondary Education Data System (IPEDS) data from the Department of Education’s National Center for Education Statistics (NCES), and we analyzed data from the Department of Labor’s Bureau of Labor Statistics’ (BLS) Current Population Survey (CPS). Based on National Science Foundation’s categorization of STEM fields, we developed STEM fields of study from NPSAS and IPEDS, and identified occupations from the CPS. Using these data sources, we developed nine STEM fields for students, eight STEM fields for graduates, and four broad STEM fields for occupations. 
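As a minimal illustration of this grouping step, the Python sketch below maps occupation codes to broad STEM groups and tallies weighted employment. The codes, group labels, and weights shown are hypothetical placeholders; the actual classification codes and occupations used in the analysis are those listed later in this appendix (tables 19 and 20).

    # Minimal sketch of grouping occupation records into broad STEM fields.
    # The occupation codes and person weights below are hypothetical
    # placeholders; the actual crosswalk is listed in tables 19 and 20.
    STEM_GROUPS = {
        "1005": "computer and mathematical",   # hypothetical code
        "1320": "engineering",                 # hypothetical code
        "1650": "life and physical sciences",  # hypothetical code
        "3255": "registered nurses",           # hypothetical code
    }

    # Each record is (occupation_code, survey person weight).
    records = [("1005", 1520.4), ("1320", 980.7), ("9999", 1100.0), ("1005", 1433.2)]

    def weighted_stem_employment(records):
        """Sum person weights by broad STEM group; codes not in the crosswalk are treated as non-STEM."""
        totals = {}
        for code, weight in records:
            group = STEM_GROUPS.get(code)
            if group is not None:
                totals[group] = totals.get(group, 0.0) + weight
        return totals

    print(weighted_stem_employment(records))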
For our data reliability assessment, we reviewed agency documentation on the data sets and conducted electronic tests of the files. On the basis of these reviews, we determined that the required data elements from NPSAS, IPEDS, and CPS were sufficiently reliable for our purposes. These data sources, their types, time spans, and years analyzed are shown in table 18. NPSAS is a comprehensive nationwide study designed to determine how students and their families pay for postsecondary education, and to describe some demographic and other characteristics of those enrolled. The study is based on a nationally representative sample of students in postsecondary education institutions, including undergraduate, graduate, and first-professional students. NPSAS has been conducted every few years since the 1986-1987 academic year. For this report, we analyzed the results of the NPSAS survey for the 1995-1996 academic year and the 2003-2004 academic year to compare student enrollment and demographic characteristics between these two periods for the nine STEM fields and non-STEM fields. Because the NPSAS sample is a probability sample of students, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, confidence in the precision of the particular sample's results is expressed as a 95 percent confidence interval (for example, plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. NPSAS estimates used in this report and the upper and lower bounds of the 95 percent confidence intervals for each estimate relied on in this report are presented in appendix V. IPEDS is a single, comprehensive system designed to encompass all institutions and educational organizations whose primary purpose is to provide postsecondary education. IPEDS is built around a series of interrelated surveys to collect institution-level data in such areas as enrollments, program completions, faculty, staff, and finances. For this report, we analyzed the results of IPEDS data for the 1994-1995 academic year and the 2002-2003 academic year to compare the numbers and characteristics of graduates with degrees in eight STEM fields and non-STEM fields. To analyze changes in employees in STEM and non-STEM fields, we obtained employment estimates from BLS's Current Population Survey March supplement for 1995 through 2004 (calendar years 1994 through 2003). The CPS is a monthly survey of households conducted by the U.S. Census Bureau (Census) for BLS. The CPS provides a comprehensive body of information on the employment and unemployment experience of the nation's population, classified by age, sex, race, and a variety of other characteristics. A more complete description of the survey, including sample design, estimation, and other methodology, can be found in the CPS documentation prepared by Census and BLS. This March supplement (the Annual Demographic Supplement) is specifically designed to estimate family characteristics, including income from all sources and occupation and industry classification of the job held longest during the previous year.
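The 95 percent confidence intervals described in this appendix follow the standard design-based form for a survey estimate, sketched below as a general illustration rather than as the agencies' exact variance formulas; the plus or minus 4 percentage point figure above is only an example.

    \hat{p} \;\pm\; 1.96 \times \widehat{\mathrm{SE}}(\hat{p})

Under this form, an estimated share of 23 percent with an estimated standard error of about 2 percentage points would be reported with a 95 percent confidence interval of roughly 19 to 27 percent.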
The March supplement is conducted in March each year because respondents are believed to report income more accurately in the month before the federal income tax filing deadline than at any other point during the year. We used the CPS data to produce estimates on (1) four STEM fields, (2) men and women, (3) two separate minority groups (Black or African American, and Hispanic or Latino origin), and (4) median annual wages and salaries. The measures of median annual wages and salaries could include bonuses, but do not include noncash benefits such as health insurance or pensions. The salary reported in the March CPS was for the position held longest during the previous year, as reported by the worker (or a knowledgeable member of the household). Tables 19 and 20 list the classification codes and occupations included in our analysis of CPS data over a 10-year period (1994-2003). In developing the STEM groups, we considered the occupational requirements and educational attainment of individuals in certain occupations. We also excluded doctors and other health care providers except registered nurses. During the period of review, some codes and occupation titles were changed; we worked with BLS officials to identify variations in codes and occupations and accounted for these changes where appropriate and possible. Because the CPS is a probability sample based on random selections, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, confidence in the precision of the particular sample's results is expressed as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. We use the CPS general variance methodology to estimate this sampling error and report it as confidence intervals. Percentage estimates we produce from the CPS data have 95 percent confidence intervals of plus or minus 6 percentage points or less. Estimates other than percentages have 95 percent confidence intervals of no more than plus or minus 10 percent of the estimate itself, unless otherwise noted. Consistent with the CPS documentation guidelines, we do not produce estimates based on the March supplement data for populations of less than 75,000. GAO's internal control procedures provide reasonable assurance that our data analyses are appropriate for the purposes for which we are using them. These procedures include, but are not limited to, having skilled staff perform the analyses, supervisory review by senior analysts, and indexing and referencing activities (confirming that the analyses are supported by the underlying audit documentation). We interviewed administrators and professors during site visits to eight colleges and universities—the University of California at Los Angeles and the University of Southern California in California; Clark Atlanta University, Georgia Institute of Technology, and Spelman College in Georgia; the University of Illinois; Purdue University in Indiana; and Pennsylvania State University.
These colleges and universities were selected based on the following factors: large numbers of domestic and international students in STEM fields, a mix of public and private institutions, number of doctoral degrees conferred, and some geographic diversity. We also selected three minority-serving colleges and universities, one of which serves only women students. Clark Atlanta University and Spelman College were selected, in part, because of their partnerships with the College of Engineering at the Georgia Institute of Technology. During these visits, we asked the university officials about factors that influenced whether people pursue a STEM education or occupations and suggestions for addressing those factors that may influence participation. For example, we asked university officials to identify (1) issues related to the education pipeline; (2) steps taken by their university to alleviate some of the conditions that may discourage student participation in STEM areas; and (3) the federal role, if any, in attracting and retaining domestic students in STEM fields. We also obtained documents on programs they sponsored to help support STEM students and graduates. We reviewed several articles, reports, and books related to trends in STEM enrollment and factors that have an effect on people's decisions to pursue STEM fields. For two studies, we evaluated the methodological soundness using common social science and statistical practices. We examined each study's methodology, including its limitations, data sources, analyses, and conclusions. Talking about Leaving: Why Undergraduates Leave the Sciences, by Elaine Seymour and Nancy Hewitt. This study used interviews and focus groups/group interviews at selected universities to identify self-reported reasons for changing majors from science, mathematics, or engineering. The study had four primary objectives: (1) to identify sources of qualitative differences in educational experiences of science, mathematics, and engineering students at higher educational institutions of different types; (2) to identify differences in structure, culture, and pedagogy of science, mathematics, and engineering departments and the impact on student retention; (3) to compare and contrast causes of science, mathematics, and engineering students' attrition by race/ethnicity and gender; and (4) to estimate the relative importance of factors found to contribute to science, mathematics, and engineering students' attrition. The researchers selected seven universities to represent the types of colleges and universities that supply most of the nation's scientists, mathematicians, and engineers. The types of institutions were selected to test whether there are differences in educational experiences, culture and pedagogy, race/ethnicity and gender attrition, and reasons for attrition by type of institution. Because the selection of students was not strictly random and because there is no documentation that the data were weighted to reflect the proportions of types of students selected, it is not possible to determine confidence intervals. Thus it is not possible to say which differences are statistically significant. The findings are now more than a decade old and thus might not reflect current pedagogy and other factors about the educational experience, students, or the socioeconomic environment. It is important to note that the quantitative results of this study are based on the views of one constituency or stakeholder—students.
Views of faculty, school administrators, graduates, professional associations, and employers are not included. NCES's Qualifications of the Public School Teacher Workforce: Prevalence of Out-of-Field Teaching, 1987-1988 to 1999-2000 report. This study is an analysis based upon the Schools and Staffing Survey for 1999-2000. The report was issued in 2004 by the Institute of Education Sciences, U.S. Department of Education. NCES's Schools and Staffing Survey (SASS) is a representative sample of U.S. schools, districts, principals, and teachers. The report focusing on teachers' qualifications uses data from the district and teacher portion of SASS. The 1999-2000 SASS included a nationally representative sample of public schools and the universe of all public charter schools with students in any of grades 1 through 12 and in operation in school year 1999-2000. The 1999-2000 SASS administration also included nationally representative samples of teachers in the selected public and public charter schools who taught students in grades kindergarten through 12 in school year 1999-2000. There were 51,811 public school teachers in the sample and 42,086 completed public school teacher interviews. In addition, there were 3,617 public charter school teachers in the sample, with 2,847 completed interviews. The overall weighted teacher response rate was 76.7 percent for public school teachers and 71.8 percent for public charter school teachers (a simple illustration of how a weighted response rate is computed appears at the end of this appendix). NCES has strong standards for carrying out educational surveys. The Office of Management and Budget vetted the questionnaire and sample design. The Census Bureau carried out survey quality control and data editing. One potential limitation is the amount of time it takes the Census Bureau to get the data from field collection to public release, but this is partly due to the thoroughness of the data quality steps followed. SASS meets GAO standards for use as evidence in a report. We interviewed officials from 13 federal agencies with STEM education programs to obtain information about the STEM programs and their views on related topics, including factors that influence students' decisions about pursuing STEM degrees and occupations, and the extent of coordination among the federal agencies. We also interviewed officials from the National Science and Technology Council to discuss coordination efforts. In addition, we interviewed representatives and policy experts from various organizations. These organizations were the American Association for the Advancement of Science, the Commission on Professionals in Science and Technology, the Council of Graduate Schools, NAFSA: Association of International Educators, the National Academies, and the Council on Competitiveness. We also conducted interviews via e-mail with 31 students. We asked officials from the eight universities visited to identify students to complete our e-mail interviews, and students who completed the interviews attended five of the colleges we visited. Of the 31 students: 16 attended Purdue University, 6 attended the University of Southern California, 6 attended Spelman College, 2 attended the University of California at Los Angeles, and 1 attended the Georgia Institute of Technology. In addition, 19 students were undergraduates and 12 were graduate students; 19 students identified themselves as women and 12 students identified themselves as men. Of the 19 undergraduate students, 9 said that they plan to pursue graduate work in a STEM field.
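To illustrate what the weighted response rates cited above represent, the following minimal Python sketch computes one from hypothetical base weights: the weights of responding sampled teachers are summed and divided by the summed weights of all sampled eligible teachers. The weights and the simple ratio shown here are illustrative assumptions; the actual SASS weighting and nonresponse adjustments are NCES's and are more elaborate.

    # Illustrative only: a weighted response rate from hypothetical sampling
    # base weights, mirroring the concept behind the SASS rates cited above.
    sampled_teachers = [
        # (base_weight, responded)
        (52.1, True),
        (48.7, True),
        (61.3, False),
        (55.0, True),
        (47.9, False),
    ]

    responding_weight = sum(w for w, responded in sampled_teachers if responded)
    eligible_weight = sum(w for w, _ in sampled_teachers)

    weighted_response_rate = 100.0 * responding_weight / eligible_weight
    print(f"Weighted response rate: {weighted_response_rate:.1f} percent")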
Based on surveys submitted by officials representing the 13 civilian federal agencies, table 21 contains a list of the 207 science, technology, engineering, and mathematics (STEM) education programs funded in fiscal year 2004. The federal civilian agencies reported that the following science, technology, engineering, and mathematics (STEM) education programs were funded with at least $10 million in either fiscal year 2004 or 2005. However, programs that received $10 million or more in fiscal year 2004 but were unfunded for fiscal year 2005 were excluded from table 22. Agency officials also provided the program descriptions in table 22. Table 23 provides estimates for the numbers of students in science, technology, engineering, and mathematics (STEM) fields by education level for the 1995-1996 and 2003-2004 academic years. Tables 24 and 25 provide additional information regarding students in STEM fields by gender for the 1995-1996 and 2003-2004 academic years. Table 26 provides additional information regarding graduates in STEM fields by gender for the 1994-1995 and 2002-2003 academic years. Appendix V contains confidence intervals for these estimates. Because the National Postsecondary Student Aid Study (NPSAS) sample is a probability sample of students, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, confidence in the precision of the particular sample’s results is expressed as a 95-percent confidence interval (for example, plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. The upper and lower bounds of the 95 percent confidence intervals for each estimate relied on in this report are presented in the following tables. The current population survey (CPS) was used to obtain estimates about employees and wages and salaries in science, technology, engineering, and mathematics (STEM) fields. Because the current population survey (CPS) is a probability sample based on random selections, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, confidence in the precision of the particular sample’s results is expressed as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. We use the CPS general variance methodology to estimate this sampling error and report it as confidence intervals. Percentage estimates we produce from the CPS data have 95 percent confidence intervals of plus or minus 6 percentage points or less. Estimates other than percentages have 95 percent confidence intervals of no more than plus or minus 10 percent of the estimate itself, unless otherwise noted. Consistent with the CPS documentation guidelines, we do not produce estimates based on the March supplement data for populations of less than 75,000. In addition to the contact named above, Carolyn M. 
Taylor, Assistant Director; Tim Hall, Analyst in Charge; Mark Ward; Dorian Herring; Patricia Bundy; Paula Bonin; Scott Heacock; Wilfred Holloway; Lise Levie; John Mingus; Mark Ramage; James Rebbe; and Monica Wolford made key contributions to this report.
Congressional Research Service, Foreign Students in the United States: Policies and Legislation, RL31146, January 24, 2003, Washington, D.C.
Congressional Research Service, Immigration: Legislative Issues on Nonimmigrant Professional Specialty (H-1B) Workers, RL30498, May 5, 2005, Washington, D.C.
Congressional Research Service, Monitoring Foreign Students in the United States: The Student and Exchange Visitor Information System (SEVIS), RL32188, October 20, 2004, Washington, D.C.
Congressional Research Service, Science, Engineering, and Mathematics Education: Status and Issues, 98-871 STM, April 27, 2004, Washington, D.C.
Council on Competitiveness, Innovate America, December 2004, Washington, D.C.
Council of Graduate Schools, NDEA 21: A Renewed Commitment to Graduate Education, June 2005, Washington, D.C.
Institute of International Education, Open Doors: Report on International Educational Exchange, 2004, New York.
Jackson, Shirley Ann, The Quiet Crisis: Falling Short in Producing American Scientific and Technical Talent, Building Engineering & Science Talent, September 2002, San Diego, California.
NAFSA: Association of International Educators, In America's Interest: Welcoming International Students, Report of the Strategic Task Force on International Student Access, January 14, 2003, Washington, D.C.
NAFSA: Association of International Educators, Toward an International Education Policy for the United States: International Education in an Age of Globalism and Terrorism, May 2003, Washington, D.C.
National Center for Education Statistics, Qualifications of the Public School Teacher Workforce: Prevalence of Out-of-Field Teaching, 1987-88 to 1999-2000, May 2002, revised August 2004, Washington, D.C.
National Science Foundation, The Science and Engineering Workforce: Realizing America's Potential, National Science Board, August 14, 2003, Arlington, Virginia.
National Science Foundation, Science and Engineering Indicators 2004, Volume 1, National Science Board, January 15, 2004, Arlington, Virginia.
Report of the Congressional Commission on the Advancement of Women and Minorities in Science, Engineering and Technology Development, Land of Plenty: Diversity as America's Competitive Edge in Science, Engineering, and Technology, September 2000.
A Report to the Nation from the National Commission on Mathematics and Science Teaching for the 21st Century, Before It's Too Late, September 27, 2000.
Seymour, Elaine, and Nancy M. Hewitt, Talking about Leaving: Why Undergraduates Leave the Sciences, Westview Press, 1997, Boulder, Colorado.
The National Academies, Policy Implications of International Graduate Students and Postdoctoral Scholars in the United States, 2005, Washington, D.C.
U.S. Department of Education, National Center for Education Statistics, Institute of Education Sciences, The Nation's Report Card, NAEP 2004: Trends in Academic Progress, July 2005, Washington, D.C.
U.S. Department of Education, The Secretary's Third Annual Report on Teacher Quality, Office of Postsecondary Education, 2004, Washington, D.C.
U.S. Department of Homeland Security, 2003 Yearbook of Immigration Statistics, Office of Immigration Statistics, September 2004, Washington, D.C.
The United States has long been known as a world leader in scientific and technological innovation. To help maintain this advantage, the federal government has spent billions of dollars on education programs in the science, technology, engineering, and mathematics (STEM) fields for many years. However, concerns have been raised about the nation's ability to maintain its global technological competitive advantage in the future. This report presents information on (1) the number of federal programs funded in fiscal year 2004 that were designed to increase the number of students and graduates pursuing STEM degrees and occupations or improve educational programs in STEM fields, and what agencies report about their effectiveness; (2) how the numbers, percentages, and characteristics of students, graduates, and employees in STEM fields have changed over the years; and (3) factors cited by educators and others as affecting students' decisions about pursuing STEM degrees and occupations, and suggestions that have been made to encourage more participation. GAO received written and/or technical comments from several agencies. While one agency, the National Science Foundation, raised several questions about the findings, the others generally agreed with the findings and conclusion, and several agencies commended GAO for this work. Officials from 13 federal civilian agencies reported spending about $2.8 billion in fiscal year 2004 for 207 education programs designed to increase the numbers of students and graduates or improve educational programs in STEM fields, but agencies reported little about their effectiveness. The National Institutes of Health and the National Science Foundation had most of the programs and spent most of the funds. Officials also reported that evaluations were completed or under way for about half of the programs. While the total numbers of students, graduates, and employees in STEM fields increased, changes in the numbers and percentages of women, minorities, and international students varied during the periods reviewed. From academic year 1995-1996 to 2003-2004, the percentage of students in STEM fields increased from 21 to 23 percent. Changes in the percentages of domestic minority students varied by group. From academic year 1994-1995 to 2002-2003, the number of graduates in STEM fields increased 8 percent, but this was less than the 30 percent increase in graduates in non-STEM fields. International graduates continued to earn about one-third or more of the advanced degrees in three STEM fields. Between calendar years 1994 and 2003, employment in STEM fields increased 23 percent compared to 17 percent in non-STEM fields, and there was no statistically significant change in the percentage of women employees. Educators and others cited several factors that affected students' decisions about pursuing STEM degrees and occupations, and made suggestions to encourage more participation. They said that teacher quality in kindergarten through 12th grade, the mathematics and science courses completed in high school, and the availability of a mentor, especially for women and minorities, influenced domestic students' decisions. Also, these sources said that opportunities outside the United States and the visa process affected international students' decisions. To encourage more participation in STEM fields, educators and others made several suggestions.
But before adopting any of them, it is important to know the extent to which existing STEM education programs are appropriately targeted and making the best use of available federal resources.
WMATA was created in 1967 by an interstate compact that resulted from the enactment of identical legislation by Virginia, Maryland, and the District of Columbia, with the concurrence of the U.S. Congress. WMATA began building its Metrorail system in 1969, acquired four regional bus systems in 1973, and began the first phase of Metrorail operations in 1976. In January 2001, WMATA completed the originally planned 103-mile Metrorail system, which included 83 rail stations on five rail lines. The transit system encompasses (1) the Metrorail subway system, which now has 86 Metrorail stations on five rail lines and a fleet of about 946 rail cars; (2) the Metrobus system, which has a fleet of about 1,447 buses serving 350 routes; and (3) the MetroAccess ADA complementary paratransit system, which provides specialized transportation services, as required by law, to persons with disabilities who are certified as being unable to access WMATA's fixed-route transit system. Congress and the executive branch have supported considerable federal funding for WMATA since its inception in the 1960s, citing several reasons including (1) the federal government's large presence in the area, (2) the attraction of the nation's capital for tourists, (3) the overlapping needs of adjacent jurisdictions, and (4) the limitations faced in raising other revenue for transit needs. This federal funding has taken several forms over the years. First, WMATA relied on federal funding to pay for nearly 70 percent of the costs to build its Metrorail subway system. From 1969 through 1999, the federal government provided about $6.9 billion of the approximately $10 billion that WMATA spent to construct the original 103-mile system, according to WMATA officials. Second, WMATA has also relied on federal funding to cover more than 40 percent of its capital improvement costs during the last 10 fiscal years. Of about $3.5 billion that WMATA received from all sources for capital improvements during fiscal years 1995 through 2005 (as of February 2005), about $1.5 billion, or about 43 percent, came from the federal government, with the remaining $2 billion, or about 57 percent, coming from the state and local jurisdictions that WMATA serves and from other sources. Most of this federal funding has come through grants administered by FTA. Finally, WMATA received about $49.9 million for congressionally designated projects, including a new Metrorail station at New York Avenue in the District of Columbia, during fiscal years 1995 through 2005. WMATA operates in a complex environment, with many organizations influencing its decision-making and funding and providing oversight. WMATA is governed by a board of directors—composed of individuals appointed by each of the local jurisdictions WMATA serves—which sets policies and oversees all of WMATA's activities, including budgeting, operations, development, expansion, safety, procurement, and other activities.
In addition, a number of local, regional, and federal organizations affect WMATA’s decision-making, including (1) state and local governments, which subject WMATA to a range of laws and requirements; (2) the National Capital Region Transportation Planning Board of the Metropolitan Washington Council of Governments, which develops the short- and long-range plans and programs that guide WMATA’s capital investments; (3) FTA, which provides oversight of WMATA’s compliance with federal requirements; (4) the National Transportation Safety Board, which investigates accidents on transit systems as well as other transportation modes; and (5) the Tri-State Oversight Committee, which oversees WMATA’s safety activities and conducts safety reviews. WMATA’s combined rail and bus ridership totaled about 343.8 million passenger trips in fiscal year 2005. WMATA operates the second largest heavy rail transit system and the fifth largest bus system in the United States, based on passenger trips, according to WMATA. WMATA’s fiscal year 2005 budget is $1.29 billion. Of the total amount, about 76 percent, or $977.9 million, is for operations, including maintenance activities, and the remaining 24 percent, or $314.1 million, is for capital improvements. WMATA obtains its funding from a variety of sources, including the federal, state (Virginia and Maryland), District of Columbia, and local governments; passenger fares; and other sources. In general, WMATA relies on passenger fares and subsidies from its member jurisdictions to cover the majority of its operating costs. Its capital funds are obtained from other sources, including the federal government and the state and local jurisdictions that it serves. Of all WMATA’s funding, less than 2 percent is from a dedicated source. As the major transit agency in the national capital area, WMATA provides transportation to and from work for a substantial portion of the federal workforce and is also integral to the smooth transportation of visitors to the nation’s capital. WMATA also assists federal law enforcement agencies by providing security for high-profile events and other security-related expertise and services. Furthermore, the emergency transportation plans of the District of Columbia and the Washington region both rely heavily on Metrorail and Metrobus for transportation in an emergency scenario requiring evacuation. According to estimates prepared by WMATA, a substantial share of Metrorail’s riders, particularly at peak commuting periods, are federal employees. Using data from its 2002 passenger survey (the most recent data available), WMATA estimates that approximately 35 percent of all Metrorail riders were federal employees in 2002. WMATA’s estimates are higher for peak period times, when the system faces capacity constraints: according to the survey, approximately 41 percent of the morning peak period riders and approximately 37 percent of the afternoon peak period riders are federal employees. The federal employees who ride Metrorail to and from work each day represent a substantial share of federal employees in the Washington, D.C., region. Using an estimate based on its 2002 passenger survey data on the number of federal employees who are Metrorail passengers, together with data from OPM on the number of civilian federal employees in the Washington, D.C., region, WMATA estimated that in 2002, approximately 40 percent of federal employees used Metrorail. 
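The roughly 40 percent figure above is a ratio of two quantities described in the text: a survey-based estimate of the number of federal employees riding Metrorail and OPM's count of civilian federal employees in the region. The sketch below illustrates that calculation with hypothetical placeholder numbers, since the underlying 2002 survey counts are not reproduced here.

    # Illustration of the ratio described above, using hypothetical numbers.
    # WMATA's actual 2002 survey counts and OPM's employee counts are not shown.
    federal_employee_metrorail_riders = 180_000     # hypothetical survey-based estimate
    civilian_federal_employees_in_region = 450_000  # hypothetical OPM count

    share = federal_employee_metrorail_riders / civilian_federal_employees_in_region
    print(f"Estimated share of federal employees using Metrorail: {share:.0%}")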
WMATA’s operating status is an important factor in OPM’s decisions about the day-to-day operations of the federal government. OPM officials told us that WMATA is a key stakeholder in OPM’s decision to have an early dismissal, late arrival, or closure of the federal government, since a substantial portion of the federal workforce rides WMATA’s transit system to and from work. Those officials said that they are aware of WMATA’s operating constraints and take them into account when deciding to close the federal government. However, the officials told us that OPM makes the final decision and uses the safety of employees as the sole factor in its decision. OPM officials further noted that the functioning of the federal government is not dependent on WMATA’s operating status and that employees have other options, such as flexible work schedules and teleworking, available should they not be able to get to their usual workplace. Executive Order 12072, issued on August 16, 1978, instructs federal agencies to consider such factors as the availability of public transportation and parking as well as accessibility to the public when evaluating and selecting federal facilities. The General Services Administration (GSA)—which has overall responsibility for reviewing and approving the acquisition of federal facilities—created a Site Selection Guide for federal agencies that implements the provisions of this executive order, as well as other public laws and executive orders. Within the National Capital Region, the National Capital Planning Commission also has review and approval authority over federal building construction, renovations, and transportation plans in the District of Columbia, and it has review authority only over federal sites in the Virginia and Maryland areas of the region. Both GSA and the commission instruct federal agencies to locate their facilities near mass transit stops whenever possible. The Federal Employees Clean Air Incentives Act of 1993 also encourages the federal use of mass transit, with specific provisions for the National Capital Region. The purpose of this act was to authorize agencies to create programs for federal employees to encourage their use of alternatives to single-occupancy vehicles for commuting. Under the act, the heads of agencies were authorized to establish programs for agency employees that would provide, for example, transit passes, space for bicycles, and nonmonetary incentives. WMATA’s services are integral to the smooth operation of the myriad of special activities that occur in Washington, D.C., as the nation’s capital and its “seat of government.” According to a visitor transportation survey administered for the National Park Service, 61 percent of visitors used Metrorail during their visit to Washington, D.C. In several instances, ridership has been highest on days when events (1) were sponsored by the federal government, such as the first and second inaugurations of President George W. Bush and the grand opening of the National Museum of the American Indian or (2) occurred in Washington because it is the seat of government, such as political rallies. On June 6, 2004, the date of former President Ronald Reagan’s state funeral ceremony, WMATA marked its highest ridership day ever, with more than 850,000 riders. The federal government also relies on WMATA to provide transportation services outside its normal hours and routes. 
Some examples follow: In May 2004, WMATA, along with other regional transit agencies, provided buses to shuttle attendees from Metrorail stations to the World War II dedication ceremony on the National Mall. Metrobuses ran overnight between RFK Stadium and the U.S. Capitol for 2 nights in June 2004 to enable people to pay respects to former President Ronald Reagan. On Inauguration Day, in January 2005, WMATA opened Metro 2 hours early and closed it 3 hours later than normal, at the request of the Presidential Inaugural Committee. WMATA’s Metro Transit Police supports the U.S. Secret Service by making available its officers who have expertise in areas such as explosives detection and civil disturbance management to help ensure a safe and secure environment before and during events involving the President, the Vice President, or high-level foreign dignitaries. For example, when events are held in venues located above Metrorail stations, Metro Transit Police’s explosive ordnance detection team inspects the stations to ensure they are free from explosives. The Metro Transit Police deployed its civil disturbance team at the 2005 presidential inaugural parade at the request of the Secret Service, which had received specific intelligence that protestors might attempt to breach the parade route. The Metro Transit Police received $299,371 in Department of Homeland Security (DHS) Urban Area Security Initiative (UASI) grants for overtime associated with providing security for the 2005 presidential inauguration. In commenting on the importance of the Metro Transit Police’s security expertise, Secret Service officials told us that they consider the Metro Transit Police to be a full law enforcement partner, along with the District of Columbia’s Metropolitan Police Department, the U.S. Capitol Police, and the U.S. Park Police. The Metro Transit Police also provides enhanced security throughout the Metrorail and Metrobus system when DHS raises the threat level, which is communicated through the Homeland Security Advisory System. Since DHS implemented the color-coded system in March 2002, the Metro Transit Police has spent about $2.7 million on overtime related to increased threat levels, for such activities as increasing patrols of Metrorail stations, trains, and buses. WMATA received $632,356 through a DHS UASI grant for overtime costs in 2004; this grant was WMATA’s first reimbursement for costs associated with increased threat levels, according to a Metro Transit Police official. WMATA also supports federal law enforcement efforts by providing Metrobuses to the U.S. Capitol Police to establish security perimeters, block intersections, and reroute traffic for events that take place on the grounds of the U.S. Capitol, such as presidential inaugurations and State of the Union addresses, and at other locations where presidential and vice presidential events occur. The Secret Service also uses Metrobuses periodically to establish temporary security perimeters; for example, it did so along the 2005 presidential inauguration parade route. The law enforcement agencies that use Metrobuses are charged the same standard charter rate that WMATA charges all parties to rent its Metrobuses for special events. WMATA supports homeland security efforts for the Washington region and the federal government through a variety of efforts. 
It provides training for local and federal first responders at its tunnel training facility and has deployed early-warning systems to detect chemical and radioactive contamination in some of its underground Metrorail stations. WMATA's infrastructure is key to emergency evacuation of the region, including the evacuation of workers in federal buildings concentrated in downtown Washington, D.C. WMATA's emergency response training facility in Landover, Maryland, provides a realistic setting for fire, police, emergency, and transit personnel to learn how to respond to events such as collisions, fires, and weapons of mass destruction incidents that occur in a transit or tunnel environment. The facility includes a 260-foot tunnel that houses two subway cars positioned to resemble a wreck, as well as simulated electrified third rail, cabling, and lighting that appear identical to those in a real tunnel. Emergency personnel from across the region train at the center. The training center's federal clients include the Federal Bureau of Investigation's Hostage Rescue Team, the Federal Protective Service, and the U.S. Marine Corps' Chemical Biological Incident Response Force. Additionally, according to WMATA officials, FTA's Transportation Safety Institute plans to use the Emergency Response Training Facility as a host site for the counterterrorism training it plans to provide to transit agencies' law enforcement and safety personnel. WMATA funds this training facility entirely out of its regular operations budget. WMATA is also introducing a training course on managing Metrorail emergencies, which will address emergency management concepts, techniques to respond to weapons of mass destruction attacks, and emergency traffic control. The course, which WMATA is funding with a $335,261 DHS UASI grant, will be available to first responders from the region, transit agencies nationwide, and FTA. Metrorail is equipped with a permanent chemical detection system to help detect hazardous substances in selected stations. This system, known as the Program for Response Options and Technology (PROTECT), acts as an early warning to safeguard first responders, employees, and Metrorail customers and is installed in selected locations in underground Metrorail stations. WMATA had assistance from the U.S. Departments of Transportation, Energy, and Justice in developing the sensor system. It received $15 million in federally appropriated funds in fiscal year 2002 and $1.4 million in additional funds in fiscal year 2004 through a direct grant from DHS's Office of Domestic Preparedness to pay for the installation of the sensors. Additionally, Metro Transit Police has distributed pager-sized devices to about 100 officers to wear in the Metrorail system to detect radiation. According to the Metro Transit Police, these pagers are worn mostly by officers in the downtown core because this area is considered to be at higher risk for attack. WMATA paid for about half of the radiological pagers, and the Department of Energy furnished the remainder. These early warning devices are important to the area's first responders because if a high reading of a chemical or radioactive substance is detected, it is considered a potential hazardous materials or "hazmat" incident. In such an event, the portion of the Metrorail system involved could be temporarily closed, affecting traffic in the area, and local emergency management agencies would be notified and become responsible for coordinating any additional response.
The local emergency response officials we interviewed generally prefer using Metrorail and Metrobus in an emergency scenario that requires evacuation because mass transit can move large numbers of people efficiently and help keep roadways clear for first responders and other emergency vehicles. To assist in coordinating evacuation planning across jurisdictions, the region’s metropolitan planning organization, the Metropolitan Washington Council of Governments, has developed guidance on emergency evacuation that includes the use of Metrorail and regular Metrobus routes as well as Metrobuses on special evacuation routes. The District of Columbia’s emergency evacuation plans also rely heavily on WMATA. Additionally, because the federal presence in the District is so large, the District Department of Transportation consulted with federal agencies in developing its emergency transportation plans. Over the years, WMATA has faced funding challenges, and options have been proposed to address them. Although WMATA has taken steps to improve its management, such as prioritizing its planned capital improvements, it lacks a dedicated funding source and must rely on variable, sometimes insufficient contributions from local, regional, and federal organizations to pay for its planned capital improvements. A report published by a regional funding panel estimated that, over the next 10 years, under its current revenue structure, WMATA will face a $2.4 billion budget shortfall, due largely to expenditures planned for capital improvement projects—an estimate that may not fully reflect the magnitude of the anticipated budget shortfall. Proposed options would provide a dedicated funding source, such as a local sales tax, and would increase federal funding for capital improvements. WMATA and others have projected continuing shortfalls in its capital and, to some extent, its operating budgets. For example, in 2001, we reported that WMATA faced uncertainties in obtaining funding for planned capital spending for two of its capital programs, discussed below, the Infrastructure Renewal Program (IRP) and the System Access and Capacity Program (SAP). At that time, WMATA anticipated a shortfall of $3.7 billion in the funding for these programs over the 25-year period from fiscal year 2001 through fiscal year 2025. Since that time, in response to recommendations that we and others made, WMATA created a strategic plan, which it issued in October 2002. In November 2002, it documented and prioritized its planned capital projects in a 10-year capital improvement plan that called for spending $12.2 billion over the period from fiscal year 2004 through fiscal year 2013. Then, in September 2003, WMATA launched a campaign called “Metro Matters” to obtain $1.5 billion in capital funding over a 6-year period to avert what WMATA believed was a crisis in its ability to sustain service levels and system reliability and to meet future demands for service. In response, WMATA and its member jurisdictions approved a $3.3 billion funding plan for fiscal years 2005 through 2010 to help pay for WMATA’s most pressing short-term capital investment priorities. 
As concerns about WMATA's anticipated funding shortfall grew, a regional funding panel known as the Metro Funding Panel—cosponsored by the Metropolitan Washington Council of Governments, the Greater Washington Board of Trade, and the Federal City Council—was convened in September 2004 to study the magnitude of the shortfall, identify sources of funding, and evaluate options for generating additional revenues to address that shortfall. The panel estimated that under its current revenue structure, WMATA would have a total funding shortfall of about $2.4 billion for fiscal years 2006 through 2015 for maintaining and upgrading its existing system, assuming that Metro Matters was fully funded. As shown in table 1, the panel attributed nearly 80 percent of the total estimated shortfall of $2.4 billion to WMATA's capital activities (IRP and SAP) and the remainder to operations activities associated with future capital projects as they are completed. Funding for the following projects and activities is included in the shortfall estimate:
IRP projects: The IRP projects occur in fiscal years 2011 through 2013, after the Metro Matters funding agreement expires. These projects, which provide ongoing maintenance and renewal of the Metrorail and Metrobus systems, include replacing and rehabilitating buses and rail cars, rehabilitating escalators and elevators, rehabilitating Metrorail stations and parking lots, renovating rail car and bus maintenance facilities, and rehabilitating electrical systems, among other things.
SAP projects: These projects, which are intended to increase the capacity of the current Metrorail and Metrobus systems to handle increased passenger levels, include the purchase of 130 new rail cars and 275 new buses; a variety of improvements to four maintenance facilities, two storage facilities, two new bus garages, and one replacement bus garage; enhancements at Metro Center, Union Station, and Gallery Place Metrorail stations; the construction of pedestrian connections between two pairs of Metrorail stations (between Farragut North and Farragut West and between Metro Center and Gallery Place); and 140 miles of bus corridor improvements, such as signal priority for buses, route delineation techniques using pavement materials and painted markings, and passenger waiting area enhancements.
Operating activities: Finally, the panel included a relatively small portion of WMATA's operating budget in the shortfall estimate. This portion consists of some additional operating costs associated with some of the capital projects. According to WMATA, these are mostly preventive maintenance projects, such as bus engine overhauls, bus tire replacements, bus parts, rail parts, and labor costs.
Appropriately, the panel's budgetary shortfall estimate did not include the portion of WMATA's capital improvement plan that involves expanding the system—by adding new rail lines, for example. The projects in this portion of the plan, known as the System Expansion Program, are estimated to cost roughly $6 billion. WMATA officials told us that these projects would be paid for by the local jurisdictions and businesses where they would be built, as well as by federal grants for new transit expansion. In preparing its estimate of WMATA's budgetary shortfall, the panel did not evaluate the need for, or priority of, individual projects in SAP and IRP. Likewise, we did not independently assess the suitability of including these projects, as a whole or individually, in the shortfall estimate.
However, when WMATA developed its 10-year capital improvement plan in 2002, the projects were approved by its board of directors, which includes representatives from all of WMATA’s member jurisdictions. In addition, the IRP projects and some of the projects in SAP have been incorporated into the region’s Constrained Long-Range Plan for transportation improvements over the next 20 years by the Transportation Planning Board of the Metropolitan Washington Council of Governments. In estimating WMATA’s budgetary shortfall, the panel did not include a major cost category and, thus, may have significantly underestimated the shortfall. The panel did not include the costs of providing paratransit services as required under ADA. Compliance with the act’s requirements may result in significant costs over the next 10 years. The panel recognized that including these costs, which are included in WMATA’s operating budget, would result in a greater budgetary shortfall. In fact, the panel estimated the shortfall from MetroAccess, WMATA’s paratransit system, at about $1.1 billion over the 10-year period from 2006 through 2015, thus raising the total anticipated shortfall to $3.5 billion for that period. However, the panel stated that funding for these services should be provided through a creative packaging of social service, medical, and other nontransportation resources in the region, rather than by WMATA. We believe that any estimate of WMATA’s funding shortfall should include the costs associated with MetroAccess because WMATA is required by ADA to provide paratransit services. In our 2001 report and testimony, we noted that WMATA’s funding comes from a variety of federal, state, and local sources, but that unlike most other major transit systems, WMATA does not have a dedicated source of nonfarebox revenue, such as a local sales tax, whose receipts are automatically directed to the transit authority. As far back as April 1979, we reported on concerns about the lack of a revenue source dedicated to pay the costs of mass transportation for the Washington region. Concerns about WMATA’s lack of dedicated revenues surfaced again in reports issued by the Brookings Institution in June 2004 and by the Metro Funding Panel in January 2005. According to the Brookings report, WMATA’s lack of dedicated revenues makes WMATA’s core funding uniquely vulnerable and at risk as WMATA’s member jurisdictions struggle with their own fiscal difficulties. The Brookings report and the Metro Funding panel report both state that the Washington region needs to develop a dedicated source of revenue, and they evaluate the advantages and disadvantages of a menu of revenue options that could support the dedicated revenue source—specifically, gasoline taxes, sales taxes, congestion charges, parking taxes, land-value capture, and payroll taxes. Observing that WMATA has provided numerous benefits both to the Washington region and the federal government over the years, the Metro Funding Panel also concluded that WMATA will require a commitment of new revenue sources to sustain those benefits. Accordingly, the panel recommended, among other things, that (1) WMATA’s compact jurisdictions of Virginia, Maryland, and the District of Columbia mutually create and implement a single regional dedicated revenue source to address WMATA’s budgetary shortfalls and (2) the federal government participate “significantly” in addressing WMATA’s budgetary shortfalls, particularly for capital maintenance and system enhancement. 
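To make the arithmetic behind the shortfall figures discussed above easy to follow, the short Python sketch below totals the components of the panel’s estimate. It is illustrative only: the capital/operating split is an approximation of the panel’s “nearly 80 percent” figure rather than the panel’s published breakdown.

```python
# Illustrative only: totals the Metro Funding Panel's FY 2006-2015 shortfall
# components (in billions of dollars) as described in the text. The
# capital/operating split approximates the panel's "nearly 80 percent" figure.
panel_components = {
    "capital activities (IRP and SAP)": 1.9,
    "operating costs tied to capital projects": 0.5,
}
metroaccess_paratransit = 1.1  # ADA paratransit costs the panel excluded

base_shortfall = sum(panel_components.values())
total_with_paratransit = base_shortfall + metroaccess_paratransit

print(f"Panel's estimated shortfall:  ${base_shortfall:.1f} billion")          # about $2.4 billion
print(f"Including MetroAccess (ADA):  ${total_with_paratransit:.1f} billion")  # about $3.5 billion
```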
In the current situation of large budget deficits, any additional federal funding for WMATA would need to be considered along with the many other competing claims for federal resources. To the extent that the federal government cannot provide significant additional support to WMATA, and WMATA’s current revenue structure continues to be insufficient to support its planned capital projects, WMATA may need to reassess its capital improvement plan to determine which projects could be undertaken within a more constrained funding level. WMATA also may need to consider how it will meet its obligations under ADA. WMATA is subject to oversight from multiple entities that have issued numerous reports on the agency since 2003. The scope of the reports varies and includes compliance reviews of specific statutory requirements, monthly assessments of major construction projects, and reviews of WMATA’s overall bus and rail operations. Specifically, WMATA’s Office of Auditor General has issued nearly 500 reports, including internal and investigative audits and reviews of contracts and pricing proposals. In addition, an independent external auditor, which reports to WMATA’s board of directors, annually reviews WMATA’s financial statements and related internal controls. FTA oversees WMATA’s major capital projects through its project management oversight program and assesses its compliance with a wide range of requirements through its Triennial Review process. In 2005, at WMATA’s request, transit industry panels conducted peer reviews of WMATA’s bus and rail operations. Details on these entities and the types of oversight they provide are presented in table 2. All of these entities included recommendations in their reports, and, in general, WMATA implemented them or has plans to implement them. As part of our ongoing work, we plan to analyze these reviews in greater detail, together with other specialized FTA reviews and safety reviews conducted by external and internal entities. WMATA’s Auditor General is responsible for planning and implementing operational, financial, and information system audits, as well as for carrying out investigations to prevent or detect mismanagement, waste, fraud, or abuse. The Office of Auditor General also conducts audits of contracts to ensure they are being done in accordance with WMATA policy and cost-effectively. The Auditor General reports directly to the General Manager/Chief Executive Officer and briefs the audit committee of the board of directors quarterly. The Auditor General prepares an annual audit plan that covers most aspects of the agency. When deficiencies in a program are found, the Office of Auditor General makes recommendations for corrective actions to be taken and follows up on the implementation status of recommendations with the executive manager responsible for the program or office to which the recommendations were directed. If the recommendations are not implemented in a timely fashion, the Chief Executive’s office may intervene to ensure that appropriate corrective action is taken. For the most part, WMATA management implements these recommendations. The following are examples of audit reports issued by the Office of Auditor General in recent years: Contract/Procurement Oversight. Since January 2004, the Office of Auditor General has issued five internal audit reports on contracting processes and the documentation of contracting activities. 
Recommendations were made to improve the documentation process, improve the administration of the cost-estimating process, and develop procedures to document the cost-estimating process. Information Technology (IT) Renewal Program. The IT Renewal Program is a multiyear, multimillion-dollar initiative to renew WMATA’s IT systems for the next generation of service. The Office of Auditor General has issued six reports during the past 3 years on the implementation of this program, with suggestions for improving communication and ensuring that appropriate security measures are in place. Audit of Cell Phone Usage. This review of employee cell phone plans and usage made recommendations for more efficient and effective cell phone use, which resulted in potential savings of approximately $300,000 per year. Additional recommendations were made to improve the administration of the cell phone program. WMATA is subject to federal financial reporting requirements under the Single Audit Act as amended. Under this act, nonfederal entities that expend more than specified amounts of federal awards (currently $500,000) are subject to either a single audit or a program-specific audit, which must be performed by an independent external auditor in accordance with generally accepted government auditing standards. The purpose of the Single Audit Act was to streamline and improve the effectiveness of audits of federal awards and to reduce the audit burden on states, local governments, and nonprofit entities receiving federal awards by replacing multiple grant audits with one audit of a recipient as a whole (or, for entities receiving federal awards under one program, an optional audit of that program only). In conducting WMATA’s annual audits under the act’s requirements, an independent auditor is required to (1) provide an opinion on WMATA’s financial statements and the Schedule of Expenditures of Federal Awards, (2) report on WMATA’s internal controls related to the financial statements and major programs, and (3) report on WMATA’s compliance with laws and regulations that could have a material effect on WMATA’s financial statements and major federal programs. For fiscal years 2003 and 2004, WMATA’s independent external auditor found no reportable conditions or material weaknesses in WMATA’s internal controls over financial reporting and the major programs receiving federal assistance. The independent auditor’s reviews of WMATA’s financial statements and internal controls did, however, note several areas of noncompliance related to requirements for grants for both years. When such areas of noncompliance are found, the auditor recommends steps for WMATA to take to correct the noncompliance. WMATA generally concurred with the auditor’s recommendations and agreed to implement them. The following are examples of noncompliance and recommendations for corrective action found at WMATA during fiscal years 2003 and 2004: Property records for equipment purchased with a federal grant did not include serial numbers or prices for the equipment—as required by federal law. The auditor recommended that WMATA revise the records to include the required information, and WMATA agreed to do so. WMATA did not correctly submit federal grant expenditure status reports. The auditor recommended that WMATA revise and resubmit its financial status reports to include total expenditures, which WMATA agreed to do. 
FTA oversees the progress of WMATA’s major capital projects through the project management oversight (PMO) program, which we discuss in greater detail later in this statement. To receive financial assistance, FTA’s grantees must develop and implement a project management plan that addresses each project’s scheduling, budget, performance, and other issues. FTA retains engineering firms to review and recommend approval of the plans, monitor the progress of each project against its plan, and issue monthly monitoring reports. The purpose of the monthly PMO monitoring reports is to determine whether the projects are proceeding in accordance with the terms of the federal grant agreements, including whether they are meeting standard project management requirements, such as having a project management plan and a quality assurance plan, meeting schedule milestones, and being on budget. WMATA’s major capital projects that are subject to PMO review collectively represent a substantial portion of WMATA’s capital budget. We reviewed PMO reports that were issued from January 2003 through May 2005. During that time, WMATA had seven capital infrastructure projects that were subject to the requirements of the PMO program, including IRP, which, as discussed earlier, provides ongoing maintenance and renewal of the Metrorail and Metrobus systems; the rail car procurement program; and the construction of the New York Avenue Metrorail station. The total cost of the projects under review was about $5 billion, according to data provided by WMATA. The monthly PMO monitoring reports that we reviewed identified concerns and recommended corrective actions for each of WMATA’s major projects under review. The concerns most commonly cited in the reports were related to schedules, project management plans, and quality assurance activities. Details on these concerns—which WMATA has taken steps to address—follow:

Schedules. The reports cited concerns pertaining to schedules for some of the contracts within three of WMATA’s projects. For the New York Avenue Metrorail station and the Largo Metrorail extension, the reports stated that individual components of the projects were behind schedule; however, the two projects—as a whole—were both completed ahead of schedule. The PMO reports also found that components of the rail car procurement program, including the rehabilitation of the 2000/3000 Series rail cars and the delivery of new 5000 Series rail cars, were behind schedule.

Project management plans. The reports stated that WMATA needed to submit or update project management plans for three of its projects—the rail car procurement program, Metro Matters, and the Infrastructure Renewal Program.

Quality assurance activities. The reports stated that procedures related to quality assurance required updating for three projects: Dulles Corridor rapid transit, the Largo Metrorail extension, and the Branch Avenue storage and maintenance yard. Some examples of quality assurance activities include having (1) written procedures that describe how to conduct reviews of contractors’ quality programs and (2) quality control coordination meetings with contractors.

At least every 3 years, FTA is required to review and evaluate transit agencies receiving funds under its Urbanized Area Formula Grant program. The reviews focus on compliance with statutory and administrative requirements in 23 areas, and if grantees are found not to be in compliance, their funding can be reduced or eliminated.
In 2002, FTA found that WMATA was deficient in the following three areas: Technical. Grantees must implement the Urbanized Area Formula Grant Program of Projects in accordance with the grant application master agreement. WMATA had not been updating the milestones in its Milestone Progress Reports, nor had WMATA been reporting all required information for its Job Access and Reverse Commute grants. Buy America. Certain products used in FTA-funded projects must be produced in the United States. WMATA’s procurement files for buses and rail cars did not include required certifications indicating that these procurements complied with Buy America requirements. Half-fare. Grantees must offer reduced fares to elderly or disabled riders or to those who present a Medicare card. WMATA’s system maps specified the base fare but did not indicate that a half-fare was available. FTA made recommendations for addressing the specific areas of noncompliance; WMATA implemented the recommendations, and the findings were closed in 2004. The American Public Transportation Association (APTA) offers peer reviews as a service to transit agencies to help enhance the efficiency and effectiveness of their operations. At the request of transit agencies, the association convenes panels of experts from within the transit industry, who travel to the transit agency under review to physically tour the operations, meet with staff and senior management, and review documentation in order to develop findings and recommendations on the transit agency’s operations. Following the site visit, the peer review panel issues a written report to the transit agency under review. At WMATA’s own request, APTA conducted peer reviews on WMATA’s bus and rail operations earlier this year, and WMATA is currently considering its response to the recommendations made in the peer review reports. The peer review panels developed recommendations to improve the effectiveness and efficiency of bus and rail operations in multiple areas, including staffing, organization, maintenance and technology. For example: Findings and recommendations in the rail peer review report focused on the selection, training, and certification of employees, with recommendations on improving training for track and train employees and implementing a new reporting structure for the training department; operations, with recommendations on increasing reliance on line supervisors in dealing with in-service problems and restructuring the current organization to create distinct line ownership functions and responsibilities; and track maintenance, with recommendations on recertifying track walkers annually and increasing the number of track walkers to reduce the daily inspection distance to industry standards. Findings and recommendations of the bus peer review report focused on operations and service, with recommendations for increased street supervision and re-evaluation of bus route service; facility maintenance, with recommendations on consolidating bus shop maintenance and improving follow-up procedures for bus defects; staffing and training, with recommendations on eliminating high vacancy rates and improving training; and safety, with recommendations on adhering to basic safety programs and enforcing personal protective equipment policies. 
As part of our ongoing work, we plan to analyze these reviews in greater detail to determine whether, taken as a whole, they point to any systemic problems and are sufficiently comprehensive to identify and address overall management and operational challenges. We will also broaden the scope of our analysis to include additional oversight reviews; specifically, we plan to analyze FTA’s in-depth reviews of program or system compliance. These include, for example, financial management oversight reviews, which assess grantees’ financial management systems and internal controls; procurement system reviews, which evaluate grantees’ compliance with federal procurement requirements; and drug and alcohol oversight reviews, which assess grantees’ compliance with FTA’s regulations on substance abuse management programs and drug and alcohol testing for transit employees. We also plan to review safety audits of WMATA that were conducted by internal and external entities, including the following: WMATA’s Office of System Safety and Risk Protection. This office, which reports to the Department of Audit and Safety Oversight, performs internal safety reviews of WMATA’s operations. Tri-State Oversight Committee. This committee, which is the designated state safety oversight agency for WMATA, requires WMATA to develop and implement system safety and security program plans, report accidents and unacceptable hazard conditions, and conduct safety reviews. The committee meets with WMATA quarterly to discuss safety issues and has the authority to mandate corrective action. APTA. APTA’s bus and rail safety audits review the adequacy of transit agencies’ system safety program plans and the extent to which the plans have been implemented. FTA. FTA performs audits of the Tri-State Oversight Committee to determine whether the state oversight agency is carrying out its safety oversight program and to examine ways in which the overall program can be improved. National Transportation Safety Board (NTSB). NTSB has the authority to conduct investigations of accidents and make recommendations. The NTSB is currently investigating a November 2004 crash involving two Metrorail trains; it expects to issue a report on the results of this investigation in the fall of 2005. In addition, we plan to review the role of WMATA’s board of directors in providing oversight of WMATA’s management and operations. As noted earlier in this statement, WMATA is governed by a board of directors— composed of individuals appointed by each of the local jurisdictions WMATA serves—which sets policies and oversees all of WMATA’s activities, including budgeting, operations, development, expansion, safety, procurement, and other activities. To control costs and ensure results—especially for high-cost transportation infrastructure projects—Congress, the administration, and GAO have long recognized the importance of instituting spending safeguards and management oversight for the state and local governments and transportation agencies that receive federal funding. For example, certain federal policies have historically controlled the uses of federal transportation funds, prohibiting the use of these funds for operating expenses and requiring that the federal funds be matched to ensure the use of some local funds for capital infrastructure projects. In addition, a number of past, ongoing, and planned federal and local efforts provide insight into the benefits of management oversight and how it can be carried out. 
For example, in the 1980s, state legislation enhanced opportunities for New York City’s ailing Metropolitan Transit Authority to generate additional revenue while providing increased oversight to ensure accountability. Furthermore, FTA’s PMO program is designed to help ensure that grantees building major capital projects have the qualified staff and procedures needed to successfully plan and carry out those projects. We have also reported that safeguards should accompany any increased federal funds provided to the District of Columbia to address the structural imbalance between its costs and revenue-raising capacity. Finally, the surface transportation reauthorization bills currently before Congress include provisions to enhance management oversight controls for projects receiving federal funds, including establishing a new program to monitor the use of federal highway funds. Although we have not evaluated the application of these oversight mechanisms to WMATA, we believe they provide a number of options for Congress to consider as it weighs the question of providing additional federal funding to WMATA. The federal government has generally discouraged federal transit grants from being used to fund transit operating expenses, although policy in this area has shifted over time. Landmark legislation in 1964 established a program of federal capital expenditure grants to state and local governments. At that time, no grant money could be used for operating expenses because of concerns that such grants would discourage efficient operations of transit agencies and might even have the perverse effect of rewarding inefficient operations with funding assistance. However, that act was amended in 1974 to authorize federal subsidies to pay transit operating expenses, reflecting the alternative concern that limiting federal assistance to capital grants created incentives for local governments to inefficiently waste capital, such as by prematurely replacing buses. During the 1990s, views on how federal transit grants could be used shifted again, and limits were placed on the total amount of transit formula grants that could be used for operating expenses. In 1998, with the passage of the Transportation Equity Act for the 21st Century (TEA-21), transit agencies serving urban populations of 200,000 or more could no longer use funding from FTA’s Urbanized Area Formula Grants for operating expenses. According to FTA officials, this prohibition was instituted in part because federal policymakers believed that the federal government should pay only for the construction and maintenance of mass transit systems, not for their operation. However, TEA-21 did allow capital funds to be used for preventive maintenance, which included routine maintenance on rail cars and buses—activities that were previously classified as operations activities. After the events of September 11, 2001, we recommended a legislative exception to the prohibition on operations funding that would allow transit agencies to use Urbanized Area Formula Grants for security-related operating expenses. Transit agencies can spend 1 percent of formula funds on security-related operating expenses. The federal government has also historically used matching requirements in its transit and other transportation programs to stimulate local investment in transportation infrastructure and equipment. 
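As a simple illustration of how such a matching requirement works, the sketch below splits a project’s cost between a federal grant and the minimum local match, using the 80 percent maximum federal share that applies to the major capital transit programs discussed in the next paragraph. The project cost shown is a hypothetical placeholder, not a WMATA figure.

```python
# Minimal sketch of a federal matching requirement. The 80 percent maximum
# federal share mirrors the major capital transit programs discussed in the
# text; the project cost is a hypothetical placeholder.
def split_project_cost(total_cost: float, max_federal_share: float = 0.80):
    """Return (federal grant, minimum required local match) for a project."""
    federal_grant = total_cost * max_federal_share
    local_match = total_cost - federal_grant  # at least 20 percent of total cost
    return federal_grant, local_match

federal_grant, local_match = split_project_cost(100_000_000)  # hypothetical $100 million project
print(f"Federal grant (up to 80%): ${federal_grant:,.0f}")  # $80,000,000
print(f"Minimum local match (20%): ${local_match:,.0f}")    # $20,000,000
```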
Currently, major capital transit investment programs—including the New Starts and Rail and Fixed Guideway Modernization programs—provide grants that fund up to 80 percent of a project’s total costs while requiring a local match of at least 20 percent. During the late 1970s and early 1980s, the New York State Metropolitan Transit Authority (MTA), which includes New York City Transit’s subway and bus systems and the Long Island Rail Road, was in a state of fiscal crisis and operational decay. To help salvage the system, the state legislature passed legislation that provided MTA with the flexibility to generate additional revenue—through issuing bonds and notes and through the creation of a special tax district—needed to rebuild its aging infrastructure. The legislation also established several oversight bodies— which are still in place at MTA today—to help ensure that MTA’s funds would be well spent. They are as follows: The Metropolitan Transportation Capital Review Board. Appointed by the governor and composed of two members recommended by the New York State legislature and one each recommended by the governor and the mayor of New York City, this board reviews and approves, once every 5 years, MTA’s capital program plans for transit and railroad facilities. The plans include goals and objectives for capital spending, establish standards for service and operations, and include estimated costs and expected sources of revenue. The MTA Committee on Capital Program Oversight. This standing committee of MTA’s board of directors has various oversight responsibilities, including monitoring the (1) current and future availability of funds to be used in the capital program plans and (2) contract awards made by MTA. The committee issues quarterly reports on its activities and findings. The MTA Office of the Inspector General. This office was created as an independent oversight agency to investigate allegations of abuse, fraud, and deficiencies in the maintenance and operation of facilities. The Inspector General may also initiate other reviews of MTA’s operations and can recommend remedial actions to be taken by MTA and monitor their implementation. The Inspector General is appointed by the governor and submits annual reports of findings and recommendations to the governor. MTA is required to report quarterly to the Inspector General on the implementation status of all recommendations made in final reports. Since these oversight bodies were established, and with increased funding, MTA has improved its on-time performance and reliability. For example, the mean distance between failures has increased from less than 7,000 miles in 1981 to nearly 140,000 miles in 2003, according to MTA. FTA’s PMO program was established in the 1980s to safeguard the federal investment in major capital transit projects, which require large commitments of public resources, can be technically challenging, and often take years to construct. This program provides a continuous review and evaluation of the management of all major transit projects funded by FTA. Through provisions such as the following, the PMO program is designed to help ensure that grantees building major capital projects have the qualified staff and procedures needed to successfully build the projects: To receive federal financial assistance, grantees must develop and implement project management plans that address quality, scheduling, the budget, and other issues. 
Contractors monitor grantees’ projects to determine whether grantees are progressing on time, within budget, and according to approved plans and specifications. The contractors periodically report their findings and recommendations for any corrective actions that may be needed. In 2000, we reported and testified that FTA had improved the quality of the PMO program since the early 1990s, when we designated it as high risk because it was vulnerable to fraud, waste, abuse, and mismanagement. We concluded that the program had resulted in benefits for both grantees and FTA. Grantees have improved their controls over the cost, schedule, quality, and safety of their projects. FTA has gained a better understanding of the issues surrounding complex construction projects and an increased awareness of potential problems that could lead to schedule delays or cost increases. As contractors have brought cost and schedule issues to FTA’s attention, FTA has taken actions to help protect the federal investment and control projects’ costs and schedules. FTA officials told us that any additional federal funding provided to WMATA would be subject to the PMO program’s requirements only if those funds were distributed to WMATA through the U.S. Department of Transportation and FTA. Otherwise, WMATA’s spending from the additional funding would not likely be subject to any federal program oversight. In June 2004, we testified on the structural imbalance between the District of Columbia’s costs and revenue-raising capability, stating that if the federal government chooses to provide additional funding to the District to compensate for this imbalance, the government should implement safeguards to ensure that the funds are spent efficiently and effectively. In that testimony, we stated that such safeguards should be written into any legislation providing additional federal assistance to the District and could include the following: District officials should be required to report to Congress on how they plan to spend the federal assistance and regularly report on how it is being spent. Congress may consider further specifying the types of projects for which federal funds could be used or including a matching requirement to ensure that some local funds continue to be used for infrastructure and capital requirements. The House and Senate versions of the surface transportation reauthorization bill that are currently in conference committee contain provisions aimed at improving the financial integrity and project delivery times for surface transportation projects that receive federal financial assistance. For example: On the transit side, both the House and Senate versions of the bill would increase the amount of funds available to the Secretary of Transportation for management oversight of mass transportation construction projects receiving federal funds. The funds would be used to review and ensure compliance with federal requirements for project management. To support the need for such enhanced oversight, the committee report accompanying the House bill notes that comprehensive agency oversight, compliance review, and technical assistance are necessary for all major grant programs. On the highway side, both versions of the bill would require the Secretary of Transportation to establish an oversight program for the Federal-Aid Highway Program to promote the effective and efficient use of federal highway funds. 
As part of this new oversight program, the Federal Highway Administration (FHWA) would (1) review states’ financial management systems, (2) develop minimum standards for estimating project costs, and (3) evaluate state practices for awarding contracts and reducing project costs. In addition, highway projects receiving a certain amount of federal assistance—$500 million or more in the House bill and $1 billion or more in the Senate bill—would be subject to an increased level of FHWA oversight, including submitting a project management plan and an annual financial plan to FHWA documenting the project’s procedures for managing costs and schedules.

WMATA’s service to the nation’s capital and its associated additional responsibilities need to be considered when determining whether a greater federal role in providing financial assistance to, and oversight of, WMATA is warranted. In the end, it is up to Congress to decide whether or in what form to provide WMATA with additional federal funding in recognition of its support of the federal government. In addition, if Congress decides to provide WMATA with the additional funding, it is important for there to be reasonable assurances that the funds will be spent efficiently and effectively. WMATA is already subject to oversight from multiple entities, but it is unclear whether this oversight is sufficient to provide such assurances. WMATA’s existing oversight could be supplemented by including safeguards in any legislation that provides additional federal funding. Our research has shown that a number of options are available for such safeguards, although we have not fully analyzed their applicability to WMATA or their relative merits. The options include the following:

Require WMATA officials to report to Congress on how they plan to spend the federal assistance and regularly report on how it is being spent. For example, Congress could require officials to submit a plan to Congress on how they intend to spend the federal assistance—before any funds are obligated—and update this plan as circumstances or priorities change.

Further specify the types of projects for which federal funds could be used or include a matching requirement to ensure that some local funds continue to be used for infrastructure and capital requirements.

Require that any additional funding provided to WMATA be administered through DOT and FTA and therefore be subject to the PMO program.

Institute additional oversight bodies for WMATA, either through or independent of its board of directors.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or the other Members of the Committee may have.

For further information about this testimony, please contact me at (202) 512-2834 or siggerudk@gao.gov. Individuals making key contributions to this testimony include Seto Bagdoyan, Mark Bondo, Christine Bonham, Jay Cherlow, Elizabeth Eisenstadt, Edda Emmanuelli-Perez, Rita Grieco, Heather Halliwell, Maureen Luna-Long, Susan Michal-Smith, SaraAnn Moessbauer, Katie Schmidt, and Earl Christopher Woodard.

To determine the Washington Metropolitan Area Transit Authority’s (WMATA) responsibilities for supporting the federal government, we interviewed a wide array of federal and local officials including those from WMATA, the Federal Transit Administration (FTA), the Office of Personnel Management, the General Services Administration, the National Capital Planning Commission, the Metropolitan Washington Council of Governments, the U.S. Secret Service, the U.S. Capitol Police, and the District of Columbia Department of Transportation. We reviewed federal guidance on employees’ use of, and the placement of federal buildings near, mass transit and local and federal emergency planning guidance. We also used WMATA’s estimates of federal Metrorail ridership based on its 2002 passenger survey. Through our review of the survey methodology, and use of other corroborating evidence, we determined that the ridership estimates were sufficiently reliable for our purposes.

To determine the current funding challenges facing WMATA and the options proposed to address these challenges, we reviewed and analyzed the budgetary shortfall estimate prepared by the Metro Funding Panel, budget documents from WMATA, and prior GAO reports. We interviewed officials from WMATA and local transportation experts who served on the funding panel.

To determine the entities that currently provide oversight of WMATA and the focus of their recent reviews, we interviewed WMATA officials and reviewed selected reports and audits that have been issued by WMATA’s oversight bodies since the beginning of calendar year 2003. Our review included the following:

FTA’s Project Management Oversight (PMO) program contractor reports

FTA’s most recent Triennial Review

The independent external auditor’s review of WMATA’s financial statements and internal controls as required under the Single Audit Act

The American Public Transportation Association’s peer review reports

Although FTA carries out a number of reviews of transit agencies in addition to the Triennial Review and the PMO reports, we selected the Triennial Review because it covers grantees’ compliance with a wide range of statutory and administrative requirements, and we selected the PMO reports because this program provides oversight of WMATA’s major capital projects, which represent a significant part of WMATA’s budget. For this statement, we did not analyze any oversight entities or reports related to safety, such as those of the Tri-State Oversight Committee, the National Transportation Safety Board, or the American Public Transportation Association. We plan to address these, as well as FTA’s additional compliance reviews, as part of our ongoing work.

To identify applicable examples of spending safeguards and management oversight of any additional federal assistance provided to WMATA, should Congress decide to provide such assistance, we reviewed prior GAO work on surface transportation funding and management oversight, as well as other documents on transportation planning and finance, and interviewed officials with expertise in the transit industry, transportation finance, and transportation planning.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In recent years, the Washington Metropolitan Area Transit Authority (WMATA) has faced serious financial and budgetary problems as well as continuing challenges related to the safety and reliability of its transit services. At the same time, ridership is at an all-time high, and WMATA continues to provide critical services and considerable benefits to the Washington region and to the federal government. This statement discusses (1) WMATA's responsibilities for serving the interests of the federal government, including the agency's role in transporting federal employees and visitors to the nation's capital and in supporting homeland security for the Washington metropolitan region; (2) the current funding challenges facing WMATA and the options proposed to address these challenges; (3) preliminary information on some of the entities that currently provide oversight of WMATA and the focus of their recent reviews; and (4) some considerations and options in instituting spending safeguards and oversight of any additional federal assistance provided to WMATA, should Congress decide to provide such assistance. GAO discussed this testimony with WMATA and FTA officials, who provided comments and additional information that GAO incorporated as appropriate. WMATA transports a substantial share of the federal workforce and provides an important means of transportation to special events that occur in Washington, D.C., as the nation's capital. WMATA's Metro Transit Police assists federal law enforcement agencies by providing expertise in civil disturbance management and explosives detection and by training first responders in emergency management techniques specific to transit environments. WMATA's Metrorail and Metrobus are the preferred means of transportation in an emergency scenario requiring evacuation, and both the regional and the District of Columbia emergency transportation plans rely heavily on them. A regional funding panel estimated WMATA's budgetary shortfall at $2.4 billion for fiscal years 2006 through 2015 if WMATA were to fund many of the projects in its 10-year capital improvement plan. This shortfall may be even greater because the panel's shortfall calculation did not include the costs of providing specialized transportation for persons with disabilities, as required under the Americans with Disabilities Act. To deal with WMATA's funding shortfall, the regional panel concluded that the region needs to develop a dedicated source of revenue for WMATA (e.g., local sales tax) and that the federal government needs to provide significant contributions because of the benefits it receives from WMATA. However, given the large federal budget deficit and competing claims on federal resources, GAO believes WMATA may also need to reexamine its own spending priorities. As part of its ongoing work on WMATA's oversight entities, GAO found that WMATA is subject to oversight from multiple entities that, since 2003, have issued hundreds of reports--which vary in scope--on a broad range of topics. These entities include WMATA's Auditor General, an independent external auditor, the Federal Transit Administration (FTA), and industry peer review panels. The entities have made recommendations to WMATA, which WMATA has generally implemented or plans to implement. As part of its ongoing work, GAO plans to analyze these reviews in more detail to determine if they comprehensively identify and address WMATA's overall management and operational challenges. 
GAO's ongoing work will also cover other FTA reviews and safety reviews of WMATA's operations. Congress, the administration, and GAO have long recognized the benefits of having spending safeguards and management oversight for entities that receive federal funding. If Congress decides to provide WMATA with additional federal funding, there needs to be reasonable assurance that the funds will be spent effectively. We identified several options for additional oversight that could be incorporated into legislation that provides additional federal funding to WMATA, including having WMATA officials periodically report to Congress on how the funding is being spent; specifying the types of projects for which federal funds could be used; and requiring that any additional federal funding be subject to FTA's oversight programs.
Current surface transportation programs do not effectively address the transportation challenges the nation faces. Collectively, post-interstate-era programs addressing highway, transit, and safety are an agglomeration that has been established over half a century without a well-defined vision of the national interest and federal role. Many surface transportation programs are not linked to performance of the transportation system or grantees, as most highway, transit, and safety funds are distributed through formulas that only indirectly relate to needs and may have no relationship to performance. In addition, the programs often do not use the best tools or best approaches, such as using more rigorous economic analysis to select projects. Finally, the fiscal sustainability of the numerous highway, transit, and safety programs funded by the Highway Trust Fund is in doubt, as a result of increased spending from the fund without commensurate increases in revenues. Since the Federal-Aid Highway Act of 1956 funded the modern federal highway program, the federal role in surface transportation has expanded to include broader goals, more programs, and a variety of program structures. Although most surface transportation funds remain dedicated to highway infrastructure, federal surface transportation programs have grown in number and complexity, incorporating additional transportation, environmental, and societal goals. While some of these goals have led to new grant programs in areas such as transit, highway safety, and motor carrier safety, others have led to additional procedural requirements for receiving federal aid, such as environmental review and transportation planning requirements. This expansion has also created a variety of grant structures and federal approaches for establishing priorities and distributing federal funds. Most highway infrastructure funds continue to be distributed to states in accordance with individual grant program formulas and eligibility requirements. However, broad program goals, eligibility requirements, and authority to transfer funds between highway programs give state and local governments broad discretion to allocate highway infrastructure funds according to their priorities. Although some transit formula grant programs also give grantees considerable discretion to allocate funds, a portion of transit assistance requires grantees to compete for funding based on specific criteria and goals. Similarly, basic safety formula grant programs are augmented by smaller programs that directly target federal funds to specific goals and actions using financial incentives and penalty provisions. We have found that many federal surface transportation programs are not effective at addressing key transportation challenges, such as increasing congestion and growing freight demand, because federal goals and roles are unclear, and many programs lack links to needs or performance. The goals of federal surface transportation programs are numerous and sometimes conflicting, which contributes to a corresponding lack of clarity in the federal role. For example, despite statutes and regulations that call for an intermodal approach (one that creates connections across modes), only one federal program is specifically directed at intermodal infrastructure. Most highway, transit, and safety grant funds are distributed through formulas that have only an indirect relationship to needs and many have no relationship to performance or outcomes. 
The largest safety grants are more likely than highway grants to be focused on goals rather than specific transportation systems such as the interstate system, and several highway safety and motor carrier safety grants allocate incentive funds on the basis of performance or state efforts to carry out specific safety-related activities. However, since the majority of surface transportation funds are distributed without regard to performance, it is difficult to assess the impact of recent record levels of federal highway expenditures. For example, while the condition of highways showed some improvement between 1997 and 2004, traffic congestion increased in the same period. Mechanisms to link programs to goals also appear insufficient because, particularly within the Federal-aid Highway program, federal rules for transferring funds between different highway infrastructure programs are flexible, weakening the distinctions between individual programs (see fig. 1). Surface transportation programs often do not employ the best tools and approaches to ensure effective investment decisions. Rigorous economic analysis does not generally drive the investment decisions of state and local governments—in a 2004 survey of state departments of transportation, 34 of 43 state departments of transportation cited political support and public opinion as very important factors, whereas 8 said the same of the ratio of benefits to costs. The federal government also does not possess adequate data to assess outcomes or implement performance measures. For example, the Department of Transportation (DOT) does not have a central source of data on congestion, even though it has identified congestion as a top priority. While some funds can be transferred between highway and transit programs, modally stovepiped funding nevertheless impedes efficient planning and project selection. Additionally, tools to make better use of existing infrastructure, such as intelligent transportation systems and congestion pricing, have not been deployed to their full potential. The solvency of the federal surface transportation program is at risk because expenditures now exceed revenues for the Highway Trust Fund, and projections indicate that the balance of the Highway Trust Fund will soon be exhausted. According to the Congressional Budget Office (CBO), the Highway Account will face a shortfall in 2009, the Transit Account in 2012. The rate of expenditures has affected the fund’s fiscal sustainability. As a result of the Transportation Equity Act for the 21st Century (TEA-21), Highway Trust Fund spending rose 40 percent from 1999 to 2003 and averaged $36.3 billion in contract authority per year. The upward trend in expenditures continued under the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), which provided an average of $57.2 billion in contract authority per year. While expenditures from the trust fund have grown, revenues into the fund have not kept pace. The current fuel tax of 18.4 cents per gallon has been in place since 1993, and the buying power of the fixed cents-per-gallon amount has since been eroded by inflation. The reallocation to the Highway Trust Fund of 4.3 cents of federal fuel tax previously dedicated to deficit reduction provided an influx of funds beginning in 1997. However, this influx has been insufficient to sustain current spending levels.
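The erosion of the fuel tax’s buying power can be shown with a simple deflation calculation. The sketch below is a minimal illustration: the 18.4-cent rate and the 1993 start date come from the discussion above, but the cumulative inflation figure is an assumed placeholder, not official price data.

```python
# Minimal sketch of how a fixed cents-per-gallon tax loses purchasing power.
# The 18.4-cent rate (unchanged since 1993) is from the text; the cumulative
# inflation figure is an assumed placeholder, not CPI data.
NOMINAL_RATE_CENTS = 18.4
assumed_cumulative_inflation = 0.45  # hypothetical price growth since 1993

real_rate_in_1993_cents = NOMINAL_RATE_CENTS / (1 + assumed_cumulative_inflation)
print(f"Nominal rate:                   {NOMINAL_RATE_CENTS:.1f} cents per gallon")
print(f"Purchasing power in 1993 cents: {real_rate_in_1993_cents:.1f} cents per gallon")
```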
Furthermore, while federal funding for transportation has increased, the total funding for transportation may not increase to the same extent because federal funds may be substituted for state and local funds. Thus, added federal funds may not lead to a commensurate increase in the total investment in highways because state and local governments can shift nonfederal funds away from highways to other purposes. Increases in federal funding do appear to reduce state spending for the same purpose, reducing the return on the federal investment. Research estimates that about 50 percent of each additional federal grant dollar for the highway program displaces funds that states would otherwise have spent on highways. As we have previously reported, this situation argues for a fundamental reexamination of the federal approach to surface transportation problems and a restructuring of federal programs to create more focused, performance-based, and sustainable programs. In cases for which there is a significant national interest, maintaining strong federal financial support and a more direct federal involvement in the program may be needed. In other cases, functions may best be carried out by other levels of government or not at all. There may also be cases for which federal financial support is desirable but a more results-oriented approach is appropriate. In addition, depending on the transportation issue and the desired goals, different options and approaches may be appropriate for different problems. Restructuring the current approach to transportation problems will take time, but a vision and strategy are needed to begin the process of transforming to a set of policies and programs to effectively address the nation’s transportation needs and priorities. Through our prior analyses of existing programs, we identified a framework of principles that could help drive an assessment of proposals for restructuring and funding federal surface transportation programs. These principles include (1) creating well-defined goals based on identified areas of national interest, (2) establishing and clearly defining the federal role in achieving each goal, (3) incorporating performance and accountability into funding decisions, (4) employing the best tools and approaches to improve results and emphasize return on investment, and (5) ensuring fiscal sustainability. We have also developed a series of illustrative questions that can be used to determine the extent to which restructuring and funding proposals are aligned with each principle. We developed these principles and illustrative questions based on prior analyses of existing surface transportation programs as well as a body of work that we have developed for Congress, including GAO’s High-Risk, Performance and Accountability, and 21st Century Challenges reports. The principles do not prescribe a specific approach to restructuring or funding, but they do provide key attributes that will help ensure that restructured surface transportation programs address current challenges. Our previous work has shown that identifying areas of national interest is an important first step in any proposal to restructure and fund surface transportation programs. In identifying areas of national interest, proposals should consider existing 21st century challenges and how future trends could affect emerging areas of national importance—as well as how the national interest and federal role may vary by area. 
For example, experts have suggested that federal transportation policy should recognize emerging national and global imperatives, such as reducing the nation’s dependence on oil and minimizing the impact of the transportation system on global climate change. Once the various national interests in surface transportation have been identified, proposals should also clarify specific goals for federal involvement in surface transportation programs. Goals should be specific and outcome-based to ensure that resources are targeted to projects that further the national interest. The following illustrative questions can be used to determine the extent to which proposals to restructure and fund surface transportation programs create well-defined goals based on identified areas of national interest. To what extent are areas of national interest clearly defined? To what extent are areas of national interest reflective of future trends? To what extent are goals defined in relation to identified areas of national interest? After the various national interests and specific goals for federal involvement in surface transportation have been identified, the federal role in working toward each goal should be established. The federal role should be defined in relation to the roles of state and local governments, regional entities, and the private sector. Where the national interest is greatest, the federal government may play a more direct role in setting priorities and allocating resources as well as fund a higher share of program costs. Conversely, where the national interest is less evident, state and local governments and others could assume more responsibility. For example, efforts to reduce transportation’s impact on greenhouse gas emissions may warrant a greater federal role than other initiatives, such as reducing urban congestion, since the impacts of greenhouse gas emissions are widely dispersed, whereas the impacts of urban congestion may be more localized. The following illustrative questions can be used to determine the extent to which proposals to restructure and fund the surface transportation programs establish and clearly define the federal role in achieving each goal. To what extent is the federal role directly linked to defined areas of national interest and goals? To what extent is the federal role defined in relation to the roles of state and local governments, regional entities, and the private sector? To what extent does the proposal consider how the transportation system is linked to other sectors and national policies, such as environmental, security, and energy policies? Our previous work has shown that an increased focus on performance and accountability for results could help the federal government target resources to programs that best achieve intended outcomes and national transportation priorities. Tracking specific outcomes that are clearly linked to program goals could provide a strong foundation for holding grant recipients responsible for achieving federal objectives and measuring overall program performance. In particular, substituting specific performance measures for the current federal procedural requirements could help make the program more outcome-oriented. For example, if reducing congestion were an established federal goal, outcome measures for congestion, such as reduced travel time, could be incorporated into the programs to hold state and local governments responsible for meeting specific performance targets. 
Furthermore, directly linking the allocation of resources to the program outcomes would increase the focus on performance and accountability for results. Incorporating incentives or penalty provisions into grants can further hold grantees and recipients accountable for achieving results. The following illustrative questions can be used to determine the extent to which proposals to restructure and fund surface transportation programs incorporate performance and accountability into funding decisions. Are national performance goals identified and discussed in relation to state, regional, and local performance goals? To what extent are performance measures outcome-based? To what extent is funding linked to performance? To what extent does the proposal include provisions for holding stakeholders accountable for achieving results? We have previously reported that the effectiveness of any overall federal program design can be increased by promoting and facilitating the use of the best tools and approaches to improve results and emphasize return on investment. Importantly, given the projected growth in federal deficits, constrained state and local budgets, and looming Social Security and Medicare spending commitments, the resources available for discretionary programs will be more limited—making it imperative to maximize the national public benefits of any federal investment through a rigorous examination of the use of such funds. A number of specific tools and approaches can be used to improve results and return on investment, including using economic analysis, such as benefit-cost analysis, in project selection; requiring grantees to conduct post-project evaluations; creating incentives to better utilize existing infrastructure; providing states and localities with greater flexibility to use certain tools, such as tolling and congestion pricing; and requiring maintenance-of-effort provisions in grants. Using these tools and approaches could help surface transportation programs more directly address national transportation priorities. The following illustrative questions can be used to determine the extent to which proposals to restructure and fund surface transportation programs employ the best tools and approaches to improve results and emphasize return on investment. To what extent do the proposals consider how costs and revenues will be shared among federal, state, local, and private stakeholders? To what extent do the proposals address the need to better align fees and taxes with use and benefits? To what extent are trade-offs between efficiency and equity considered? Do the tools and approaches align with the level of federal involvement in a given policy area? To what extent do the proposals provide flexibility and incentives for state and local governments to choose the most appropriate tool in the toolbox? Our previous work has shown that transportation funding, and the Highway Trust Fund in particular, faces an imbalance of revenues and expenditures and other threats to its long-term sustainability. Furthermore, the sustainability of transportation funding should also be seen in the context of the broader, governmentwide problem of fiscal imbalance. The federal role in transportation funding must be reexamined to ensure that it is sustainable in this new fiscal reality. A sustainable surface transportation program will require targeted investment, with adequate return on investment, from not only the federal government but also state and local governments and the private sector.
The following illustrative questions can be used to determine the extent to which proposals to restructure and fund surface transportation programs ensure fiscal sustainability. To what extent do the proposals reexamine current and future spending on surface transportation programs? Are the recommendations affordable and financially stable over the long term? To what extent are the recommendations placed in the context of federal deficits, constrained budgets, and other spending commitments, and to what extent do they withstand a rigorous examination of the use of federal funds? To what extent are recommendations considered in the context of trends that could affect the transportation system in the future, such as population growth, increased fuel efficiency, and increased freight traffic? Current concerns about the sustainability and performance of existing programs suggest that this is an opportune time for Congress to more clearly define the federal role in transportation and improve progress toward specific, nationally defined outcomes. Given the scope of the needed transformation, it may be necessary to shift policies and programs incrementally or on a pilot basis to gain practical lessons for a coherent, sustainable, and effective national program and funding structure to best serve the nation for the 21st century. Absent changes in planned spending, a variety of funding and financing options will likely be needed to address projected transportation funding shortfalls. Although some of the demand for additional investment in transportation could be reduced, there is a growing consensus that some level of additional investment in transportation is warranted. A range of options—from altering existing or introducing new funding approaches to employing various financing mechanisms—could be used to help meet the demand for additional investments. Each of these options has different merits and challenges, and the selection of any of them will likely involve trade-offs among different policy goals. Furthermore, the suitability of any of these options depends on the level of federal involvement or control that policymakers desire for a given area of policy. However, as we have reported, when infrastructure investment decisions are made based on sound evaluations, these options can lead to an appropriate blend of public and private funds to match public and private costs and benefits. Estimates from multiple sources indicate that additional investment in the transportation system could be warranted. For example, in its January 2008 report, the National Surface Transportation Policy and Revenue Study Commission (Policy Commission) recommended an annual investment of about $225 billion from all levels of government in the surface transportation system—an increase of about $140 billion from current spending levels. Similarly, the Congressional Budget Office recently estimated that an annual investment of about $165 billion in surface transportation could be economically justifiable. In addition, in its February 2008 interim report, the National Surface Transportation Infrastructure Financing Commission (Financing Commission) noted that one of its base assumptions is that there is a gap between current funding levels and investment needs. However, some of the demand for additional investment in transportation infrastructure could be reduced.
We have previously reported that the ways in which revenue is generated and distributed can influence the decisions made by users as well as decision-making and programs at the state and local levels. In particular, our previous work has shown that current funding and decision-making processes provide a built-in preference for projects that build or maintain transportation infrastructure rather than try to use existing infrastructure more efficiently—which would reduce the overall demand for additional investments. CBO also recently reported that some of the demand for additional spending on infrastructure could be met by providing incentives to use existing infrastructure more efficiently. In its February 2008 interim report, the Financing Commission noted the need to use new approaches and technologies to maximize the use of current capacity. We have also previously reported that increased federal highway grants influence states and localities to substitute federal funds for funds they otherwise would have spent on highways, freeing those funds for other purposes. Consequently, additional federal investments in transportation do not necessarily translate into commensurate levels of spending by the states and localities on transportation. Addressing this “leakage” with such tools as maintenance-of-effort requirements could maximize the effectiveness of federal investments. The principles we have identified for restructuring the surface transportation programs can also be used as a framework for considering levels of investment and the funding and financing options described below. For example, in defining the federal role in funding transportation, we have previously reported that where the national interest is greatest, having the federal government fund a higher share of program costs could be appropriate. Conversely, where the national interest is less evident, state and local governments and others could assume more responsibility. In addition, incorporating incentives or penalty provisions into different funding and financing approaches can help ensure performance and accountability. Various existing funding approaches could be altered, or new funding approaches could be developed, to help fund investments in the nation’s infrastructure. These various approaches can be grouped into two categories: taxes and user fees. A variety of taxes have been and could be used to fund the nation’s infrastructure, including excise, sales, property, and income taxes. For example, federal excise taxes on motor fuels are the primary source of funding for the federal surface transportation program. Fuel taxes are attractive because they have provided a relatively stable stream of revenues and the collection and enforcement costs are relatively low. However, fuel taxes do not currently convey to drivers the full costs of their use of the road—such as the costs of wear and tear, congestion, and pollution. Moreover, federal motor fuel taxes have not been increased since 1993—and thus the purchasing power of fuel tax revenues has eroded with inflation. As CBO has previously reported, the existing fuel taxes could be altered in a variety of ways to address this erosion, including increasing the per-gallon tax rate and indexing the rates to inflation. Some transportation stakeholders have suggested exploring the potential of using a carbon tax, or other carbon pricing strategies, to help fund infrastructure investments.
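The indexing option just mentioned can be illustrated with a minimal sketch. The 18.4-cent starting rate is the federal gasoline excise tax rate in place since 1993; the annual inflation figures below are hypothetical placeholders.

```python
# Sketch of indexing a per-gallon fuel tax to inflation.
# The 18.4-cent starting rate is the federal gasoline tax set in 1993;
# the annual inflation figures below are hypothetical placeholders.

BASE_RATE_CENTS = 18.4
hypothetical_inflation = [0.025, 0.030, 0.020, 0.028]  # assumed annual CPI changes

rate = BASE_RATE_CENTS
for year, inflation in enumerate(hypothetical_inflation, start=1):
    rate *= (1 + inflation)
    print(f"Year {year}: indexed rate = {rate:.2f} cents per gallon")

# Without indexing, the nominal rate stays at 18.4 cents while its
# purchasing power erodes by the same inflation factors.
```

The point of the sketch is simply that an indexed rate holds its purchasing power, while a fixed nominal rate does not.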
In a system of carbon taxes, fossil fuels would be taxed, with the tax proportional to the amount of carbon dioxide released when each fuel is burned. Because a carbon tax could have a broad effect on consumer decisions, we have previously reported that it could be used to complement Corporate Average Fuel Economy standards, which require manufacturers to meet fuel economy standards for passenger cars and light trucks to reduce oil consumption. A carbon tax would create incentives that could affect a broader range of consumer choices as well as provide revenue for infrastructure. Another funding source for infrastructure is user fees. The concept underlying user fees—that is, users pay directly for the infrastructure they use—is a long-standing aspect of many infrastructure programs. Examples of user fees that could be altered or introduced include fees based on vehicle miles traveled (VMT) on roadways; freight fees, such as a per-container charge; congestion pricing of roads; and tolling. VMT fees. To more directly reflect the amount a vehicle uses the road, users could be charged a fee based on the number of vehicle miles traveled. In 2006, the Oregon Department of Transportation conducted a pilot program designed to test the technological and administrative feasibility of a VMT fee. The pilot program demonstrated that a VMT fee could be implemented to replace the fuel tax as the principal source of transportation revenue by utilizing a Global Positioning System (GPS) to track miles driven and collecting the VMT fee ($0.012 per mile traveled) at fuel pumps that can read information from the GPS. As we have previously reported, using a GPS could also track mileage in high congestion zones, and the fee could be adjusted upward for miles driven in these areas or during more congested times of day, such as rush hour—a strategy that might reduce congestion and save fuel. In addition, the system could be designed to apply different fees to vehicles, depending on their fuel economy. On the federal level, a VMT fee could be based on odometer readings, which would likely be a simpler and less costly way to implement such a program. A VMT fee—unless it is adjusted based on the fuel economy of the vehicle—does not provide incentives for customers to buy vehicles with higher fuel economy ratings because the fee depends only on mileage. Also, because the fee would likely be collected from individual drivers, a VMT fee could be expensive for the government to implement, potentially making it a less cost-effective approach than a motor fuel or carbon tax. The Oregon study also identified other challenges, including concerns about privacy and technical difficulties in retrofitting vehicles with the necessary technology. Freight fees. Given the importance of freight movement to the economy, the Policy Commission recently recommended a new federal freight fee to support the development of a national program aimed at strategically expanding capacity for freight transportation. While the volume of domestic and international freight moving through the country has increased dramatically and is expected to continue growing, the capacity of the nation’s freight transportation infrastructure has not increased at the same rate as demand.
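Returning briefly to the VMT fee described above, the sketch below shows one way such a charge might be computed. The $0.012-per-mile base rate is the Oregon pilot figure cited above; the congestion surcharge and fuel-economy adjustment factors are hypothetical assumptions illustrating the kinds of adjustments the passage mentions.

```python
# Illustrative VMT fee calculation. The $0.012 base rate comes from the
# Oregon pilot described above; the congestion multiplier and fuel-economy
# baseline are hypothetical assumptions.

BASE_RATE = 0.012            # dollars per mile (Oregon pilot rate)
CONGESTION_MULTIPLIER = 2.0  # assumed multiplier for congested-zone miles
FUEL_ECONOMY_BASELINE = 25.0 # assumed baseline miles per gallon

def vmt_fee(total_miles, congested_miles, vehicle_mpg):
    """Compute a periodic VMT charge with congestion and fuel-economy adjustments."""
    normal_miles = total_miles - congested_miles
    fee = normal_miles * BASE_RATE
    fee += congested_miles * BASE_RATE * CONGESTION_MULTIPLIER
    # Vehicles with better-than-baseline fuel economy pay somewhat less, so the
    # fee does not erase the incentive to buy more efficient vehicles.
    fee *= FUEL_ECONOMY_BASELINE / max(vehicle_mpg, 1.0)
    return round(fee, 2)

print(vmt_fee(total_miles=1000, congested_miles=150, vehicle_mpg=32))
```

Charging congested-zone miles at a higher rate is essentially the congestion-pricing idea discussed below, applied to a per-mile fee.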
The Policy Commission notes that a freight fee, such as a per-container charge, could help fund projects that remedy chokepoints and increase throughput. The Policy Commission also recommended that a portion of the customs duties, which are assessed on imported goods, be used to fund capacity improvements for freight transportation. The majority of customs duties currently collected, however, are deposited in the U.S. Treasury’s general fund for the general support of federal activities. Therefore, designating a portion of customs duties for surface transportation funding would not create a new source of revenue, but rather transfer funds from the general fund. Congestion pricing. As we have previously reported, congestion pricing, or road pricing, attempts to influence driver behavior by charging fees during peak hours to encourage users to shift to off-peak periods, use less congested routes, or use alternative modes. Congestion pricing can also help guide capital investment decisions for new transportation infrastructure. In particular, as congestion increases, toll rates also increase, and such increases (sometimes referred to as “congestion surcharges”) signal increased demand for physical capacity, indicating where capital investments to increase capacity would be most valuable. Furthermore, these congestion surcharges can potentially enhance mobility by reducing congestion and the demand for roads when the surcharges vary according to congestion to maintain a predetermined level of service. The most common form of congestion pricing in the United States is high-occupancy toll lanes, which are priced lanes that offer drivers of vehicles that do not meet the occupancy requirements the option of paying a toll to use lanes that are otherwise restricted for high-occupancy vehicles. Financing mechanisms can provide flexibility for all levels of government when funding additional infrastructure projects, particularly when traditional pay-as-you-go funding approaches, such as taxes or fees, are not set at high enough levels to meet demands. The federal government currently offers several programs to provide state and local governments with incentives such as bonds, loans, and credit assistance to help finance infrastructure. Financing mechanisms can create potential savings by accelerating projects to offset rapidly increasing construction costs and offer incentives for investment from state and local governments and from the private sector. However, each financing strategy is, in the final analysis, a form of debt that ultimately must be repaid with interest. Furthermore, since the federal government’s cost of capital is lower than that of the private sector, financing mechanisms, such as bonding, may be more expensive than timely, full, and up-front appropriations. Finally, if the federal government chooses to finance infrastructure projects, policymakers must decide how borrowed dollars will be repaid, either by users or by the general population, now or in the future, through increases in taxes or reductions in other government services. A number of available mechanisms can be used to help finance infrastructure projects. Examples of these financing mechanisms follow. Bonding. A number of bonding strategies—including tax-exempt bonds, private activity bonds, Grant Anticipation Revenue Vehicles (GARVEE) bonds, and Grant Anticipation Notes (GAN)—offer flexibility to bridge funding gaps when traditional revenue sources are scarce.
For example, state-issued GARVEE or GAN bonds provide capital in advance of expected federal funds, allowing states to accelerate highway and transit project construction and thus potentially reduce construction costs. Through April 2008, 20 states and two territories had issued approximately $8.2 billion of GARVEE-type debt financing, and 20 other states were actively considering bonding or seeking legislative authority to issue GARVEEs. Furthermore, SAFETEA-LU authorized the Secretary of Transportation to allocate $15 billion in tax-exempt bonds for qualified highway and surface freight transfer facilities. To date, $5.3 billion has been allocated for six projects. Several bills have been introduced in this Congress that would increase investment in the nation’s infrastructure through bonding. For example, the Build America Bonds Act would provide $50 billion in new infrastructure funding through bonding. Although bonds can provide up-front capital for infrastructure projects, they can be more expensive for the federal government than traditional federal grants. This higher expense results, in part, because the government must compensate the investors for the risks they assume through an adequate return on their investment. Loans, loan guarantees, and credit assistance. The federal government currently has two programs designed to offer credit assistance for surface transportation projects. The Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA) authorized the Federal Highway Administration to provide credit assistance, in the form of direct loans, loan guarantees, and standby lines of credit, for projects of national significance. A similar program, Railroad Rehabilitation and Improvement Financing (RRIF), offers loans to acquire, improve, develop, or rehabilitate intermodal or rail equipment and develop new intermodal railroad facilities. To date, 15 TIFIA projects have been approved, totaling over $4.8 billion in credit assistance, and the RRIF program has approved 21 loan agreements worth more than $747 million. These programs are designed to leverage federal funds by attracting substantial nonfederal investments in infrastructure projects. However, the federal government assumes a level of risk when it makes or guarantees loans for projects financed with private investment. Revolving funds. Revolving funds can be used to dedicate capital to be loaned for qualified infrastructure projects. In general, loaned dollars are repaid, recycled back into the revolving fund, and subsequently reinvested in the infrastructure through additional loans. Such funds exist at both the federal and the state levels and are used to finance various infrastructure projects ranging from highways to water mains. For example, two federal funds support water infrastructure financing: the Clean Water State Revolving Fund for wastewater facilities and the Drinking Water State Revolving Fund for drinking water facilities. Under each of these programs, the federal government provides seed money to states, which they supplement with their own funds. These funds are then loaned to local governments and other entities for water infrastructure construction and upgrades and various water quality projects. In addition, State Infrastructure Banks (SIBs)—capitalized with federal and state matching funds—are state-run revolving funds that make loans and provide credit enhancements and other forms of nongrant assistance to infrastructure projects.
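The revolving mechanics described above, in which seed capital is lent out, repaid with interest, and lent again, can be sketched in a few lines. The seed amount, interest rate, and default rate below are hypothetical.

```python
# Simplified revolving-fund cycle: loans go out, repayments (less defaults)
# flow back in and become available for new loans. All parameters are
# hypothetical, for illustration only.

seed_capital = 100.0   # $ millions of federal/state seed money (assumed)
interest_rate = 0.03   # assumed interest charged on fund loans
default_rate = 0.02    # assumed share of principal never repaid

available = seed_capital
for cycle in range(1, 5):
    loaned = available                                # lend everything on hand
    repaid = loaned * (1 - default_rate) * (1 + interest_rate)
    available = repaid                                # recycled for the next round of loans
    print(f"Cycle {cycle}: loaned ${loaned:.1f}M, recycled ${available:.1f}M")
```

If defaults and inflation outpace the interest earned, the fund's capitalized value erodes rather than grows, which is the challenge noted in the discussion that follows.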
Through June 2007, 33 SIBs have made approximately 596 loan agreements worth about $6.2 billion to leverage other available funds for transportation projects across the nation. Furthermore, other funds—such as a dedicated national infrastructure bank—have been proposed to increase investment in infrastructure of national or regional significance. A challenge for revolving funds in general is maintaining their capitalized value. Defaults on loans and inflation can reduce the capitalized value of the fund—necessitating an infusion of capital to continue the fund’s operations. Another important and emerging vehicle for funding investments in transportation is public-private partnerships. In February 2008 we reported on highway public-private partnerships. These arrangements show promise as a viable alternative, where appropriate, to help meet growing and costly transportation demands and have the potential to provide numerous benefits to the public sector. The highway public-private partnerships created to date have resulted in advantages from the perspective of state and local governments, such as the construction of new infrastructure without using public funding and the ability to obtain funds by extracting value from existing facilities for reinvestment in transportation and other public programs. For example, the state of Indiana received $3.8 billion from leasing the Indiana Toll Road and used those proceeds to fund a 10-year statewide transportation plan. Highway public-private partnerships potentially provide other benefits, including the transfer or sharing of project risks to the private sector. Such risks include those associated with construction costs and schedules and having sufficient levels of traffic and revenues to be financially viable. In addition, the public sector can potentially benefit from increased efficiencies in operations and life-cycle management, such as increased use of innovative technologies. Finally, through the use of tolling, highway public-private partnerships offer the potential to price highways to better reflect the true costs of operating and maintaining them and to increase mobility by adjusting tolls to manage demand, as well as the potential for more cost-effective investment decisions by private investors. Highway public-private partnerships also entail potential costs and risks. Most importantly, there is no “free” money in public-private partnerships. While highway public-private partnerships can be used to obtain financing for highways, these funds are largely a new source of borrowed funds—a form of privately issued debt that must be repaid, by road users and potentially over a period of several generations, to private investors seeking a return on their investment. Though concession agreements can limit the extent to which a concessionaire can raise tolls, it is likely that tolls will increase on a privately operated highway to a greater extent than they would on a publicly operated toll road. To the extent that a private concessionaire gains market power by controlling a road for which there are no other viable travel alternatives, the potential also exists that the public could pay tolls that are higher than tolls based on the cost of the facilities, including a reasonable rate of return.
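The trade-off just described, a large up-front payment in exchange for decades of toll revenue, can be illustrated with a rough present-value calculation. Every figure below (traffic volume, toll level, escalation rate, discount rate, and term) is a hypothetical assumption, not an estimate for the Indiana Toll Road or any other concession.

```python
# Rough sketch of how an investor might value a long-term toll concession:
# discount an assumed stream of escalating toll revenues back to today.
# All parameters are hypothetical.

ANNUAL_TRIPS = 30_000_000   # assumed toll transactions per year
INITIAL_TOLL = 4.00         # assumed average toll, dollars
TOLL_ESCALATION = 0.03      # assumed annual toll increase
DISCOUNT_RATE = 0.08        # assumed investor discount rate
TERM_YEARS = 75             # assumed concession length

present_value = 0.0
toll = INITIAL_TOLL
for year in range(1, TERM_YEARS + 1):
    revenue = ANNUAL_TRIPS * toll
    present_value += revenue / (1 + DISCOUNT_RATE) ** year
    toll *= (1 + TOLL_ESCALATION)

print(f"Present value of toll stream: ${present_value / 1e9:.1f} billion")
```

Higher assumed toll escalation or a lower discount rate raises the value of the revenue stream, which is one reason toll-setting and revenue-sharing terms are central issues in these agreements.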
Additionally, because large up-front concession payments have, in part, been used to fund immediate needs, it remains to be seen whether these agreements will provide long-term benefits to future generations who will potentially be paying progressively higher toll rates throughout the length of a concession agreement. Highway public-private partnerships are also potentially more costly than traditional public procurement—for example, there are costs associated with the need to hire financial and legal advisors. In short, while highway public-private partnerships have promise, they are not a panacea for meeting all transportation system demands. Ultimately, the extent to which public-private partnerships can be used as a tool to help meet the nation’s transportation financing challenges will depend on the ability of states to effectively manage and implement them. For example, states must have appropriate enabling legislation in place and the institutional ability to manage complex contractual mechanisms—either in the form of in-house expertise or through contractors. Most importantly, the extent to which public-private partnerships can be used as a tool to help meet the nation’s transportation funding challenges will depend on how well states are able to weigh public interest considerations. The benefits of public-private partnerships are potential benefits—that is, they are not assured and can only be achieved by weighing them against potential costs and trade-offs through careful, comprehensive analysis to determine whether public-private partnerships are appropriate in specific circumstances and, if so, how best to implement them, and how best to protect the public interest. In considering the numerous issues surrounding the protection of the public interest, we reached the following conclusions in our February 2008 report on highway public-private partnerships: First, consideration of highway public-private partnerships could benefit from more consistent, rigorous, systematic, and up-front analysis. While highway public-private partnerships are fairly new in the United States, and although they are meant to serve the public interest, it is difficult to be confident that these interests are being protected when formal identification and consideration of public and national interests have been lacking and only limited up-front analysis of public interest issues using established criteria has been conducted. Partnerships to date have identified and protected the public interest largely through terms contained in concession contracts, including maintenance and expansion requirements, protections for the workforce, and oversight and monitoring mechanisms to ensure that private partners fulfilled their obligations. While these protections are important, governments in other countries, including Australia and the United Kingdom, have developed systematic approaches to identifying and evaluating public interest before agreements are entered into, including the use of public interest criteria, as well as assessment tools, and require their use when considering private investments in public infrastructure. For example, a state government in Australia uses a public interest test to determine how the public interest would be affected in eight specific areas, including whether the views and rights of affected communities have been heard and protected and whether the process is sufficiently transparent.
While similar tools have been used to some extent in the United States, their use has been more limited. Using up-front public interest analysis tools can also assist public agencies in determining the expected benefits and costs of a project and an appropriate means to deliver the project. Not using such tools may lead to certain aspects of protecting the public interest being overlooked. Second, fresh thinking is needed on the appropriate federal approach. DOT has done much to promote the benefits but comparatively little either to assist states and localities in weighing potential costs and trade-offs or to assess how potentially important national interests might be protected in highway public-private partnerships. This is in many respects a function of the design of the federal program, as few mechanisms exist to identify potential national interests in cases where federal funds have not been or will not be used. For example, although the Indiana Toll Road is part of the Interstate Highway System and most traffic on the road is interstate in nature, federal officials had little involvement in reviewing the terms of this concession agreement because minimal federal funds were used to construct it, and those funds were repaid to the federal government. The historic test of the presence of federal funding may have been relevant at a time when the federal government played a larger role in financing highways but may no longer be relevant when there are new players and multiple sources of financing, including potentially significant private money. Reexamining the federal role in transportation provides an opportunity to identify the emerging national public interests in highway public-private partnerships and determine how highway public-private partnerships fit in with national programs. On the basis of these conclusions, we recommended that Congress direct the Secretary of Transportation to develop and submit objective criteria for identifying national public interests in highway public-private partnerships, including any additional legal authority, guidance, or assessment tools that would be required, as appropriate. We are pleased to note that in a recent testimony before the House, the Secretary indicated a willingness to begin developing such criteria. This is no easy task, however. The recent Policy Commission report illustrates the challenges of identifying national public interests, as the Policy Commission’s recommendations for future restrictions—including limiting allowable toll increases and requiring concessionaires to share revenues with the public sector—stood in sharp contrast to the dissenting views of three commissioners. We believe any potential federal restrictions on highway public-private partnerships must be carefully crafted to avoid undermining the potential benefits that can be achieved. Reexamining the federal role in transportation provides an opportunity, we believe, for DOT to play a targeted role in ensuring that national interests are considered, as appropriate. The nation’s surface transportation programs are no longer producing the desired results. The reliability of the nation’s surface transportation system is declining as congestion continues to grow. Although infusing surface transportation programs with additional funding, especially in light of the projected shortfalls in the Highway Trust Fund, could be viewed as a quick and direct solution, past experience shows that increased funding for the program does not necessarily translate into improved performance.
Furthermore, the nation’s current fiscal outlook may make such solutions fiscally imprudent. In addition, before additional federal funds are committed to the nation’s surface transportation programs, we believe a fundamental reexamination of the program is warranted. Such a reexamination would require reviewing the results of surface transportation programs and testing their continued relevance and relative priority. Appropriate funding sources and financing mechanisms can then be tailored for programs that continue to be relevant in today’s environment and address a national interest, such as freight movement. Over the coming months, various options to restructure and fund surface transportation programs will likely be put forward by a range of transportation stakeholders. Ultimately, Congress and other federal policymakers will have to determine which option—or which combination of options—best meets the nation’s needs. There is no silver bullet that can solve the nation’s transportation challenges, and many of the options, such as allowing greater private-sector investment in the nation’s infrastructure, could be politically difficult to implement both nationally and locally. The principles that we identified provide a framework for evaluating these various options. Although the principles do not prescribe a specific approach to restructuring and funding the programs, they do provide key attributes that will help ensure that a restructured surface transportation program addresses current challenges. We will continue to assist the Congress as it works to evaluate the various options and develop a national transportation policy for the 21st century that improves the design of transportation programs, the delivery of services, and accountability for results. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information on this statement, please contact JayEtta Z. Hecker at (202) 512-2834 or heckerj@gao.gov. Individuals making key contributions to this testimony were Robert Ciszewski, Nikki Clowers, Steve Cohen, Barbara Lancaster, Matthew LaTour, and Nancy Lueke. Federal User Fees: A Design Guide, GAO-08-386SP. Washington, D.C.: May 29, 2008. Physical Infrastructure: Challenges and Investment Options for the Nation’s Infrastructure, GAO-08-763T. Washington, D.C.: May 8, 2008. Surface Transportation: Restructured Federal Approach Needed for More Focused, Performance-Based, and Sustainable Programs, GAO-08-400. Washington, D.C.: March 6, 2008. Highway Public-Private Partnerships: More Rigorous Up-front Analysis Could Better Secure Potential Benefits and Protect the Public Interest, GAO-08-44. Washington, D.C.: February 8, 2008. Surface Transportation: Preliminary Observations on Efforts to Restructure Current Program, GAO-08-478T. Washington, D.C.: February 6, 2008. Congressional Directives: Selected Agencies’ Processes for Responding to Funding Instructions, GAO-08-209. Washington, D.C.: January 31, 2008. Long-Term Fiscal Outlook: Action Is Needed to Avoid the Possibility of a Serious Economic Disruption in the Future, GAO-08-411T. Washington, D.C.: January 29, 2008. Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials, GAO-08-198. Washington, D.C.: January 8, 2008. Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008. 
A Call For Stewardship: Enhancing the Federal Government’s Ability to Address Key Fiscal and Other 21st Century Challenges. GAO-08-93SP. Washington, D.C.: December 17, 2007.
Transforming Transportation Policy for the 21st Century: Highlights of a Forum. GAO-07-1210SP. Washington, D.C.: September 19, 2007.
Surface Transportation: Strategies Are Available for Making Existing Road Infrastructure Perform Better. GAO-07-920. Washington, D.C.: July 26, 2007.
Intermodal Transportation: DOT Could Take Further Actions to Address Intermodal Barriers. GAO-07-718. Washington, D.C.: June 20, 2007.
Performance and Accountability: Transportation Challenges Facing Congress and the Department of Transportation. GAO-07-545T. Washington, D.C.: March 6, 2007.
High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
Highway Finance: States’ Expanding Use of Tolling Illustrates Diverse Challenges and Strategies. GAO-06-554. Washington, D.C.: June 28, 2006.
Highway Congestion: Intelligent Transportation Systems’ Promise for Managing Congestion Falls Short, and DOT Could Better Facilitate Their Strategic Use. GAO-05-943. Washington, D.C.: September 14, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 1, 2005.
Highway and Transit Investments: Options for Improving Information on Projects’ Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005.
Federal-Aid Highways: Trends, Effect on State Spending, and Options for Future Program Design. GAO-04-802. Washington, D.C.: August 31, 2004.
Marine Transportation: Federal Financing and a Framework for Infrastructure Investments. GAO-02-1033. Washington, D.C.: September 9, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. This published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation has reached a critical juncture with its current surface transportation policies and programs. Demand has outpaced the capacity of the system, resulting in increased congestion. In addition, without significant changes in funding levels or planned spending, the Highway Trust Fund--the major source of federal highway and transit funding--is projected to incur significant deficits in the years ahead. Exacerbating concerns about the solvency of the Highway Trust Fund is the federal government's bleak fiscal condition and outlook. As a result, other federal revenue sources may not be available to help solve the nation's current transportation challenges. This statement is based on a body of work that GAO has completed over the past several years for Congress. This testimony discusses (1) GAO's recent findings on the structure and performance of the current surface transportation program (GAO-08-400), (2) a framework to assess proposals for restructuring of the surface transportation program, (3) potential options to fund investments in the surface transportation system, and (4) GAO's recent findings on the benefits, costs, and trade-offs of using public-private partnerships to help fund transportation investments (GAO-08-44). Since federal funding for the interstate system was established in 1956, the federal role in surface transportation has expanded to include broader goals, more programs, and a variety of program structures. Consequently, the goals of current programs are numerous and sometimes conflicting, and the federal role in these programs is unclear. For example, federal programs do not effectively address key transportation challenges, such as increasing congestion and freight demand. Many surface transportation programs are also not linked to performance of the transportation system or of the grantees, and programs often do not employ the best tools and approaches. Finally, the fiscal sustainability of the numerous highway, transit, and safety programs funded by the Highway Trust Fund is in doubt, because spending from the fund has increased without commensurate increases in revenues. A number of principles can help guide the assessment of proposals to restructure and fund federal surface transportation programs. These principles include (1) ensuring goals are well defined and focused on the national interest, (2) ensuring the federal role in achieving each goal is clearly defined, (3) ensuring accountability for results by entities receiving federal funds, (4) employing the best tools and approaches to improve results and emphasize return on targeted federal investment, and (5) ensuring fiscal sustainability. A range of options could be used to meet the growing demand for additional investment in the surface transportation system. There are two revenue sources for these additional investments: taxes and fees. Financing mechanisms, such as bonding and revolving funds, could also be used to fund transportation infrastructure projects when tax and user fee approaches are not sufficient to meet demands. However, these financing mechanisms are all forms of debt that ultimately must be repaid with interest by the general population through tax increases or reductions in government services. Each of these options has different merits and challenges, and the selection of any of them will likely involve trade-offs among different policy goals.
Highway public-private partnerships show promise as a viable alternative, where appropriate, to help meet growing and costly transportation demands. The highway public-private partnerships created to date have resulted in advantages from the perspective of state and local governments, such as the construction of new infrastructure without using public funding. However, highway public-private partnerships also entail potential costs and risks, including the reality that funds from public-private partnerships are largely a new source of borrowed funds--a form of privately issued debt that must be repaid to private investors. Ultimately, the extent to which public-private partnerships can be used to help meet the nation's transportation funding challenges will depend on the ability of states to weigh potential benefits against potential costs and trade-offs to determine whether public-private partnerships are appropriate in specific circumstances--and if so--how best to implement them and protect the public interest.
Information security is an important consideration for any organization that depends on information systems to carry out its mission. The dramatic expansion in computer interconnectivity and the exponential increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. However, risks are significant, and they are growing. The number of computer security incidents reported to the CERT Coordination Center® (CERT/CC) rose from 9,859 in 1999 to 21,756 in 2000. For the first six months of 2001, 15,476 incidents have been reported. As the number of individuals with computer skills has increased, more intrusion or “hacking” tools have become readily available and relatively easy to use. A potential hacker can literally download tools from the Internet and “point and click” to start a hack. According to a recent National Institute of Standards and Technology (NIST) publication, hackers post 30 to 40 new tools to hacking sites on the Internet every month. The successful cyber attacks against such well-known U.S. e-commerce Internet sites as eBay, Amazon.com, and CNN.com by a 15-year-old “script kiddie” in February 2000 illustrate the risks. Without proper safeguards, these developments make it easier for individuals and groups with malicious intentions to gain unauthorized access to systems and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other organizations’ sites. Government officials are increasingly concerned about federal computer systems, which process, store, and transmit enormous amounts of sensitive data and are indispensable to many federal operations. The federal government’s systems are riddled with weaknesses that continue to put critical operations at risk. Since October 1998, the Federal Computer Incident Response Center’s (FedCIRC) records have shown an increasing trend in the number of attacks targeting government systems. In 1998, FedCIRC documented 376 incidents affecting 2,732 federal civilian systems and 86 military systems. In 2000, the number of attacks rose to 586 incidents affecting 575,568 federal systems and 148 of their military counterparts. Moreover, these numbers reflect only reported incidents; FedCIRC estimates that as many as 80 percent of actual security incidents go unreported. According to FedCIRC, 155 of the incidents reported, which occurred at 32 agencies, resulted in what is known as a “root compromise.” For at least five of the root compromises, government officials were able to verify that access to sensitive information had been obtained. How well federal agencies are addressing these risks is a topic of increasing interest in the executive and legislative branches. In January 2000, President Clinton issued a National Plan for Information Systems Protection and designated computer security and critical infrastructure protection a priority management objective in his fiscal year 2001 budget. The new administration, federal agencies, and private industry have collaboratively begun to prepare a new version of the national plan that will outline an integrated approach to computer security and critical infrastructure protection.
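FedCIRC's caution that as many as 80 percent of incidents go unreported implies a simple back-of-the-envelope adjustment, sketched below using the 2000 figure cited above; the 80 percent figure is FedCIRC's upper-bound estimate, not a measured rate.

```python
# If reported incidents represent only about 20 percent of actual incidents
# (i.e., up to 80 percent go unreported), the implied total can be estimated
# by scaling the reported count.

reported_2000 = 586        # incidents reported to FedCIRC in 2000 (cited above)
unreported_share = 0.80    # FedCIRC's upper-bound estimate

estimated_total = reported_2000 / (1 - unreported_share)
print(f"Implied total incidents in 2000: about {estimated_total:.0f}")
```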
The Congress, too, is increasingly interested in computer security, as evidenced by important hearings held during 1999, 2000, and 2001 on ways to strengthen information security practices throughout the federal government and on progress at specific agencies in addressing known vulnerabilities. Furthermore, in October 2000, the Congress included government information security reform provisions in the fiscal year 2001 National Defense Authorization Act. These provisions seek to ensure proper management and security for federal information systems by calling for agencies to adopt risk management practices that are consistent with those summarized in our 1998 Executive Guide. The provisions also require annual agency program reviews and Inspector General (IG) evaluations that must be reported to the Office of Management and Budget (OMB) as part of the budget process. The federal CIO Council and others have also initiated several projects that are intended to promote and support security improvements to federal information systems. Over the past year, the CIO Council, working with NIST, OMB, and us, developed the Federal Information Technology Security Assessment Framework. The framework provides agencies with a self-assessment methodology to determine the current status of their security programs and to establish targets for improvement. OMB has instructed agencies to use the framework to fulfill their annual assessment and reporting obligations. Since 1996, our analyses of information security at major federal agencies have shown that systems are not being adequately protected. Our previous reports, and those of agency IGs, describe persistent computer security weaknesses that place a variety of critical federal operations at risk of inappropriate disclosures, fraud, and disruption. This body of audit evidence has led us, since 1997, to designate computer security a governmentwide high-risk area. Our most recent summary analysis of federal information systems found that significant computer security weaknesses had been identified in 24 of the largest federal agencies, including Commerce. During December 2000 and January 2001, Commerce’s IG also reported significant computer security weaknesses in several of the department’s bureaus and, in February 2001, reported information security as a material weakness affecting the department’s ability to produce accurate data for financial statements. The report stated that there were weaknesses in several areas, including entitywide security management, access controls, software change controls, segregation of duties, and service continuity planning. Moreover, a recent IG assessment of the department’s information security program found fundamental weaknesses in the areas of policy and oversight. Also, the IG designated information security as one of the top ten management challenges for the department. Commerce’s missions are among the most diverse of the federal government’s cabinet departments, covering a wide range of responsibilities that include observing and managing natural resources and the environment; promoting commerce, regional development, and scientific research; and collecting, analyzing, and disseminating statistical information. Commerce employs about 40,000 people in 14 operating bureaus with numerous offices in the U.S. and overseas, each pursuing disparate programs and activities. Information technology (IT) is a critical tool for Commerce to support these missions. 
The department spends significant resources—reportedly over $1.5 billion in fiscal year 2000—on IT systems and services. Measured by the share of total agency expenditures devoted to IT, Commerce ranks among the top agencies in the federal government, with 17 percent of its $9-billion fiscal year 2000 budget reported as spent on IT. A primary mission of Commerce is to promote job creation and improved living standards for all Americans by furthering U.S. economic growth, and the seven bureaus we reviewed support this mission through a wide array of programs and services. Commerce uses IT to generate and disseminate some of the nation’s most important economic information. The International Trade Administration (ITA) promotes the export of U.S. goods and services—which amounted to approximately $1.1 trillion in fiscal year 2000. Millions of American jobs depend on exports, and with 96 percent of the world’s consumers living outside U.S. borders, international trade is increasingly important to supporting this mission. The Economics and Statistics Administration (ESA) develops, prepares, analyzes, and disseminates important indicators of the U.S. economy that present basic information on such key issues as economic growth, regional development, and the U.S. role in the world economy. This information is of paramount interest to researchers, businesses, and policymakers. The Bureau of Export Administration (BXA), whose efforts supported sales of approximately $4.2 billion in fiscal year 1999, assists in stimulating the growth of U.S. exports while protecting national security interests by helping to stop the proliferation of weapons of mass destruction. Sensitive data, such as those relating to national security, nuclear proliferation, missile technology, and chemical and biological warfare, reside in this bureau’s systems. Commerce’s ability to fulfill its mission depends on the confidentiality, integrity, and availability of this sensitive information. For example, export data residing in the BXA systems reflect technologies that have both civil and military applications; the misuse, modification, or deletion of these data could threaten our national security or public safety and affect foreign policy. Much of these data are also business proprietary. If these data were compromised, a business could not only lose its market share, but dangerous technologies might also end up in the hands of renegade nations that threaten our national security or that of other nations. Commerce’s IT infrastructure is decentralized. Although the Commerce IT Review Board approves major acquisitions, most bureaus have their own IT budgets and act independently to acquire, develop, operate, and maintain their own infrastructure. For example, Commerce has 14 different data centers, diverse hardware platforms and software environments, and 20 independently managed e-mail systems. The bureaus also develop and control their own individual networks to serve their specific needs. These networks vary greatly in size and complexity. For example, one bureau has as many as 155 local area networks and 3,000 users spread over 50 states and 80 countries. Some of these networks are owned, operated, and managed by individual programs within the same bureau. Because Commerce does not have a single, departmentwide common network infrastructure to facilitate data communications across the department, the bureaus have established their own access paths to the Internet, which they rely on to communicate with one another.
In April 2001, the department awarded a contract for a $4 million project to consolidate the individual bureaus’ local area networks within its headquarters building onto a common network infrastructure. However, until this project is completed, each of the bureaus is expected to continue to configure, operate, and maintain its own unique networks. Recognizing the importance of its data and operations, in September 1993 Commerce established departmentwide information security policies that defined and assigned a full set of security responsibilities, ranging from the department level down to individual system owners and users within the bureaus. Since 1998, the Commerce CIO position has been responsible for developing and implementing the department’s information security program. An information security manager, under the direction of the CIO’s Office of Information Policy, Planning, and Review, is tasked with carrying out the responsibilities of the program. The CIO’s responsibilities for the security of classified systems have been delegated to the Office of Security. In the last 2 years, the CIO introduced several initiatives that are essential to improving the security posture of the department. After a 1999 contracted evaluation of the bureaus’ security plans determined that 43 percent of Commerce’s most critical assets did not have current information system security plans, the CIO issued a memorandum calling for the bureaus to prepare security plans that comply with federal regulations. Also, in May 2000, the Office of the CIO performed a summary evaluation of the status of all the bureaus’ information security based on the bureaus’ own self-assessments. The results determined that overall information security program compliance was minimal, that no formal information security awareness and training programs were provided by the bureaus, and that incident response capabilities were either absent or informal. The Commerce IG indicated that subsequent meetings between the Office of the CIO and the bureaus led to improvements. The Office of the CIO plans to conduct another evaluation this year and, based on a comparison with last year’s results, measure the bureaus’ success in strengthening their security postures. Finally, for the past year, the CIO attempted to restructure the department’s IT management to increase his span of control over information security within the bureaus by enforcing his oversight authority and involvement in budgeting for IT resources. The CIO resigned in May 2001 and, in June 2001, after completion of our fieldwork, the Secretary of Commerce approved a high-level IT restructuring plan. The acting CIO stated that Commerce is developing a more detailed implementation plan. A basic management objective for any organization is the protection of its information systems and critical data from unauthorized access. Organizations accomplish this objective by establishing controls that limit access to only authorized users, effectively configuring their operating systems, and securely implementing networks. However, our tests identified weaknesses in each of these control areas in all of the Commerce bureaus we reviewed. We demonstrated that individuals, both external and internal to Commerce, could compromise security controls to gain extensive unauthorized access to Commerce networks and systems. 
These weaknesses place the bureaus’ information systems at risk of unauthorized access, which could lead to the improper disclosure, modification, or deletion of sensitive information and the disruption of critical operations. As previously noted, because of the sensitivity of specific weaknesses, we plan to issue a report designated for “Limited Official Use,” which describes in more detail each of the computer security weaknesses identified and offers specific recommendations for correcting them. Effective system access controls provide mechanisms that require users to identify themselves and authenticate their identity, limit the use of system administrator capabilities to authorized individuals, and protect sensitive system and data files. As with many organizations, passwords are Commerce’s primary means of authenticating user identity. Because system administrator capabilities provide the ability to read, modify, or delete any data or files on the system and modify the operating system to create access paths into the system, such capabilities should be limited to the minimum access levels necessary for systems personnel to perform their duties. Also, information can be protected by using controls that limit an individual’s ability to read, modify, or delete information stored in sensitive system files. One of the primary methods to prevent unauthorized access to information system resources is through effective management of user IDs and passwords. To accomplish this objective, organizations should establish controls that include requirements to ensure that well-chosen passwords are required for user authentication, passwords are changed periodically, the number of invalid password attempts is limited to preclude password guessing, and the confidentiality of passwords is maintained and protected. All Commerce bureaus reviewed were not effectively managing user IDs and passwords to sufficiently reduce the risk that intruders could gain unauthorized access to its information systems to (1) change system access and other rules, (2) potentially read, modify, and delete or redirect network traffic, and (3) read, modify, and delete sensitive information. Specifically, systems were either not configured to require passwords or, if passwords were required, they were relatively easy to guess. For example, powerful system administrator accounts did not require passwords, allowing anyone who could connect to certain systems through the network to log on as a system administrator without having to use a password, systems allowed users to change their passwords to a blank password, completely circumventing the password control function, passwords were easily guessed words, such as “password,” passwords were the same as the user’s ID, and commonly known default passwords set by vendors when systems were originally shipped had never been changed. Although frequent password changes reduce the risk of continued unauthorized use of a compromised password, systems in four of the bureaus reviewed had a significant number of passwords that never required changing or did not have to be changed for 273 years. Also, systems in six of the seven bureaus did not limit the number of times an individual could try to log on to a user ID. Unlimited attempts allow intruders to keep trying passwords until a correct password is discovered. 
Further, none of the Commerce bureaus reviewed adequately protected the passwords of their system users through measures such as encryption, as illustrated by the following examples: User passwords were stored in readable text files that could be viewed by all users on one bureau’s systems. Files that stored user passwords were not protected from being copied by intruders, who could then take the copied password files and decrypt user passwords. The decrypted passwords could then be used to gain unauthorized access to systems by intruders masquerading as legitimate users. Over 150 users of one system could read the unencrypted password of a powerful system administrator’s account. System administrators perform important functions in support of the operations of computer systems. These functions include defining security controls, granting users access privileges, changing operating system configurations, and monitoring system activity. In order to perform these functions, system administrators have powerful privileges that enable them to manipulate operating system and security controls. Privileges to perform these system administration functions should be granted only to employees who require such privileges to perform their responsibilities and who are specifically trained to understand and exercise those privileges. Moreover, the level of privilege granted to employees should not exceed the level required for them to perform their assigned duties. Finally, systems should provide accountability for the actions of system administrators on the systems. However, Commerce bureaus granted the use of excessive system administration privileges to employees who did not require such privileges to perform their responsibilities and who were not trained to exercise them. For example, a very powerful system administration privilege that should be used only in exceptional circumstances, such as recovery from a power failure, was granted to 20 individuals. These 20 individuals had the ability to access all of the information stored on the system, change important system configurations that could affect the system’s reliability, and run any program on the computer. Further, Commerce management also acknowledged that not all staff with access to this administrative privilege had been adequately trained. On other important systems in all seven bureaus, system administrators were sharing user IDs and passwords, so the systems could not provide an audit trail of access by system administrators, thereby limiting accountability. By not effectively controlling the number of staff who exercise system administrator privileges, restricting the level of such privileges granted to those required to perform assigned duties, or ensuring that only well-trained staff have these privileges, Commerce is increasing the risk that unauthorized activity could occur and the security of sensitive information could be compromised. Access privileges to individual critical systems and sensitive data files should be restricted to authorized users. Not only does this restriction protect files that may contain sensitive information from unauthorized access, but it also helps prevent intruders who may have successfully penetrated one system from significantly extending their unauthorized access and activities to other systems. Examples of access privileges are the capabilities to read, modify, or delete a file.
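One routine check implied by this discussion of access privileges is verifying that sensitive files are not readable or writable by every user on a system. The minimal sketch below applies to Unix-style permission bits only; the directory path is a hypothetical placeholder, and this is a generic illustration rather than the testing GAO performed.

```python
# Minimal check for files whose permissions grant read or write access to
# all users ("everyone"), using Unix-style permission bits. The directory
# below is a hypothetical placeholder.

import os
import stat

SENSITIVE_DIR = "/data/export_licenses"   # hypothetical path

for root, _dirs, files in os.walk(SENSITIVE_DIR):
    for name in files:
        path = os.path.join(root, name)
        mode = os.stat(path).st_mode
        if mode & stat.S_IWOTH:
            print(f"world-writable: {path}")
        elif mode & stat.S_IROTH:
            print(f"world-readable: {path}")
```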
Privileges can be granted to individual users, to groups of users, or to everyone who accesses the system. Six of the seven bureaus’ systems were not configured to appropriately restrict access to sensitive system and/or data files. For example, critical system files could be modified by all users to allow them to bypass security controls. Also, excessive access privileges to sensitive data files such as export license applications were granted. Systems configured with excessive file access privileges are extremely vulnerable to compromise because such configurations could enable an intruder to read, modify, or delete sensitive system and data files, or to disrupt the availability and integrity of the system. Operating system controls are essential to ensure that the computer systems and security controls function as intended. Operating systems are relied on by all the software and hardware in a computer system. Additionally, all users depend on the proper operation of the operating system to provide a consistent and reliable processing environment, which is essential to the availability and reliability of the information stored and processed by the system. Operating system controls should limit the extent of information that systems provide to facilitate system interconnectivity. Operating systems should be configured to help ensure that systems are available and that information stored and processed is not corrupted. Controls should also limit the functions of the computer system to prevent insecure system configurations or the existence of functions not needed to support the operations of the system. If functions are not properly controlled, they can be used by intruders to circumvent security controls. To facilitate interconnectivity between computer systems, operating systems are configured to provide descriptive and technical information, such as version numbers and system names, to other computer systems and individuals when connections are being established. At the same time, however, systems should be configured to limit the amount of information that is made available to other systems and unidentified individuals because this information can be misused by potential intruders to learn the characteristics and vulnerabilities of that system to assist in intrusions. Systems in all the bureaus reviewed were not configured to limit the amount of system information exposed to potential attackers. The configuration of Commerce systems provided excessive amounts of information to anyone, including external users, without the need for authentication. Our testing demonstrated that potential attackers could collect information about systems, such as computer names, types of operating systems, functions, version numbers, user information, and other information that could be useful to circumvent security controls and gain unauthorized access. The proper configuration of operating systems is important to ensuring the reliable operation of computers and the continuous availability and integrity of critical information. Operating systems should be configured so that the security controls throughout the system function effectively and the system can be depended on to support the organization’s mission. Commerce bureaus did not properly configure operating systems to ensure that systems would be available to support bureau missions or prevent the corruption of the information relied on by management and the public.
For example, in a large computer system affecting several bureaus, there were thousands of important programs that had not been assigned unique names. In some instances, as many as six different programs shared the same name, many of them different versions of the same program. Although the complexity of such a system may typically require the installation of some identically named programs, and although some authorized programs must be able to bypass security in order to operate, an excessive number of such programs was installed on this system, many of which were capable of bypassing security controls. Because these different programs are identically named, unintended programs could be inadvertently run, potentially resulting in the corruption of data or disruption of system operations. Also, because these powerful programs are duplicated, there is an increased likelihood that they could be misused to bypass security controls. In this same system, critical parts of the operating system were shared by the test and production systems used to process U.S. export information. Because these critical parts were shared, changes made in the test system could also affect the production system. Consequently, changes could be made in the test system that would cause the production system to stop operating normally and shut down. Changes in the test system could also cause important Commerce data in the production system to become corrupted. Commerce management acknowledged that the isolation between these two systems needed to be strengthened to mitigate these risks. Operating system functions should be limited to support only the capabilities needed by each specific computer system. Moreover, these functions should be appropriately configured. Unnecessary operating system functions can be used to gain unauthorized access to a system and target that system for a denial-of-service attack. Poorly configured operating system functions can allow individuals to bypass security controls and access sensitive information without requiring proper identification and authentication. Unnecessary and poorly configured system functions existed on important computer systems in all the bureaus we reviewed. For example, unnecessary functions allowed us to gain access to a system from the Internet. Through such access and other identified weaknesses, we were able to gain system administration privileges on that system and subsequently gain access to other systems within other Commerce bureaus. Also, poorly configured functions would have allowed users to bypass security controls and gain unrestricted access to all programs and data. Networks are a series of interconnected IT devices and software that allow groups of individuals to share data, printers, communications systems, electronic mail, and other resources. They provide the entry point for access to electronic information assets and provide users with access to the information technologies they need to satisfy the organization’s mission. Controls should restrict access to networks from sources external to the network. Controls should also limit the use of systems from sources internal to the network to authorized users for authorized purposes. External threats include individuals outside an organization attempting to gain unauthorized access to an organization’s networks using the Internet, other networks, or dial-up modems.
Another form of external threat is flooding a network with large volumes of access requests so that the network is unable to respond to legitimate requests, one type of denial-of-service attack. External threats can be countered by implementing security controls on the perimeters of the network, such as firewalls, that limit user access and data interchange between systems and users within the organization’s network and systems and users outside the network, especially on the Internet. An example of a perimeter defense is allowing only pre-approved computer systems from outside the network to exchange certain types of data with computer systems inside the network. External network controls should guard the perimeter of the network from connections with other systems and access by individuals who are not authorized to connect with and use the network. Internal threats come from sources that are within an organization’s networks, such as a disgruntled employee with access privileges who attempts to perform unauthorized activities. Also, an intruder who has successfully penetrated a network’s perimeter defenses becomes an internal threat when the intruder attempts to compromise other parts of an organization’s network security as a result of gaining access to one system within the network. For example, at Commerce, users of one bureau who had no business need to access export license information on another bureau’s network should not have had network connections to that system. External network security controls should prevent unauthorized access from outside threats, but if those controls fail, internal network security controls should also prevent the intruder from gaining unauthorized access to other computer systems within the network. None of the Commerce bureaus reviewed had effective external and internal network security controls. Individuals, both within and outside Commerce, could compromise external and internal security controls to gain extensive unauthorized access to Commerce networks and systems. Bureaus employed a series of external control devices, such as firewalls, in some, but not all, of the access paths to their networks. However, these controls did not effectively prevent unauthorized access to Commerce networks from the Internet or through poorly controlled dial-up modems that bypass external controls. For example, four bureaus had not configured their firewalls to adequately protect their information systems from intruders on the Internet. Also, six dial-up modems were installed so that anyone could connect to the bureaus’ networks without having to use a password, thereby circumventing the security controls provided by existing firewalls. Our testing demonstrated that, once access was gained by an unauthorized user on the Internet or through a dial-up modem to one bureau’s networks, that intruder could circumvent ineffective internal network controls to gain unauthorized access to other, connected networks within Commerce. Such weak internal network controls could allow an unauthorized intruder or authorized user on one bureau’s network to change the configuration of other bureaus’ network controls so that the user could observe network traffic, including passwords and sensitive information that Commerce transmits in readable clear text, and disrupt network operations.
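The sketch below illustrates, in simplified form, the kind of reconnaissance such weaknesses invite: connecting to common service ports and recording whatever descriptive information a system volunteers to an unauthenticated caller. The host name and port list are placeholders added for illustration; this is not the tool used in our testing.

    # Hypothetical sketch of service "banner" collection. Systems that volunteer
    # version numbers and other descriptive information to unauthenticated
    # connections make it easier for intruders to identify exploitable weaknesses.

    import socket

    TARGET_HOST = "host.example.gov"   # placeholder, not an actual Commerce system
    PORTS = [21, 23, 25, 80]           # common services that often announce versions

    for port in PORTS:
        try:
            with socket.create_connection((TARGET_HOST, port), timeout=3) as conn:
                conn.settimeout(3)
                banner = conn.recv(256).decode(errors="replace").strip()
                print(f"Port {port} open; banner: {banner or '(none)'}")
        except OSError:
            print(f"Port {port} closed, filtered, or silent")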
The external and internal security controls of the different Commerce bureau networks did not provide a consistent level of security in part because bureaus independently configured and operated their networks as their own individual networks. For example, four of the bureaus we reviewed had their own independently controlled access points to the Internet. Because the different bureaus’ networks are actually logically interconnected and perform as one large interconnected network, the ineffective network security controls of one bureau jeopardize the security of other bureaus’ networks. Weaknesses in the external and internal network controls of the individual bureaus heighten the risk that outside intruders with no prior knowledge of bureau user IDs or passwords, as well as Commerce employees with malicious intent, could exploit the other security weaknesses in access and operating system controls discussed above to misuse, improperly disclose, or destroy sensitive information. In addition to logical access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s data. These information system controls include policies, procedures, and techniques to provide appropriate segregation of duties among computer personnel, prevent unauthorized changes to application programs, and ensure the continuation of computer processing operations in case of unexpected interruption. The Commerce bureaus had weaknesses in each of these areas that heightened the risks already created by their lack of effective access controls. A fundamental technique for safeguarding programs and data is to segregate the duties and responsibilities of computer personnel to reduce the risk that errors or fraud will occur and go undetected. OMB A-130, Appendix III, requires that roles and responsibilities be divided so that a single individual cannot subvert a critical process. Once policies and job descriptions that support the principles of segregation of duties have been established, access controls can then be implemented to ensure that employees perform only compatible functions. None of the seven bureaus in our review had specific policies documented to identify and segregate incompatible duties, and bureaus had assigned incompatible duties to staff. For example, staff were performing incompatible computer operations and security duties. In another instance, the bureau’s security officer had the dual role of also being the bureau’s network administrator. These two functions are not compatible since the individual’s familiarity with system security could then allow him or her to bypass security controls either to facilitate performing administrative duties or for malicious purposes. Furthermore, none of the bureaus reviewed had implemented processes and procedures to mitigate the increased risks of personnel with incompatible duties. Specifically, none of the bureaus had a monitoring process to ensure appropriate segregation of duties, and management did not review access activity. Until Commerce restricts individuals from performing incompatible duties and implements compensating access controls, such as supervision and review, Commerce’s sensitive information will face increased risks of improper disclosure, inadvertent or deliberate misuse, and deletion, all of which could occur without detection. Also important for an organization’s information security is ensuring that only authorized and fully tested software is placed in operation. 
To make certain that software changes are needed, work as intended, and do not result in the loss of data and program integrity, such changes should be documented, authorized, tested, and independently reviewed. Federal guidelines emphasize the importance of establishing controls to monitor the installation of and changes to software to ensure that software functions as expected and that a historical record is maintained of all changes. We have previously reported on Commerce’s lack of policies on software change controls. Specific key controls not addressed were (1) operating system software changes, monitoring, and access and (2) controls over application software libraries including access to code, movement of software programs, and inventories of software. Moreover, implementation was delegated to the individual bureaus, which had not established written policies or procedures for managing software changes. Only three of the seven bureaus we reviewed mentioned software change controls in their system security plans, while none of the bureaus had policies or procedures for controlling the installation of software. Such policies are important to ensure that software changes do not adversely affect operations or the integrity of the data on the system. Without proper software change controls, there are risks that security features could be inadvertently or deliberately omitted or rendered inoperable, processing irregularities could occur, or malicious code could be introduced. Organizations must take steps to ensure that they are adequately prepared to cope with a loss of operational capability due to earthquakes, fires, sabotage, or other disruptions. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested recovery plan that covers all key computer operations. Such a plan is critical for helping to ensure that information system operations and data can be promptly restored in the event of a service disruption. OMB Circular A-130, Appendix III, requires that agency security plans assure that there is an ability to restore service sufficient to meet the minimal needs of users. Commerce policy also requires a backup or alternate operations strategy. The Commerce bureaus we reviewed had not developed comprehensive plans to ensure the continuity of service in the event of a service disruption. Described below are examples of service continuity weaknesses we identified at the seven Commerce bureaus. None of the seven bureaus had completed recovery plans for all their sensitive systems. Although one bureau had developed two recovery plans, one for its data center and another for its software development installation center, the bureau did not have plans to cover disruptions to the rest of its critical systems, including its local area network. Systems at six of the seven bureaus did not have documented backup procedures. One bureau stated that it had an agreement with another Commerce bureau to back it up in case of disruptions; however, this agreement had not been documented. One bureau stated in its backup strategy that tapes used for system recovery are neither stored off-site nor protected from destruction. For example, backup for its network file servers is kept in a file cabinet in a bureau official’s supply room, and backup tapes for a database and web server are kept on the shelf above the server. In case of a destructive event, the backups could be subject to the same damage as the primary files. 
Two bureaus had no backup facilities for key network devices such as firewalls. Until each of the Commerce bureaus develops and fully tests comprehensive recovery plans for all of its sensitive systems, there is little assurance that, in the event of service interruptions, many functions of the organization will not effectively cease and critical data will not be lost. As our government becomes increasingly dependent on information systems to support sensitive data and mission-critical operations, it is essential that agencies protect these resources from misuse and disruption. An important component of such protective efforts is the capability to promptly identify and respond to incidents of attempted system intrusions. Agencies can better protect their information systems from intruders by developing formalized mechanisms that integrate incident handling functions with the rest of the organizational security infrastructure. Through such mechanisms, agencies can address how to (1) prevent intrusions before they occur, (2) detect intrusions as they occur, (3) respond to successful intrusions, and (4) report intrusions to staff and management. Although essential to protecting resources, Commerce bureau incident handling capabilities are inadequate in preventing, detecting, responding to, and reporting incidents. Because the bureaus have not implemented comprehensive and consistent incident handling capabilities, decision-making may be haphazard when a suspected incident is detected, thereby impairing responses and reporting. Thus, there is little assurance that unauthorized attempts to access sensitive information will be identified and appropriate actions taken in time to prevent or minimize damage. Until adequate incident detection and response capabilities are established, there is a greater risk that intruders could be successful in copying, modifying, or deleting sensitive data and disrupting essential operations. Accounting for and analyzing computer security incidents are effective ways for organizations to better understand threats to their information systems. Such analyses can also pinpoint vulnerabilities that need to be addressed so that they will not be exploited again. OMB Circular A-130, Appendix III, requires agencies to establish formal incident response mechanisms dedicated to evaluating and responding to security incidents in a manner that protects their own information and helps to protect the information of others who might be affected by the incident. These formal incident response mechanisms should also share information concerning common vulnerabilities and threats within the organization as well as with other organizations. By establishing such mechanisms, agencies help to ensure that they can more effectively coordinate their activities when incidents occur. Although the Commerce CIO issued a July 1999 memorandum to all bureau CIOs outlining how to prevent, detect, respond to, and report incidents, the guidance has been inconsistently implemented. Six of the seven bureaus we reviewed have only ad hoc processes and procedures for handling incidents. None have established and implemented all of the requirements of the memo. Furthermore, Commerce does not have a centralized function to coordinate the handling of incidents on a departmentwide basis. Two preventive measures for deterring system intrusions are to install (1) software updates to correct known vulnerabilities and (2) messages warning intruders that their activities are punishable by law.
First, federal guidance, industry advisories, and best practices all stress the importance of installing updated versions of operating systems and the software that supports system operations to protect against vulnerabilities that have been discovered in previously released versions. If new versions have not yet been released, “patches” that fix known flaws are often readily available and should be installed in the interim. Updating operating systems and other software to correct these vulnerabilities is important because once vulnerabilities are discovered, technically sophisticated hackers write scripts to exploit them and often post these scripts to the Internet for the widespread use of lesser skilled hackers. Since these scripts are easy to use, many security breaches happen when intruders take advantage of vulnerabilities for which patches are available but system administrators have not applied the patches. Second, Public Law 99-74 requires that a warning message be displayed upon access to all federal computer systems notifying users that unauthorized use is punishable by fines and imprisonment. Not only does the absence of a warning message fail to deter potential intruders, but, according to the law, pursuing and prosecuting intruders is more difficult if they have not been previously made fully aware of the consequences of their actions. Commerce has not fully instituted these two key measures to prevent incidents. First, many bureau systems do not have system software that has been updated to address known security exposures. For example, during our review, we discovered 20 systems with known vulnerabilities for which patches were available but not installed. Moreover, all the bureaus we reviewed were still running older versions of software used on critical control devices that manage network connections. Newer versions of software are available that correct the known security flaws of the versions that were installed. Second, in performing our testing of network security, we observed that warning messages had not been installed for several network paths into Commerce systems that we tested. Even though strong controls may not block all intrusions, organizations can reduce the risks associated with such events if they take steps to detect intrusions and the consequent misuse before significant damage can be done. Federal guidance emphasizes the importance of using detection systems to protect systems from the threats associated with increasing network connectivity and reliance on information systems. Additionally, federally funded activities, such as CERT/CC, the Department of Energy’s Computer Incident Advisory Capability, and FedCIRC are available to assist organizations in detecting and responding to incidents. Although the CIO’s July memo directs Commerce bureaus to monitor their information systems to detect unusual or suspicious activities, all the bureaus we reviewed were either not using monitoring programs or had only partially implemented their capabilities. For example, only two of the bureaus had installed intrusion detection systems. Also, system and network logs frequently had not been activated or were not reviewed to detect possible unauthorized activity. Moreover, modifications to critical operating system components were not logged, and security reports detailing access to sensitive data and resources were not sent to data owners for their review. 
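As a simplified illustration of the log review discussed above, the sketch below scans an authentication log for repeated failed logons from the same source, one basic indicator of attempted password guessing. The log file name, record format, and threshold are assumptions made for the example; production monitoring would normally rely on the operating system’s audit facilities and a dedicated intrusion detection system.

    # Hypothetical sketch: review an authentication log for repeated failed logons,
    # a basic form of the monitoring the bureaus had not implemented.
    # The log format assumed here is: timestamp, result, user, source address.

    from collections import Counter

    FAILED_THRESHOLD = 5   # flag sources with this many failures (illustrative value)

    failures = Counter()
    with open("auth.log") as log:            # assumed log file name
        for line in log:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "FAILED":
                source = fields[3]
                failures[source] += 1

    for source, count in failures.items():
        if count >= FAILED_THRESHOLD:
            print(f"Possible password-guessing attempt: {count} failures from {source}")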
The fact that bureaus we reviewed detected our activities only four times during the 2 months that we performed extensive external testing of Commerce networks, which included probing over 1,000 system devices, indicates that, for the most part, they are unaware of intrusions. For example, although we spent several weeks probing one bureau’s networks and obtained access to many of its systems, our activities were never detected. Moreover, during testing we identified evidence of hacker activity that Commerce had not previously detected. Without monitoring their information systems, the bureaus cannot know how, when, and who performs specific computer activities, be aware of repeated attempts to bypass security, or detect suspicious patterns of behavior such as two users with the same ID and password logged on simultaneously or users with system administrator privileges logged on at an unexpected time of the day or night. As a result, the bureaus have little assurance that potential intrusions will be detected in time to prevent or, at least, minimize damage. The CIO’s July memo also outlines how the bureaus are to respond to detected incidents. Instructions include responses such as notifying appropriate officials, deploying an on-site team to survey the situation, and isolating the attack to learn how it was executed. Only one of the seven bureaus reviewed has documented response procedures. Consequently, we experienced inconsistent responses when our testing was detected. For example, one bureau responded to our scanning of their systems by scanning ours in return. In another bureau, a Commerce employee who detected our testing responded by launching a software attack against our systems. In neither case was bureau management previously consulted or informed of these responses. The lack of documented incident response procedures increases the risk of inappropriate responses. For example, employees could take no action, take insufficient actions that fail to limit potential damage, take overzealous actions that unnecessarily disrupt critical operations, or take actions, such as launching a retaliatory attack, that could be considered improper. The CIO’s July memo specifically requires bureau employees who suspect an incident or violation to contact their supervisor and the bureau security officer, who should report the incident to the department’s information security manager. Reporting detected incidents is important because this information provides valuable input for risk assessments, helps in prioritizing security improvement efforts, and demonstrates trends of threats to an organization as a whole. The bureaus we reviewed have not been reporting all detected incidents. During our 2-month testing period, 16 incidents were reported by the seven bureaus collectively, 10 of which were generated to report computer viruses. Four of the other six reported incidents related to our testing activities, one of which was reported after our discovery of evidence of a successful intrusion that Commerce had not previously detected and reported. However, we observed instances of detected incidents that were not reported to bureau security officers or the department’s information security manager. For example, the Commerce employees who responded to our testing by targeting our systems in the two instances discussed above did not report either of the two incidents to their own bureau’s security officer. 
By not reporting incidents, the bureaus lack assurance that identified security problems have been tracked and eliminated and the targeted system restored and validated. Furthermore, information about incidents could be valuable to other bureaus and assist the department as a whole to recognize and secure systems against general patterns of intrusion. The underlying cause for the numerous weaknesses we identified in bureau information system controls is that Commerce does not have an effective departmentwide information security management program in place to ensure that sensitive data and critical operations receive adequate attention and that the appropriate security controls are implemented to protect them. Our study of security management best practices, as summarized in our 1998 Executive Guide, found that leading organizations manage their information security risks through an ongoing cycle of risk management. This management process involves (1) establishing a centralized management function to coordinate the continuous cycle of activities while providing guidance and oversight for the security of the organization as a whole, (2) identifying and assessing risks to determine what security measures are needed, (3) establishing and implementing policies and procedures that meet those needs, (4) promoting security awareness so that users understand the risks and the related policies and procedures in place to mitigate those risks, and (5) instituting an ongoing monitoring program of tests and evaluations to ensure that policies and procedures are appropriate and effective. However, Commerce’s information security management program is not effective in any of these key elements. Establishing a central management function is the starting point of the information security management cycle mentioned above. This function provides knowledge and expertise on information security and coordinates organizationwide security-related activities associated with the other four segments of the risk management cycle. For example, the function researches potential threats and vulnerabilities, develops and adjusts organizationwide policies and guidance, educates users about current information security risks and the policies in place to mitigate those risks, and provides oversight to review compliance with policies and to test the effectiveness of controls. This central management function is especially important to managing the increased risks associated with a highly connected computing environment. By providing coordination and oversight of information security activities organizationwide, such a function can help ensure that weaknesses in one unit’s systems do not place the entire organization’s information assets at undue risk. According to Commerce policy, broad program responsibility for information security throughout the department is assigned to the CIO. Department of Commerce Organization Order 15-23 of July 5, 2000, specifically tasks the CIO with developing and implementing the department’s information security program to ensure the confidentiality, integrity, and availability of information and IT resources. These responsibilities include developing policies, procedures, and directives for information security; providing mandatory periodic training in computer security awareness and accepted practice; and identifying and developing security plans for Commerce systems that contain sensitive information. 
Furthermore, the CIO is formally charged with carrying out the Secretary’s responsibilities for computer security under OMB Circular A-130, Appendix III, for all Commerce bureaus and the Office of the Secretary. An information security manager under the direction of the Office of the CIO is tasked with carrying out the responsibilities of the security program. These responsibilities, which are clearly defined in department policy, include developing security policies, procedures, and guidance and ensuring security oversight through reviews, which include tracking the implementation of required security controls. Commerce lacks an effective centralized function to facilitate the integrated management of the security of its information system infrastructure. At the time of our review, the CIO, who had no specific budget to fulfill security responsibilities and exercised no direct control over the IT budgets of the Commerce bureaus, stated that he believed that he did not have sufficient resources or the authority to implement the department’s information security program. Until February 2000, when additional staff positions were established to support the information security manager’s responsibilities, the information security manager had no staff to discharge these tasks. As of April 2001, the information security program was supported by a staff of three. Commerce policy also requires each of its bureaus to implement an information security program that includes a full range of security responsibilities. These include appointing a bureauwide information security officer as well as security officers for each of the bureau’s systems. However, the Commerce bureaus we reviewed also lack their own centralized functions to coordinate bureau security programs with departmental policies and procedures and to implement effective programs for the security of the bureaus’ information systems infrastructure. For example, four bureaus had staff who were assigned to security roles on a part-time basis and whose security responsibilities were treated as collateral duties. In view of the widespread interconnectivity of Commerce’s systems, the lack of a centralized approach to the management of security is particularly risky since there is no coordinated effort to ensure that minimal security controls are implemented and effective across the department. As demonstrated by our testing, intruders who succeeded in gaining access to a system in a bureau with weak network security could then circumvent the stronger network security of other bureaus. It is, therefore, unlikely that the security posture of the department as a whole will significantly improve until a more integrated security management approach is adopted and sufficient resources allotted to implement and enforce essential security measures departmentwide. As outlined in our 1998 Executive Guide, understanding the risks associated with information security is the second key element of the information security management cycle. Identifying and assessing information security risks help to determine what controls are needed and what level of resources should be expended on controls. Federal guidance requires all federal agencies to develop comprehensive information security programs based on assessing and managing risks.
Commerce policy regarding information security requires (1) all bureaus to establish and implement a risk management process for all IT resources and (2) system owners to conduct a periodic risk analysis for all sensitive systems within each bureau. Commerce bureaus we reviewed are not conducting risk assessments for their sensitive systems as required. Only 3 of the bureaus’ 94 systems we reviewed had documented risk assessments, one of which was still in draft. Consequently, most of the bureaus’ systems are being operated without consideration of the risks associated with their immediate environment. Moreover, these bureaus are not considering risks outside their immediate environment that affect the security of their systems, such as network interconnections with other systems. Although OMB Circular A-130, Appendix III, specifically requires that the risks of connecting to other systems be considered prior to doing so, several bureau officials acknowledged that they had not considered how vulnerabilities in systems that interconnected with theirs could undermine the security of their own systems. Instead, interconnections were established without this analysis; the initial decision to interconnect should have been made by management based on an assessment of the risk involved, the controls in place to mitigate the risk, and the predetermined acceptable level of risk. The widespread lack of risk assessments, combined with the serious access control weaknesses revealed during our testing, indicates that Commerce is doing little to understand and manage risks to its systems. Once risks have been assessed, OMB Circular A-130, Appendix III, requires agencies to document plans to mitigate these risks through system security plans. These plans should contain an overview of a system’s security requirements; describe the technical controls planned or in place for meeting those requirements; include rules that delineate the responsibilities of managers and individuals who access the system; and outline training needs, personnel controls, and continuity plans. Security plans should also be updated regularly to reflect significant changes to the system as well as the rapidly changing technical environment, and they should document that all aspects of security for a system have been fully considered, including management, technical, and operational controls. None of the bureaus we reviewed had security plans for all of their sensitive systems. Of the 94 sensitive systems we reviewed, 87 had no security plans. Of the seven systems that did have security plans, none had been approved by management. Moreover, five of these seven plans did not include all the elements required by OMB Circular A-130, Appendix III. Without comprehensive security plans, the bureaus have no assurance that all aspects of security have been considered in determining the security requirements of the system and that adequate protection has been provided to meet those requirements. OMB also requires management officials to formally authorize the use of a system before it becomes operational, when a significant change occurs, and at least every 3 years thereafter. Authorization provides quality control in that it forces managers and technical staff to find the best fit for security, given technical constraints, operational constraints, and mission requirements. By formally authorizing a system for operational use, a manager accepts responsibility for the risks associated with it.
Since the security plan establishes the system protection requirements and documents the security controls in place, it should form the basis for management’s decision to authorize processing. As of March 2001, Commerce management had not authorized any of the 94 sensitive systems that we identified. According to the more comprehensive data collected by the Office of the CIO in March 2000, 92 percent of all the department’s sensitive systems had not been formally authorized. The lack of authorization indicates that system managers had not reviewed and accepted responsibility for the adequacy of the security controls implemented on their systems. As a result, Commerce has no assurance that these systems are being adequately protected. The third key element of computer security management, as identified during our study of information security management practices at leading organizations, is establishing and implementing policies. Security policies are important because they are the primary mechanism by which management communicates its goals and requirements. Federal guidelines require agencies to frequently update their information security policies in order to assess and counter rapidly evolving threats and vulnerabilities. Commerce’s information security policies are significantly outdated and incomplete. Developed in 1993 and partially revised in 1995, the department’s information security policies and procedures manual, Information Technology Management Handbook, Chapter 10, “Information Technology Security,” and attachment, “Information Technology Security,” does not comply with OMB’s February 1996 revision to Circular A-130, Appendix III, and does not incorporate more recent NIST guidelines. For example, Commerce’s information security policy does not reflect current federal requirements for managing computer security risk on a continuing basis, authorizing processing, providing security awareness training, or performing system reviews. Moreover, because the policy was written before the explosive growth of the Internet and Commerce’s extensive use of it, policies related to the risks of current Internet usage are omitted. For example, Commerce has no departmentwide security policies on World Wide Web sites, e-mail, or networking. Further, Commerce has no departmental policies establishing baseline security requirements for all systems. For example, there is no departmental policy specifying required attributes for passwords, such as minimum length and the inclusion of special characters. Consequently, security settings differ both among bureaus and from system to system within the same bureau. Furthermore, Commerce lacks consistent policies establishing a standard minimum set of access controls. Having these baseline agencywide policies could eliminate many of the vulnerabilities discovered by our testing, such as configurations that provided users with excessive access to critical system files and sensitive data and exposed excessive system information, all of which facilitate intrusions. The Director of the Office of Information Policy, Planning, and Review and the Information Security Manager stated that Commerce management recognizes the need to update the department’s information security policy and will begin updating the security sections of the Information Technology Management Handbook in the immediate future.
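To suggest how a departmentwide baseline of the kind described above could be made concrete and checkable, the sketch below compares a system’s reported settings against a minimal set of required password attributes. The baseline values and setting names are illustrative assumptions only; they do not represent actual Commerce policy.

    # Hypothetical sketch: check a system's settings against a departmentwide
    # baseline for password attributes. The baseline values shown are
    # illustrative assumptions, not actual Commerce policy.

    BASELINE = {
        "min_password_length": 8,
        "require_special_character": True,
        "max_password_age_days": 90,
        "max_failed_logons": 3,
    }

    def check_against_baseline(system_name: str, settings: dict) -> list:
        """Return a list of baseline requirements the system does not meet."""
        findings = []
        if settings.get("min_password_length", 0) < BASELINE["min_password_length"]:
            findings.append("password length below baseline")
        if not settings.get("require_special_character", False):
            findings.append("special characters not required")
        if settings.get("max_password_age_days", 10**6) > BASELINE["max_password_age_days"]:
            findings.append("passwords not required to change often enough")
        if settings.get("max_failed_logons", 10**6) > BASELINE["max_failed_logons"]:
            findings.append("no effective limit on failed logon attempts")
        return [f"{system_name}: {finding}" for finding in findings]

    # Example: a hypothetical system whose reported settings fall short of the baseline.
    print(check_against_baseline("bureau-system-01",
                                 {"min_password_length": 6,
                                  "require_special_character": False}))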
The fourth key element of the security management cycle involves promoting awareness and conducting required training so that users understand the risks and the related policies and controls in place to mitigate them. Computer intrusions and security breakdowns often occur because computer users fail to take appropriate security measures. For this reason, it is vital that employees who use computer systems in their day-to-day operations are aware of the importance and sensitivity of the information they handle, as well as the business and legal reasons for maintaining its confidentiality, integrity, and availability. OMB Circular A-130, Appendix III, requires that employees be trained on how to fulfill their security responsibilities before being allowed access to sensitive systems. The Computer Security Act mandates that all federal employees and contractors who are involved with the management, use, or operation of federal computer systems be provided periodic training in information security awareness and accepted information security practice. Specific training requirements are outlined in NIST guidelines, which establish a mandatory baseline of training in security concepts and procedures and define additional structured training requirements for personnel with security-sensitive responsibilities. Overall, none of the seven bureaus had documented computer security training procedures, and only one of the bureaus had documented its policy for such training. This bureau also used a network user responsibility agreement, which requires that all network users read and sign a one-page agreement describing the network rules. Officials at another bureau stated that they were developing a security awareness policy document. Although each of the seven bureaus had informal programs in place, such as a brief overview as part of the one-time general security orientation for new employees, these programs do not meet the requirements of OMB, the Computer Security Act, or NIST Special Publication 800-16. Such brief overviews do not ensure that security risks and responsibilities are understood by all managers, users, and system administrators and operators. Shortcomings in the bureaus’ security awareness and training activities are illustrated by the following examples. Officials at one bureau told us that they did not see training as an integral part of the bureau’s security program and provided an instructional handbook only to users of a specific bureau application. Another bureau used a generic computer-based training course distributed by the Department of Defense that described general computer security concepts but was not specific to Commerce’s computing environment. Also, this bureau did not maintain records to document who had participated. Another bureau had limited awareness practices in place, such as distributing a newsletter to staff, but had no regular training program. Officials at this bureau told us that they were in the process of assessing the bureau’s training requirements. Only one Commerce bureau that we reviewed provided periodic refresher training. In addition, staff directly responsible for information security do not receive more extensive training than overviews since security is not considered to be a full-time function requiring special skills and knowledge. Several of the computer security weaknesses we discuss in this report indicate that Commerce employees are either unaware of or insensitive to the need for important information system controls.
The final key element of the security management cycle is an ongoing program of tests and evaluations to ensure that systems are in compliance with policies and that policies and controls are both appropriate and effective. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and corrects areas of noncompliance and ineffectiveness. For these reasons, OMB Circular A-130, Appendix III, directs that the security controls of major information systems be independently reviewed or audited at least every 3 years. Commerce policy also requires information security program oversight and tasks the program manager with performing compliance reviews of the bureaus as well as verification reviews of individual systems. The government information security reform provisions of the fiscal year 2001 National Defense Authorization Act require annual independent reviews of IT security in fiscal years 2001 and 2002. No oversight reviews of the Commerce bureaus’ systems have been performed by the staff of Commerce’s departmentwide information security program. The information security manager stated that he was not given the resources to perform these functions. Furthermore, the bureaus we reviewed do not monitor the effectiveness of their information security. Only one of the bureaus has performed isolated tests of its systems. In lieu of independent reviews, in May 2000, the Office of the CIO, using a draft of the CIO Council’s Security Assessment Framework, requested that all Commerce bureaus submit a self-assessment of the security of their systems based on the existence of risk assessments, security plans, system authorizations, awareness and training programs, service continuity plans, and incident response capabilities. This self-assessment did not require testing or evaluating whether systems were in compliance with policies or whether implemented controls were effective. Nevertheless, the Office of the CIO’s analysis of the self-assessments showed that 92 percent of Commerce’s sensitive systems did not comply with federal security requirements. Specifically, 63 percent of Commerce’s systems did not have security plans that comply with federal guidelines, 73 percent had no risk assessments, 64 percent did not have recovery plans, and 92 percent had not been authorized for operational use. The information security manager further stated that, because of the continued lack of resources, the Office of the CIO would not be able to test and evaluate the effectiveness of Commerce’s information security controls to comply with the requirements of the government information security reform provisions of the fiscal year 2001 National Defense Authorization Act. Instead, the information security manager stated that he would ask the bureaus to do another self-assessment. In future years, the information security manager intends to perform hands-on reviews as resources permit. The significant and pervasive weaknesses that we discovered in the seven Commerce bureaus we tested place the data and operations of these bureaus at serious risk. Sensitive economic, personnel, financial, and business confidential information is exposed, allowing potential intruders to read, copy, modify, or delete these data. Moreover, critical operations could effectively cease in the event of accidental or malicious service disruptions.
Poor detection and response capabilities exacerbate the bureaus’ vulnerability to intrusions. As demonstrated during our own testing, the bureaus’ general inability to notice our activities increases the likelihood that intrusions will not be detected in time to prevent or minimize damage. These weaknesses are attributable to the lack of an effective information security program, that is, lack of centralized management, a risk-based approach, up-to-date security policies, security awareness and training, and continuous monitoring of the bureaus’ compliance with established policies and the effectiveness of implemented controls. These weaknesses are exacerbated by Commerce’s highly interconnected computing environment in which the vulnerabilities of individual systems affect the security of systems in the entire department, since a compromise in a single poorly secured system can undermine the security of the multiple systems that connect to it. We recommend that the Secretary direct the Office of the CIO and the bureaus to develop and implement an action plan for strengthening access controls for the department’s sensitive systems commensurate with the risk and magnitude of the harm that could result from the loss, misuse, or modification of information through unauthorized access. Targeted timeframes for addressing individual systems should be determined by their order of criticality. This will require ongoing cooperative efforts between the Office of the CIO and the Commerce bureaus’ CIOs and their staff. Specifically, this action plan should address the logical access control weaknesses that are summarized in this report and will be detailed, along with corresponding recommendations, in a separate report designated for “Limited Official Use.” These weaknesses involve password management controls, operating system controls, and network controls. We recommend that the Secretary direct the Office of the CIO and the Commerce bureaus to establish policies to identify and segregate incompatible duties and to implement controls, such as reviewing access activity, to mitigate the risks associated with the same staff performing these incompatible duties. We recommend that the Secretary direct the Office of the CIO and the Commerce bureaus to establish policies and procedures for authorizing, testing, reviewing, and documenting software changes prior to implementation. We recommend that the Secretary direct the Office of the CIO to require the Commerce bureaus to develop and test, at least annually, comprehensive recovery plans for all sensitive systems. We recommend that the Secretary direct the Office of the CIO to establish a departmentwide incident handling function with formal procedures for preparing for, detecting, responding to, and reporting incidents. We recommend that the Secretary direct the Office of the CIO and the Commerce bureaus to develop intrusion detection and incident response capabilities that include installing updates to system software to correct known vulnerabilities, installing warning banners on all network access paths, installing intrusion detection systems on networks and sensitive systems, and implementing policies and procedures for monitoring log files and audit trails on a regular schedule commensurate with the risk of unauthorized access to computer resources. We recommend that the Secretary direct the Office of the CIO to develop and implement an effective departmentwide security program.
Such a program should include establishing a central information security function to manage an ongoing cycle of the following security activities: (1) assessing risks and evaluating needs, which includes developing security plans for all sensitive systems that comply with federal guidelines as outlined in OMB Circular A-130, Appendix III, and NIST SP 800-18, and formally authorizing all systems before they become operational, upon significant change, and at least every 3 years thereafter; (2) updating the information security program policies to comply with current federal regulations regarding risk assessments, specific security controls that must be included in security plans, management authorization to process, audits and reviews, security incidents, awareness and training, and contingency planning, to address vulnerabilities associated with Commerce’s widespread use of Internet technologies, and to provide minimum baseline standards for access controls to all networked systems to reduce risk in Commerce’s highly interconnected environment; (3) developing and implementing a computer security awareness and training program; and (4) developing and implementing a management oversight process that includes periodic compliance reviews and tests of the effectiveness of implemented controls. This oversight process should include audits and reviews and establish clear roles, responsibilities, and procedures for tracking identified vulnerabilities and ensuring their remediation. We also recommend that the Secretary of Commerce, the Office of the CIO, and the bureau CIOs direct the appropriate resources and authority to fulfill the security responsibilities that Commerce policy and directives task them with performing and to implement these recommendations. In addition, we recommend that the Secretary take advantage of the opportunity that the installation of the new network infrastructure will provide to improve security. Specifically, by establishing strong departmental control over the network, Commerce could require all bureaus using this common network to meet a minimum level of security standards. This would help to ensure that weaknesses in one bureau’s security will not undermine the security of all interconnecting bureaus, as is now the case. In providing written comments on a draft of this report, which are reprinted in appendix II, the Secretary of Commerce concurred with our findings and stated that Commerce is committed to improving the information security posture of the department. According to the Secretary, the bureaus we reviewed have developed and are currently implementing action plans to correct the specific problems we identified. He further stated that the heads of the Commerce bureaus have been directed to give priority to information security and to allocate sufficient resources to ensure that adequate security is in place. Moreover, the Secretary of Commerce said that he had approved an IT management restructuring plan on June 13, 2001, that would give the department CIO, as well as the bureau CIOs, new authority to strengthen the departmentwide information security program. He further stated that on July 23, 2001, he had established a task force on information security to develop a comprehensive and effective program for the department. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 10 days from the date of this letter.
At that time, we will send copies to the Ranking Minority Member of the Committee; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; Senate Committee on Commerce, Science, and Transportation; and House Committee on Government Reform; as well as to other interested members of the Congress. We will also send copies to the Honorable Johnnie E. Frazier, Inspector General, Department of Commerce, and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. If you have any questions regarding this report, please contact me at (202) 512-3317 or Elizabeth Johnston, Assistant Director, at (202) 512-6345. We can also be reached by e-mail at daceyr@gao.gov and johnstone@gao.gov, respectively. Key contributors to this report are listed in appendix IV. Our objectives were to determine whether the Department of Commerce has effectively implemented (1) logical access and other information system controls over its computerized data, (2) incident detection and response capabilities, and (3) an effective information security management program and related procedures. To accomplish these objectives, we applied appropriate sections of our Federal Information System Controls Audit Manual (GAO/AIMD-12.19.6), which describes our methodology for reviewing information system controls that affect the integrity, confidentiality, and availability of computerized data associated with federal agency operations. As requested by the committee, the scope of our review was focused on seven Commerce bureaus: the Bureau of Export Administration, the Economic Development Administration, the Economics and Statistics Administration, the International Trade Administration, the Minority Business Development Agency, the National Telecommunications and Information Administration, and the Office of the Secretary. All of these bureaus are based at the Hoover Building in Washington, D.C., and have missions related to, or in support of, trade development, reporting, assistance, regulation, and oversight. In reviewing key logical access controls over Commerce’s computerized data, we included in the scope of our testing systems that Commerce defined as critical to the mission of the department in that their disruption would jeopardize the national interest or national requirements relating to securing the U.S. economy, national security, and the delivery of essential private sector services. We also included systems that fit the criteria in OMB Circular A-130, Appendix III, for requiring special protection, i.e., general support systems, such as local area networks, and major applications. In addition, we included (1) applications that support the department and are important for the operations of the Office of the Secretary and (2) important web servers that support the missions of the bureaus. We examined the configuration and control implementation for each of the computer operating system platforms and for each of the bureaus’ computer networks that support these bureaus’ mission-critical operations. In total, we assessed 120 systems, including 8 firewalls, 20 routers, 15 switches, and over 50 other network support or infrastructure devices. We conducted penetration tests of Commerce’s systems both from inside the Hoover Building, using an internal Commerce address, and from a remote location through the Internet. We attempted to penetrate Commerce’s systems and exploit identified control weaknesses to verify the vulnerability they presented.
We also met with Commerce officials to discuss possible reasons for vulnerabilities we identified and the department's plans for improvement. To evaluate incident detection and response capabilities, we focused on Commerce's ability to prevent, detect, respond to, and report incidents. We examined whether Commerce bureaus (1) installed the latest system software patches, warning banners, and intrusion detection systems to deter intruders, (2) activated and reviewed access logs to ensure that incidents were detected, (3) implemented procedures to ensure that bureaus responded to incidents in an appropriate manner, and (4) generated and reviewed incident reports. To review security program management and related procedures, we reviewed pertinent departmentwide policies, guidance, and security plans for each of the bureaus' sensitive systems and held discussions with officials responsible for developing and implementing these policies and plans throughout Commerce. This included analyzing departmentwide and bureau policies to determine (1) their compliance with OMB and NIST guidance and (2) whether they incorporated the management best practices identified in our executive guide Information Security Management: Learning From Leading Organizations (GAO/AIMD-98-68, May 1998); meeting with officials in Commerce's Office of the Chief Information Officer, which is responsible for managing Commerce's information security program, to determine what actions Commerce has taken to ensure effective security program implementation; discussing security plan development and implementation with officials in Commerce's Office of the Chief Information Officer and the seven bureaus; and reviewing system security plans from the seven bureaus to determine if they complied with Commerce's departmentwide policies and OMB and NIST guidance. We performed our audit work from August 2000 through May 2001 in accordance with generally accepted government auditing standards. Because our work was focused on performing tests of selected computer-based security controls, we did not fully evaluate all computer controls. Consequently, additional vulnerabilities could exist that we did not identify. The Office of the Secretary (O/S) is the department's general management arm and provides the principal support to the Secretary in formulating policy and providing advice to the President. O/S provides program leadership for the department's functions and exercises general oversight of its operating agencies. This office includes subordinate offices that have departmentwide responsibilities or perform special program functions directly on behalf of the Secretary. The Bureau of Export Administration (BXA) is primarily responsible for administering and enforcing the nation's system for controlling exports of sensitive dual-use goods and technologies in accordance with the Export Administration Act and regulations. BXA's major functions include formulating and implementing export control policy; processing export license applications; conducting various policy, technical, and economic analyses; promulgating regulations; conducting industry outreach; and enforcing the Export Administration Act and regulations. The Economics and Statistics Administration (ESA) produces, analyzes, and disseminates some of the nation's most important economic and demographic data. Important economic indicators produced by ESA include retail sales, housing starts, and foreign trade.
ESA houses the Economic Bulletin Board, a dial-up bulletin board system that delivers major U.S. government economic indicators from the Bureau of the Census, the Bureau of Economic Analysis, the Federal Reserve Board, and the Labor Department. ESA issues federal export information and international economic data of interest to businesses, policymakers, and researchers. ESA also provides the public with STAT-USA/Internet, an online resource updated daily that offers both domestic U.S. economic information and foreign trade information. The Economic Development Administration (EDA) provides grants to economically distressed communities to generate new employment, help retain existing jobs, and stimulate industrial and commercial growth. EDA programs help fund the construction of public works and development facilities, and are intended to promote industrial and commercial growth. One EDA program is designed to help states and local areas design and implement strategies for adjusting to changes that cause or threaten to cause serious economic damage. Another program awards grants and cooperative agreements for studies designed to increase knowledge about emerging economic development issues, determine the causes of economic distress, and locate ways to alleviate barriers to economic development. Twelve Trade Adjustment Assistance Centers around the country receive funds to provide technical assistance to certified businesses hurt by increased imports. The International Trade Administration (ITA) is responsible for promoting U.S. exports of manufactured goods, nonagricultural commodities, and services, as well as for associated trade policy issues. ITA works closely with U.S. businesses and other government agencies, including the Office of the U.S. Trade Representative and the Department of the Treasury. Through its Market Access and Compliance Unit, ITA formulates and implements international economic policies to obtain market access for American firms and workers as well as compliance by foreign nations with U.S. international trade agreements. ITA also advises on international trade and investment policies pertaining to U.S. industrial sectors, carries out programs to strengthen domestic export competitiveness, and promotes U.S. industry's increased participation in international markets. Through its Import Administration, it administers legislation that counters unfair foreign trade practices. Finally, ITA's U.S. & Foreign Commercial Service, which has 105 domestic offices and 157 overseas posts in 84 countries, promotes the exports of U.S. companies and helps small and medium-sized businesses market their goods and services abroad. The Minority Business Development Agency's (MBDA) mission is to promote the growth and competitiveness of the nation's minority-owned and operated businesses. MBDA seeks to improve minority business enterprises' access to domestic and international marketplaces and to improve their opportunities for financing business startup and expansion. MBDA provides management and technical assistance to minority individuals who own or are trying to establish a business through a network of business development centers in areas with large concentrations of minority populations and businesses. This includes assistance with planning, bidding, estimating, bonding, construction, financing, procurement, international trade matters, franchising, accounting, and marketing.
MBDA has agreements with banks and other lending institutions that are intended to help minority entrepreneurs gain access to capital for business expansion or development purposes. The National Telecommunications and Information Administration (NTIA) serves as the President's principal adviser on domestic and international communications and information policies pertaining to the nation's economic and technological advancement and to regulation of the telecommunications industry. In this respect, NTIA develops and presents U.S. plans and policies at international communications conferences and related meetings, coordinates the U.S. government's position on communications with federal agencies, and prescribes policies that ensure effective and efficient federal use of the electromagnetic spectrum. NTIA's program activities are designed to assist the Administration, the Congress, and regulatory agencies in addressing diverse technical and policy questions. Key contributors to this assignment were Edward Alexander, Gerald Barnes, Lon Chin, West Coile, Debra Conner, Nancy DeFrancesco, Denise Fitzpatrick, Edward Glagola, David Hayes, Brian Howe, Sharon Kittrell, Harold Lewis, Suzanne Lightman, Duc Ngo, Tracy Pierson, Kevin Secrest, Eugene Stevens, and William Wadsworth.
The Department of Commerce generates and disseminates important economic information that is of great interest to U.S. businesses, policymakers, and researchers. The dramatic rise in the number and sophistication of cyberattacks on federal information systems is of growing concern. This report provides a general summary of the computer security weaknesses in the unclassified information systems of seven Commerce organizations as well as in the management of the department's information security program. The significant and pervasive weaknesses in the seven Commerce bureaus place the data and operations of these bureaus at serious risk. Sensitive economic, personnel, financial, and business confidential information is exposed, allowing potential intruders to read, copy, modify, or delete these data. Moreover, critical operations could effectively cease in the event of accidental or malicious service disruptions. Poor detection and response capabilities exacerbate the bureaus' vulnerability to intrusions. As demonstrated during GAO's testing, the bureaus' general inability to detect GAO's activities increases the likelihood that intrusions will not be detected in time to prevent or minimize damage. These weaknesses are attributable to the lack of an effective information security program, one that provides centralized management, a risk-based approach, up-to-date security policies, security awareness and training, and continuous monitoring of the bureaus' compliance with established policies and of the effectiveness of implemented controls. These weaknesses are exacerbated by Commerce's highly interconnected computing environment. A compromise in a single poorly secured system can undermine the security of the multiple systems that connect to it.
Boeing and TRW disclosed the key results and limitations of Integrated Flight Test 1A in written reports released between August 13, 1997, and April 1, 1998. The contractors explained in a report issued 60 days after the June 1997 test that the test achieved its primary objectives, but that some sensor abnormalities were noted. For example, while the report explained that the sensor detected the deployed targets and collected some usable target signals, the report also stated that some sensor components did not operate as desired and the sensor often detected targets where there were none. In December 1997, the contractors documented other test anomalies. According to briefing charts prepared for a December meeting, the Boeing sensor tested in Integrated Flight Test 1A had a low probability of detection; the sensor’s software was not always confident that it had correctly identified some target objects; the software significantly increased the rank of one target object toward the end of the flight; and in-flight calibration of the sensor was inconsistent. Additionally, on April 1, 1998, the contractors submitted an addendum to an earlier report that noted two more problems. In this addendum, the contractors disclosed that their claim that TRW’s software successfully distinguished a mock warhead from decoys during a post-flight analysis was based on tests of the software using about one-third of the target signals collected during Integrated Flight Test 1A. The contractors also noted that TRW reduced the software’s reference data so that it would correspond to the collected target signals being analyzed. Project office and Nichols Research officials said that in late August 1997, the contractors orally communicated to them all problems and limitations that were subsequently described in the December 1997 briefing and the April 1998 addendum. However, neither project officials nor contractors could provide us with documentation of these communications. Although the contractors reported the test’s key results and limitations, they described the results using some terms that were not defined. For example, one written report characterized the test as a “success” and the sensor’s performance as “excellent.” We found that the information in the contractors’ reports, in total, enabled officials in the Ground Based Interceptor Project Management Office and Nichols Research to understand the key results and limitations of the test. However, because such terms are qualitative and subjective rather than quantitative and objective, their use increased the likelihood that test results would be interpreted in different ways and might even be misunderstood. As part of our ongoing review of missile defense testing, we are examining the need for improvements in test reporting. Appendix I provides details on the test and the information disclosed. Two groups—Nichols Research Corporation and the Phase One Engineering Team— evaluated TRW’s basic discrimination software. Nichols evaluated the software by testing it against simulated warheads and decoys similar to those that the contractors were directed to design their software to handle. The evaluation concluded that although the software had some weaknesses, it met performance requirements established by Boeing in nearly all cases. However, Nichols explained that the software was successful because the simulated threat was relatively simple. 
Nichols said that TRW’s software was highly dependent on prior knowledge about the threat and that the test conditions that Nichols' engineers established for the evaluation included providing perfect knowledge of the features that the simulated warhead and decoys would display during the test. Nichols’ evaluation was limited because it did not test TRW’s software using actual flight data from Integrated Flight Test 1A. Nichols told us that it had planned to assess the software’s performance using real target signals collected during Integrated Flight Test 1A, but did not do so because resources were limited. Because it did not perform this assessment, Nichols cannot be said to have definitively proved or disproved TRW’s claim that its software discriminated the mock warhead from decoys using data collected from Integrated Flight Test 1A. The Phase One Engineering Team was tasked by the National Missile Defense Joint Program Office to assess the performance of TRW’s software and to complete the assessment within 2 months using available data. The team’s methodology included determining if TRW’s software was based on sound mathematical, engineering, and scientific principles and testing the software’s critical modules using data from Integrated Flight Test 1A. The team reported that although the software had weaknesses, it was well designed and worked properly, with only some changes needed to increase the robustness of the discrimination function. Further, the team reported that the results of its test of the software using Integrated Flight Test 1A data produced essentially the same results as those reported by TRW. Based on its analysis, team members predicted that the software would perform successfully in a future intercept test if target objects deployed as expected. Because the Phase One Engineering Team did not process the raw data from Integrated Flight Test 1A or develop its own reference data, the team cannot be said to have definitively proved or disproved TRW’s claim that its software successfully distinguished the mock warhead from decoys using data collected from Integrated Flight Test 1A. A team member told us its use of Boeing- and TRW-provided data was appropriate because the former TRW employee had not alleged that the contractors tampered with the raw test data or used inappropriate reference data. In assessing TRW’s Extended Kalman Filter Feature Extractor, both Nichols and the Phase One Engineering Team tested whether the Filter could be used to extract an additional feature (key characteristic) from a target object’s signal to help identify that object. Nichols tested the Filter’s ability against a number of simulated target signals and found that it was generally successful. The Phase One Engineering Team tested the Filter’s ability using the signals of one simulated target and one collected during Integrated Flight Test 1A. Both groups concluded that the Filter could feasibly provide additional information about target objects, but neither group’s evaluation allowed it to forecast whether the Filter would improve the basic software’s discrimination capability. Appendix II provides additional details on the Nichols and Phase One Engineering Team evaluations. The Department of Justice relied primarily on scientific reports, but considered information from two Army legal offices when it determined in March 1999 that it would not intervene in the false claims lawsuit brought by the former TRW employee. 
The scientific reports were prepared by Nichols Research Corporation and the Phase One Engineering Team. Justice’s attorneys said they also considered an opinion of the Army Space and Missile Defense Command’s legal office that said it did not consider vouchers submitted by Boeing for work performed by its subcontractor, TRW, as being false claims. In addition, the attorneys said a recommendation from the Army Legal Services Agency that Justice not intervene was a factor in their decision. It is not clear how the Army Legal Services Agency came to that decision as very little documentation is available and agency officials told us that they remember very little about the case. Appendix III provides additional information on factors that were considered in Justice’s decision. When the National Missile Defense Joint Program Office determined that another assessment of TRW’s software should be undertaken, it tasked an existing advisory group, known as the Phase One Engineering Team, to conduct this review. Comprised of various Federally Funded Research and Development Centers, this group was established in 1988 by the Strategic Defense Initiative Organization as a mechanism to provide the program office with access to a continuous, independent, and objective source of technical and engineering expertise. Since the Federally Funded Research and Development Centers are authorized, established, and operated for the express purpose of providing the government with independent and objective advice, program officials determined that making use of this existing advisory group would be sufficient to assure an independent and objective review. Program officials said that they relied upon the centers’ adherence to requirements contained in both the Federal Acquisition Regulation and their contracts and agreements with their sponsoring federal agencies to assure themselves that the review team could provide an independent, unbiased look at TRW’s software. Appendix IV provides a fuller explanation of the steps taken by the National Missile Defense Joint Program Office to assure itself that the Phase One Engineering Team would provide an independent and objective review. In commenting on a draft of this report, the Department of Defense and the Department of Justice concurred with our findings. The Department of Defense also suggested technical changes, which we incorporated as appropriate. The Department of Defense's comments are reprinted in appendix VII. The Department of Justice provided its concurrence via e-mail and had no additional comments. We conducted our review from August 2000 through February 2002 in accordance with generally accepted government auditing standards. Appendix VI provides details on our scope and methodology. The National Missile Defense Joint Program Office’s process for releasing documents significantly slowed our work. For example, the program office took approximately 4 months to release key documents, such as Nichols Research Corporation’s 1996 and 1998 evaluations of the Extended Kalman Filter Feature Extractor and Nichols’ 1997 evaluation of TRW’s discrimination software. We requested these and other documents on September 14, 2000, and received them on January 9, 2001. As arranged with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. 
At that time, we plan to provide copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Senate Committee on Appropriations, Subcommittee on Defense; the House Committee on Armed Services; and the House Committee on Appropriations, Subcommittee on Defense; as well as to the Secretary of Defense; the Attorney General; and the Director, Missile Defense Agency. We will make copies available to others upon request. If you or your staff have any questions concerning this report, please contact Bob Levin, Director, Acquisition and Sourcing Management, at (202) 512-4841; Jack Brock, Managing Director, at (202) 512-4841; or Keith Rhodes, Chief Technologist, at (202) 512-6412. Major contributors to this report are listed in appendix VIII. Boeing and TRW disclosed the key results and limitations of an early sensor flight test, known as Integrated Flight Test 1A, to the Ground Based Interceptor Project Management Office. The contractors included some key results and limitations in written reports submitted soon after the June 1997 test, but others were not included in written reports until December 1997 or April 1998. However, according to project office and Nichols officials, all problems and limitations included in the written reports were communicated orally to the project management office in late August 1997. The deputy project office manager said his office did not report these oral communications to others within the Program Office or the Department of Defense because the project office was the office within the Department responsible for the Boeing contract. One problem that was included in initial reports to program officials was a malfunctioning cooling mechanism that did not lower the sensor's temperature to the desired level. Boeing characterized the mechanism's performance as somewhat below expectations but functioning well enough for the sensor's operation. We hired experts to determine the extent to which the problem could affect the sensor's performance. The experts found that the cooling problem degraded the sensor's performance in a number of ways, but would not likely result in extreme performance degradation. The experts studied only how increased noise affected the sensor's performance in terms of the relative strengths of the target signals and the noise (the signal-to-noise ratio). The experts did not evaluate discrimination performance, which is dependent on the measurement accuracy of the collected infrared signals. The experts' findings are discussed in more detail later in this appendix. Integrated Flight Test 1A, conducted in June 1997, was a test of the Boeing sensor—a highly sensitive, compact, infrared device, consisting of an array of silicon detectors, that is normally mounted on the exoatmospheric kill vehicle. However, in this test, a surrogate launch vehicle carried the sensor above the earth's atmosphere to view a cluster of target objects that included a mock warhead and various decoys. When the sensor detected the target cluster, its silicon detectors began to make precise measurements of the infrared radiation emitted by the target objects. Over the tens of seconds that the target objects were within its field of view, the sensor continuously converted the infrared radiation into an electrical current, or signal, proportional to the amount of energy collected by the detectors.
The sensor then digitized the signal (converted it into numerical values), completed a preliminary part of the planned signal processing, and formatted the signal so that it could be transmitted via a data link to a recorder on the ground. After the test, Boeing processed the signals further and formatted them so that TRW could input the signals into its discrimination software to assess its capability to distinguish the mock warhead from decoys. In post-flight ground testing, the software analyzed the processed data and identified the key characteristics, or features, of each signal. The software then compared the features it extracted to the expected features of various types of target objects. Based on this comparison, the software ranked each item according to its likelihood of being the mock warhead. TRW reported that the highest-ranked object was the mock warhead. The primary objective of Integrated Flight Test 1A was to reduce risk in future flight tests. Specifically, the test was designed to determine if the sensor could operate in space; to examine the extent to which the sensor could detect small differences in infrared emissions; to determine if the sensor was accurately calibrated; and to collect target signature data for post-mission discrimination analysis. In addition, Boeing established quantitative requirements for the test. For example, the sensor was expected to acquire the target objects at a specified distance. According to a Nichols engineer, Boeing established these requirements to ensure that its exoatmospheric kill vehicle, when fully developed, could destroy a warhead with the single shot precision (expressed as a probability) required by the Ground Based Interceptor Project Management Office. The engineer said that in Integrated Flight Test 1A, Boeing planned to measure its sensor's performance against these lower-level requirements so that Boeing engineers could determine which sensor elements, including the software, required further refinement. However, the engineer told us that because of the various sensor problems, of which the contractor and project office were aware, Boeing determined before the test that it would not use most of these requirements to judge the sensor's performance. (Although Boeing did not judge the performance of its sensor against the requirements as it originally planned, Boeing did, in some cases, report the sensor's performance in terms of these requirements. For a summary of selected test requirements and the sensor's performance as reported by Boeing and TRW in their August 22, 1997, report, see app. V.) Table 1 provides details on the key results and limitations of Integrated Flight Test 1A that contractors disclosed in various written reports and briefing charts. Although the contractors disclosed the key results and limitations of the flight test in written reports and in discussions, the written reports described the results using some terms that were not defined. For example, in their August 22, 1997, report, Boeing and TRW described Integrated Flight Test 1A as a "success" and the performance of the Boeing sensor as "excellent." We asked the contractors to explain their use of these terms. We asked Boeing, for example, why it characterized its sensor's performance as "excellent" when the sensor's silicon detector array did not cool to the desired temperature, the sensor's power supply created excess noise, and the sensor detected numerous false targets.
Boeing said that even though the silicon detector array operated at temperatures 20 to 30 percent higher than desired, the sensor produced useful data. Officials said they knew of no other sensor that would be capable of producing any useful data under those conditions. Boeing officials went on to say that the sensor continuously produced usable, and much of the time excellent, data in "real time" during flight. In addition, officials said the sensor component responsible for suppressing background noise in the silicon detector array performed perfectly in space and the silicon detectors collected data in more than one wave band. Boeing concluded that the sensor's performance allowed the test to meet all mission objectives. Based on our review of the reports and discussions with officials in the Ground Based Interceptor Project Management Office and Nichols Research, we found that the contractors' reports, in total, contained the information those officials needed to understand the key results and limitations of the test. However, because terms such as "success" and "excellent" are qualitative and subjective rather than quantitative and objective, we believe their use increases the likelihood that test results would be interpreted in different ways and could even be misunderstood. As part of our ongoing review of missile defense testing, we are examining the need for improvements in test reporting. The contractors' August 13, 1997, report, sometimes referred to as the 45-day report, was a series of briefing charts. In it, contractors reported that Integrated Flight Test 1A achieved its principal objectives of reducing risks for subsequent flight tests, demonstrating the performance of the exoatmospheric kill vehicle's sensor, and collecting target signature data. In addition, the report stated that TRW's software successfully distinguished a mock warhead from accompanying decoys. The August 22 report, known as the 60-day report, was a lengthy document that disclosed much more than the August 13 report. As discussed in more detail below, the report explained that some sensor abnormalities were observed during the test, that some signals collected from the target objects were degraded, that the launch vehicle carrying the sensor into space adversely affected the sensor's ability to collect target signals, and that the sensor sometimes detected targets where there were none. These problems were all noted in the body of the report, but the report summary stated that review and analysis subsequent to the test confirmed the "excellent" performance and nominal operation of all sensor subsystems. Boeing disclosed in the report that sensor abnormalities were observed during the test and that the sensor experienced a higher than expected false alarm rate. These abnormalities were (1) a cooling mechanism that did not bring the sensor's silicon detectors to the intended operating temperature, (2) a power supply unit that created excess noise, and (3) software that did not function as designed because of the slow turnaround of the surrogate launch vehicle. In the report's summary, Boeing characterized the cooling mechanism's performance as somewhat below expectations but functioning well enough for the sensor's operation. In the body of the report, Boeing said that the fluctuations in temperature could lead to an apparent decrease in sensor performance. Additionally, Boeing engineers told us that the cooling mechanism's failure to bring the silicon detector array to the required temperature caused the detectors to be noisy.
Because the discrimination software identifies objects as a warhead or a decoy by comparing the features of a target's signal with those it expects a warhead or decoy to display, a noisy signal may confuse the software. Boeing and TRW engineers said that they and program office officials were aware that there was a problem with the sensor's cooling mechanism before the test was conducted. However, Boeing believed that the sensor would perform adequately at higher temperatures. According to contractor documents, the sensor did not perform as well as expected, and some target signals were degraded more than anticipated. The report also referred to a problem with the sensor's power supply unit and its effect on target signals. An expert we hired to evaluate the sensor's performance at higher than expected temperatures found that the power supply, rather than the temperature, was the primary cause of excess noise early in the sensor's flight. Boeing engineers told us that they were aware that the power supply was noisy before the test, but, as shown by the test, it was worse than expected. The report explained that, as expected before the flight, the slow turnaround of the massive launch vehicle on which the sensor was mounted in Integrated Flight Test 1A caused the loss of some target signals. Engineers explained to us that the sensor would eventually be mounted on the lighter, more agile exoatmospheric kill vehicle, which would move back and forth to detect objects that did not initially appear in the sensor's field of view. The engineers said that Boeing designed software that takes into account the kill vehicle's normal motion to remove the background noise, but the software's effectiveness depended on the fast movement of the kill vehicle.
Boeing engineers told us that, because of the slow turnaround of the launch vehicle used in the test, the target signals detected during the turnaround were particularly noisy and the software sometimes removed not only the noise but the entire signal as well. The report mentioned that the sensor experienced more false alarms than expected. A false alarm is a detection of a target that is not there. According to the experts we hired, during Integrated Flight Test 1A, the Boeing sensor often mistakenly identified noise produced by the power supply as signals from actual target objects. In a fully automated discrimination software program, a high false alarm rate could overwhelm the tracking software. Because the post-flight processing tools were not fully developed at the time of the August 13 and August 22, 1997, reports, Boeing did not rely upon a fully automated tracking system when it processed the Integrated Flight Test 1A data. Instead, a Boeing engineer manually tracked the target objects. The contractors realized, and reported to the Ground Based Interceptor Project Management Office, that numerous false alarms could cause problems in future flight tests, and they identified software changes to reduce their occurrence. On December 11, 1997, Boeing and TRW briefed officials from the Ground Based Interceptor Project Management Office and one of its support contractors on various anomalies observed during Integrated Flight Test 1A. The contractors' briefing charts explained the effect the anomalies could have on Integrated Flight Test 3, the first planned intercept test for the Boeing exoatmospheric kill vehicle, identified potential causes of the anomalies, and summarized the solutions to mitigate their effect. While some of the anomalies included in the December 11 briefing charts were referred to in the August 13 and August 22 reports, others were being reported in writing for the first time. The anomalies referenced in the briefing charts included the sensor's high false alarm rate, the silicon detector array's higher-than-expected temperature, the software's low confidence factor that it had correctly identified two target objects, the sensor's lower than expected probability of detection, and the software's elevation in rank of one target object toward the end of the test. In addition, the charts showed that an in-flight attempt to calibrate the sensor was inconsistent. According to the charts, actions to prevent similar anomalies from occurring in, or affecting, Integrated Flight Test 3 had in most cases already been implemented or were under way. The contractors again recognized that a large number of false alarms occurred during Integrated Flight Test 1A. According to the briefing charts, false alarms occurred during the slow turnarounds of the surrogate launch vehicle. Additionally, the contractors hypothesized that some false alarms resulted from space-ionizing events. By December 11, engineers had identified solutions to reduce the number of false alarms in future tests. As they had in the August 22, 1997, report, the contractors recognized that the silicon detector array did not cool properly during Integrated Flight Test 1A. The contractors reported that higher silicon detector array temperatures could cause noisy signals that would adversely impact the detector array's ability to estimate the infrared intensity of observed objects.
Efforts to eliminate the impact of the higher temperatures, should they occur in future tests, were ongoing at the time of the briefing. Contractors observed that the confidence factor produced by the software was small for two target objects. The software equation that determines how confident the software should be that it has correctly identified a target object did not work properly for the large balloon or the multiple-service launch vehicle. Corrections to the equation had been made by the time of the briefing. The charts state that the Integrated Flight Test 1A sensor had a lower than anticipated probability of detection and a high false alarm rate. Because a part of the tracking, fusion, and discrimination software was designed for a sensor with a high probability of detection and a low false alarm rate, the software did not function optimally and needed revision. Changes to prevent this from happening in future flight tests were under way. The briefing charts showed that TRW's software significantly increased the rank of one target object just before target objects began to leave the sensor's field of view. Although a later Integrated Flight Test 1A report stated the mock warhead was consistently ranked as the most likely target, the charts show that if in Integrated Flight Test 3 the same object's rank began to increase, the software could select the object as the intercept target. In the briefing charts, the contractors reported that TRW made a software change in the model that is used to generate reference data. When reference data was generated with the software change, the importance of the mock warhead was increased, and it was selected as the target. Tests of the software change were in progress as of December 11. The Boeing sensor measures the infrared emissions of target objects by converting the collected signals into intensity with the help of calibration data obtained from the sensor prior to flight. However, the sensor was not calibrated at the higher temperature range that was experienced during Integrated Flight Test 1A. To remedy the problem, the sensor viewed a star with known infrared emissions. The measurement of the star's intensity was to have helped fill the gaps in calibration data that was essential to making accurate measurements of the target object signals. Boeing disclosed that the corrections based on the star calibration were inconsistent and did not improve the match of calculated and measured target signatures. Boeing subsequently told us that the star calibration corrections were effective for one of the wavelength bands, but not for another, and that the inconsistency referred to in the briefing charts was in how these bands behaved at temperatures above the intended operating range. Efforts to find and implement solutions were in progress. On April 1, 1998, Boeing submitted a revised addendum to replace an addendum that had accompanied the August 22, 1997, report. This revised addendum was prepared in response to comments and questions submitted by officials from the Ground Based Interceptor Project Management Office, Nichols Research Corporation, and the Defense Criminal Investigative Service concerning the August 22 report. In this addendum, the contractors referred in writing to three problems and limitations that had not been addressed in earlier written test reports or the December 11 briefing. Contractors noted that a gap-filling module, which was designed to replace noisy or missing signals, did not operate as designed.
They also disclosed that TRW’s analysis of its discrimination software used target signals collected during a selected portion of the flight timeline and used a portion of the Integrated Flight Test 1A reference data that corresponded to this same timeline. The April 1 addendum reported that a gap-filling module that was designed to replace portions of noisy or missing target signals with expected signal values did not operate as designed. TRW officials told us that the module’s replacement values were too conservative and resulted in a poor match between collected signals and the signals the software expected the target objects to display. The April 1, 1998, addendum also disclosed that the August 13 and August 22 reports, in which TRW conveyed that its software successfully distinguished the mock warhead from decoys, were based on tests of the software using about one-third of the target signals collected during Integrated Flight Test 1A. We talked to TRW officials who told us that Boeing provided several data sets to TRW, including the full data set. The officials said that Boeing provided target signals from the entire timeline to a TRW office that was developing a prototype version of the exoatmospheric kill vehicle’s tracking, fusion, and discrimination software, which was not yet operational. However, TRW representatives said that the test bed version of the software that TRW was using so that it could submit its analysis within 60 days of Integrated Flight Test 1A could not process the full data set. The officials said that shortly before the August 22 report was issued, the prototype version of the tracking, fusion, and discrimination software became functional and engineers were able to use the software to assess the expanded set of target signals. According to the officials, this assessment also resulted in the software’s selecting the mock warhead as the most likely target. In our review of the August 22 report, we found no analysis of the expanded set of target signals. The April 1, 1998, report, did include an analysis of a few additional seconds of data collected near the end of Integrated Flight Test 1A, but did not include an analysis of target signals collected at the beginning of the flight. Most of the signals that were excluded from TRW's discrimination analysis were collected during the early part of the flight, when the sensor’s temperature was fluctuating. TRW told us that their software was designed to drop a target object’s track if the tracking portion of the software received no data updates for a defined period. This design feature was meant to reduce false tracks that the software might establish if the sensor detected targets where there were none. In Integrated Flight Test 1A, the fluctuation of the sensor’s temperature caused the loss of target signals. TRW engineers said that Boeing recognized that this interruption would cause TRW’s software to stop tracking all target objects and restart the discrimination process. Therefore, Boeing focused its efforts on processing those target signals that were collected after the sensor’s temperature stabilized and signals were collected continuously. Some signals collected during the last seconds of the sensor’s flight were also excluded. The former TRW employee alleged that these latter signals were excluded because during this time a decoy was selected as the target. The Phase One Engineering Team cited one explanation for the exclusion of the signals. 
The team said that TRW stopped using data when objects began leaving the sensor's field of view. Our review did not confirm this explanation. We reviewed the target intensities derived from the infrared frames covering that period and found that several seconds of data were excluded before objects began to leave the field of view. Boeing officials gave us another explanation. They said that target signals collected during the last few seconds of the flight were streaking, or blurring, because the sensor was viewing the target objects as it flew by them. Boeing told us that streaking would not occur in an intercept flight because the kill vehicle would have continued to approach the target objects. We could not confirm that the test of TRW's discrimination software, as explained in the August 22, 1997, report, included all target signals that did not streak. We noted that the April 1, 1998, addendum shows that TRW analyzed several more seconds of target signals than is shown in the August 22, 1997, report. It was in these additional seconds that the software began to increase the rank of one decoy as it assessed which target object was most likely the mock warhead. However, the April 1, 1998, addendum also shows that even though the decoy's rank increased, the software continued to rank the mock warhead as the most likely target. But, because not all of the Integrated Flight Test 1A timeline was presented in the April 1 addendum, we could not determine whether any portion of the excluded timeline might have contained useful data or, if there were additional seconds of useful data, whether a target object other than the mock warhead might have been ranked as the most likely target. The April 1 addendum also documented that portions of the reference data developed for Integrated Flight Test 1A were excluded from the discrimination analysis. Nichols and project office officials told us the software identifies the various target objects by comparing the target signals collected from each object at a given point in their flight to the target signals it expects each object to display at that same point in the flight. Therefore, when target signals collected during a portion of the flight timeline are excluded, reference data developed for the same portion of the timeline must be excluded. Officials in the National Missile Defense Joint Program Office's Ground Based Interceptor Project Management Office and Nichols Research told us that soon after Integrated Flight Test 1A the contractors orally disclosed all of the problems and limitations cited in the December 11, 1997, briefing and the April 1, 1998, addendum. Contractors made these disclosures to project office and Nichols Research officials during meetings that were held to review Integrated Flight Test 1A results sometime in late August 1997. The project office and contractors could not, however, provide us with documentation of these disclosures. The current Ground Based Interceptor Project Management Office deputy manager said that the problems that contractors discussed with his office were not specifically communicated to others within the Department of Defense because his office was the office within the Department responsible for the Boeing contract. The project office's assessment was that these problems did not compromise the reported success of the mission, were similar in nature to problems normally found in initial developmental tests, and could be easily corrected.
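The comparison process that Nichols and project office officials described, in which the software reduces each observed object's signal to a set of features and scores those features against reference data, can be illustrated with a minimal sketch. The sketch below is illustrative only: the feature definitions, reference values, scoring function, object names, and signals are hypothetical, and the code is not TRW's algorithm, software, or data.

```python
# Illustrative sketch only: not TRW's software, code, or data. It mimics, at a
# very high level, the reported approach of extracting features from each
# observed object's signal, comparing them with reference ("expected")
# features, and ranking the objects by how closely they match the warhead.
import math

# Hypothetical reference features (mean intensity, intensity spread) that a
# warhead is expected to display.
WARHEAD_REFERENCE = (1.00, 0.05)

def extract_features(signal):
    """Reduce a time series of intensity samples to two simple features."""
    mean = sum(signal) / len(signal)
    spread = math.sqrt(sum((s - mean) ** 2 for s in signal) / len(signal))
    return (mean, spread)

def warhead_score(features):
    """Smaller distance to the warhead reference yields a higher score."""
    return 1.0 / (1.0 + math.dist(features, WARHEAD_REFERENCE))

# Hypothetical processed signals for three observed target objects.
observed = {
    "object_A": [0.98, 1.03, 0.99, 1.02],   # steady, warhead-like
    "object_B": [0.55, 1.10, 0.25, 0.90],   # large swings, decoy-like
    "object_C": [0.60, 0.58, 0.64, 0.61],   # steady but dim
}

# Rank objects by their likelihood score, highest first.
ranking = sorted(observed, reverse=True,
                 key=lambda name: warhead_score(extract_features(observed[name])))
for name in ranking:
    score = warhead_score(extract_features(observed[name]))
    print(f"{name}: likelihood score {score:.3f}")
```

Even in this toy form, the ranking depends entirely on how well the reference values describe what the objects actually display, which is the sensitivity to prior threat knowledge that both Nichols and the Phase One Engineering Team noted.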
Because we questioned whether Boeing’s sensor could collect any usable target signals if the silicon detector array was not cooled to the desired temperature, we hired sensor experts at Utah State University’s Space Dynamics Laboratory to determine the extent to which the sub-optimal cooling degraded the sensor’s performance. These experts concluded that the higher temperature of the silicon detectors degraded the sensor’s performance in a number of ways, but did not result in extreme degradation. For example, the experts said the higher temperature reduced by approximately 7 percent the distance at which the sensor could detect targets. The experts also said that the rapid temperature fluctuation at the beginning and at the end of data acquisition contributed to the number of times that the sensor detected a false target. However, the experts said the major cause of the false alarms was the power supply noise that contaminated the electrical signals generated by the sensor in response to the infrared energy. When the sensor signals were processed after Integrated Flight Test 1A, the noise appeared as objects, but they were actually false alarms. Additionally, the experts said that the precision with which the sensor could estimate the infrared energy emanating from an object based on the electrical signal produced by the energy was especially degraded in one of the sensor’s two infrared wave bands. In their report, the experts said that the Massachusetts Institute of Technology’s Lincoln Laboratory analyzed the precision with which the Boeing sensor could measure infrared radiation and found large errors in measurement accuracy. The Utah State experts said that their determination that the sensor’s measurement capability was degraded in one infrared wave band might partially explain the errors found by Lincoln Laboratory. Although Boeing’s sensor did not cool to the desired temperature during Integrated Flight Test 1A, the experts found that an obstruction in gas flow rather than the sensor’s design was at fault. These experts said the sensor’s cooling mechanism was properly designed and Boeing’s sensor design was sound. Nichols Research Corporation and the Phase One Engineering Team tested TRW’s discrimination software and a planned enhancement to that software, known as the Extended Kalman Filter Feature Extractor. Nichols concluded that although it had weaknesses, the discrimination software met performance requirements established by Boeing when it was tested against a simple threat and given near perfect knowledge about the key characteristics, or features, that the target objects would display during flight. The Phase One Engineering Team reported that despite some weaknesses, TRW’s discrimination software was well designed and worked properly. Like Nichols, the team found that the software’s performance was dependent upon prior knowledge of the target objects. Because Nichols did not test the software’s capability using data collected from Integrated Flight Test 1A and the Phase One Engineering Team did not process the raw data from Integrated Flight Test 1A or develop its own reference data, neither group can be said to have definitively proved or disproved TRW’s claim that its software successfully identified the mock warhead from decoys using data collected from Integrated Flight Test 1A. 
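The experts' finding about reduced detection distance can be put in rough context with a back-of-envelope relation. The expressions below are an illustration only, under two simplifying assumptions: the received signal from a point target falls off with the square of range, and a detection is declared at a fixed signal-to-noise threshold. They are not the experts' model or Lincoln Laboratory's analysis.

```latex
% Illustration only, under the stated simplifying assumptions:
% S_0 is the signal received at unit range, N the noise level, and
% T the signal-to-noise threshold at which a detection is declared.
\mathrm{SNR}(R) \approx \frac{S_0 / R^{2}}{N}
\qquad\Longrightarrow\qquad
R_{\mathrm{det}} \approx \sqrt{\frac{S_0}{N\,T}} \;\propto\; N^{-1/2}
```

Under these assumptions, the detection range shrinks with the inverse square root of the noise, so a noise increase of roughly 15 percent, for example, would shorten the detection range by about 7 percent, comparable in size to the reduction the experts reported.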
From their assessments of TRW's Extended Kalman Filter Feature Extractor, both groups concluded that it was feasible that the Filter could provide additional information about target objects, but neither group determined to what extent the Filter would improve the software's discrimination performance. Nichols Research Corporation evaluated TRW's discrimination software to determine if it met performance requirements developed by Boeing. Boeing established discrimination performance requirements to ensure that its exoatmospheric kill vehicle, when fully developed, could destroy a warhead with the single shot precision (expressed as a probability) required by the Ground Based Interceptor Project Management Office. The kill vehicle must perform a number of functions successfully to achieve a hit-to-kill intercept of its target, such as acquiring the target cluster, discriminating the warhead from other objects, and diverting to hit the warhead. Boeing believed that if it met the performance requirements that it established for each function, including the discrimination function, the exoatmospheric kill vehicle should meet the required single shot probability of kill. To determine if TRW's software performed as required, Nichols' engineers obtained a copy of TRW's software; verified that the software was based on sound scientific and engineering principles; validated that it operated as designed; and tested its performance in 48 simulated scenarios that included countermeasures, such as decoys, that the system might encounter before 2010. Nichols validated the software by obtaining a copy of the actual source code from TRW and installing the software in a Nichols computer. Engineers then examined the code line-by-line; verified its logic, data flow, and input and output; and determined that the software accurately reflected TRW's baseline design. Nichols next verified that the software performed exactly as reported by TRW. Engineers ran 13 TRW-provided test cases through the software and compared the results to those reported by TRW. Nichols reported that its results were generally consistent with TRW's results with only minor performance differences in a few cases. After analyzing the 13 reference cases, Nichols generated additional test cases by simulating a wide range of enemy missiles with countermeasures that included decoys. Including the 13 reference cases, Nichols analyzed the software's performance in a total of 48 test scenarios. Because the software performed successfully in 45 of 48 simulated test cases, Nichols concluded that the system met the performance requirements established by Boeing. However, Nichols explained that the software met its requirement because it was tested against a simple threat. In addition, Nichols said that the software was given nearly perfect knowledge of the features the simulated warhead and any decoys included in each test would display. Nichols found anomalies when it simulated the performance of TRW's software. Nichols' December 2, 1997, report identified anomalies that prevented the software from meeting its performance requirement in 3 of the 48 cases. First, Nichols found that a software module did not work properly. (TRW used this gap-filling software module to replace missing or noisy target signals.) Second, Nichols found that the software's target selection logic did not always work well. As a result, the probability that the software would select the simulated warhead as the target was lower than required in three of the test cases.
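Boeing's approach of setting a requirement for each kill vehicle function so that the overall single shot probability of kill is met can be illustrated with a simple relation. The decomposition below is a hedged sketch: it assumes the functions succeed or fail independently, and the numerical values are hypothetical rather than actual program requirements.

```latex
% Hedged illustration (independence assumed; numbers hypothetical):
P_{\mathrm{ssk}} \approx P_{\mathrm{acquire}} \times P_{\mathrm{discriminate}} \times P_{\mathrm{divert\ and\ hit}}
% for example, 0.98 \times 0.95 \times 0.97 \approx 0.90
```

Viewed this way, a shortfall in any single function, including discrimination, lowers the overall probability directly, which is consistent with Boeing's view that meeting each function's requirement should allow the kill vehicle to meet the required single shot probability of kill.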
Nichols reported inconsistencies in TRW’s software code. Engineers found that in some cases the software did not extract one particular feature from the target signals, and, in other cases, the results improved substantially when this feature was excluded. The Nichols report warned that in cases where this feature was the most important in the discrimination process, the software’s performance could be significantly degraded. Evaluation Parameters. In its 1997 report, Nichols cautioned that TRW’s software met performance requirements because the countermeasures included in the 48 tests were relatively simple. Nichols’ testing also assumed perfect knowledge about the warhead and decoys included in the simulations. Engineers told us that all 48 test cases were constructed to test the software against the simple threats that the Department of Defense believed “nations of concern” might deploy before 2010. The engineers said that their evaluation did not include tests of the software against the number and type of decoys deployed in Integrated Flight Test 1A because that threat cluster was more complex than the simple threat that contractors were required to design their software to handle. In addition, Nichols reported that in all 48 test cases perfect reference data was used—that is, the software was told what features the warhead and decoys would display during the simulations. Nichols engineers said TRW’s software is sensitive to prior knowledge about the threat and the Ground Based Interceptor Project Management Office was aware of this aspect of TRW’s design. Nichols’ evaluation was limited because it did not test TRW’s software using actual flight data from Integrated Flight Test 1A. Nichols told us that in addition to testing TRW’s discrimination software using simulated data it had also planned to assess the software’s performance using real target signals collected during Integrated Flight Test 1A. Because it did not perform this assessment, Nichols can not be said to have definitively proved or disproved TRW’s claim that its software discriminated the mock warhead from decoys using data collected from Integrated Flight Test 1A. Officials said they did not complete this aspect of the evaluation because their resources were limited. However, we noted that Nichols’ engineers had already verified TRW’s software and obtained the raw target signals collected during Integrated Flight Test 1A. These engineers told us that this assessment could be done within two weeks after Nichols received all required information. (Nichols said it did not have some needed information.) In 1998, the National Missile Defense Joint Program Office asked the Phase One Engineering Team to conduct an assessment, using available data, of TRW’s discrimination software, even though Nichols Research Corporation had already concluded that it met the requirements established by Boeing. The program office asked for the second evaluation because the Defense Criminal Investigative Service lead investigator expressed concern about the ability of Nichols to provide a truly objective evaluation. 
The Phase One Engineering Team developed a methodology to (1) determine if TRW's software was consistent with scientific, mathematical, and engineering principles; (2) determine whether TRW accurately reported that its software successfully discriminated a mock warhead from decoys using data collected from Integrated Flight Test 1A; and (3) predict the performance of TRW's basic discrimination software against Integrated Flight Test 3 scenarios. The key results of the team's evaluation were that the software was well designed; the contractors accurately reported the results of Integrated Flight Test 1A; and the software would likely perform successfully in Integrated Flight Test 3. The primary limitation was that the team used Boeing- and TRW-processed target data and TRW-developed reference data in determining the accuracy of TRW's reports for Integrated Flight Test 1A. The team began its work by assuring itself that TRW's discrimination software was based on sound scientific, engineering, and mathematical principles and that those principles had been correctly implemented. It did this primarily by studying technical documents provided by the contractors and the program office. Next, the team began to look at the software's performance using Integrated Flight Test 1A data. The team studied TRW's August 13 and August 22, 1997, test reports to learn more about discrepancies that the Defense Criminal Investigative Service said it found in these reports. Team members also received briefings from the Defense Criminal Investigative Service, Boeing, TRW, and Nichols Research Corporation. Team members told us that they did not replicate TRW's software in total. Instead, the team emulated critical functions of TRW's discrimination software and tested those functions using data collected during Integrated Flight Test 1A. To test the ability of TRW's software to extract the features of each target object's signal, the team designed a software routine that mirrored TRW's feature-extraction design. Unlike Nichols, the team did not obtain target signals collected during the test and then process those signals. Rather, the team received Integrated Flight Test 1A target signals that had been processed by Boeing and then further processed by TRW. These signals represented about one-third of the collected signals. Team members input the TRW-supplied target signals into the team's feature-extraction software routine and extracted two features from each target signal. The team then compared the extracted features to TRW's reports on these same features and concluded that TRW's feature-extraction process worked as reported by TRW. Next, the team acquired the results of 200 of the 1,000 simulations that TRW had run to determine the features that target objects deployed in Integrated Flight Test 1A would likely display. Using these results, team members developed reference data that the software could compare to the features extracted from Integrated Flight Test 1A target signals. Finally, the team wrote software that ranked the different observed target objects in terms of the probability that each was the mock warhead. The results produced by the team's software were then compared to TRW's reported results. The team did not perform any additional analysis to predict the performance of the Boeing sensor and its software in Integrated Flight Test 3. 
Instead, the team used the knowledge that it gained from its assessment of the software’s performance using Integrated Flight Test 1A data to estimate the software’s performance in the third flight test. In its report published on January 25, 1999, the Phase One Engineering Team reported that even though it noted some weaknesses, TRW’s discrimination software was well designed and worked properly, with only some refinement or redesign needed to increase the robustness of the discrimination function. In addition, the team reported that its test of the software using data from Integrated Flight Test 1A produced essentially the same results as those reported by TRW. The team also predicted that the Boeing sensor and its software would perform well in Integrated Flight Test 3 if target objects deployed as expected. The team’s assessment identified some software weaknesses. First, the team reported that TRW’s use of a software module to replace missing or noisy target signals was not effective and could actually hurt rather than help the performance of the discrimination software. Second, the Phase One Engineering Team pointed out that while TRW proposed extracting several features from each target-object signal, only a few of the features could be used. The Phase One Engineering Team also reported that it found TRW’s software to be fragile because the software was unlikely to operate effectively if the reference data—or expected target signals—did not closely match the signals that the sensor collected from deployed target objects. The team warned that the software’s performance could degrade significantly if incorrect reference data were loaded into the software. Because developing good reference data is dependent upon having the correct information about target characteristics, sensor-to-target geometry, and engagement timelines, unexpected targets might challenge the software. The team suggested that very good knowledge about all of these parameters might not always be available. The Phase One Engineering Team reported that the results of its evaluation using Integrated Flight Test 1A data supported TRW’s claim that in post-flight analysis its software accurately distinguished a mock warhead from decoys. The report stated that TRW explained why there were differences in the discrimination analysis included in the August 13, 1997, Integrated Flight Test 1A test report and that included in the August 22, 1997, report. According to the report, one difference was that TRW mislabeled a chart in the August 22 report. Another difference was that the August 22 discrimination analysis was based on target signals collected over a shorter period of time (see app. I for more information regarding TRW’s explanation of report differences). Team members said that they found TRW’s explanations reasonable. The Phase One Engineering Team predicted that if the targets deployed in Integrated Flight Test 3 performed as expected, TRW’s discrimination software would successfully identify the warhead as the target. The team observed that the targets proposed for the flight test had been viewed by Boeing’s sensor in Integrated Flight Test 1A and that target-object features collected by the sensor would be extremely useful in constructing reference data for the third flight test. 
The team concluded that given this prior knowledge, TRW's discrimination software would successfully select the correct target even in the most stressing Integrated Flight Test 3 scenario being considered, if all target objects deployed as expected. However, the team expressed concern about the software's capabilities if objects deployed differently, as had happened in previous flight tests. The Phase One Engineering Team's conclusion that TRW's software successfully discriminated was based on the assumption that Boeing's and TRW's input data were accurate. The team did not process the raw data collected by the sensor's silicon detector array during Integrated Flight Test 1A or develop its own reference data by running hundreds of simulations. Instead, the team used target signature data extracted by Boeing and TRW and developed reference data from a portion of the simulations that TRW ran for its own post-flight analysis. Because it did not process the raw data from Integrated Flight Test 1A or develop its own reference data, the team cannot be said to have definitively proved or disproved TRW's claim that its software successfully discriminated the mock warhead from decoys using data collected from Integrated Flight Test 1A. A team member told us that the team's use of Boeing- and TRW-provided data was appropriate because the former TRW employee had not alleged that the contractors tampered with the raw test data or used inappropriate reference data. Nichols Research Corporation and the Phase One Engineering Team evaluated TRW's Extended Kalman Filter Feature Extractor and determined that it could provide additional information to TRW's discrimination software. However, Nichols Research told us that its evaluation was not an exhaustive analysis of the Filter's capability, but an attempt to determine if a Kalman Filter—which is frequently used to estimate such variables as an object's position or velocity—could extract a feature from an infrared signal. The Phase One Engineering Team reported that because of the limited time available to assess both TRW's discrimination software and the Extended Kalman Filter Feature Extractor, it did not rigorously test the Filter. Its analysis was also aimed at determining whether the Filter could extract a feature from target objects. Nichols engineers assessed TRW's application of the Kalman Filter in 1996 and again in 1998. For both evaluations, Nichols engineers constructed a stand-alone version of the Filter (the Filter consists of mathematical formulas converted into software code) that the engineers believed mirrored TRW's design. However, Nichols designed its 1996 version of the Filter from information extracted and pieced together from multiple documents and without detailed design information from TRW engineers. Nichols Research Corporation and Ground Based Interceptor Project Management Office officials said that Nichols' engineers did not talk with TRW's engineers about the Filter's design because the project office was limiting communication with the contractors in order to prevent disclosure of contractors' proprietary information during the source selection for the exoatmospheric kill vehicle. In 1996, Nichols engineers tested the Filter's ability to extract the features of simulated signals representative of threat objects. Engineers said that under controlled conditions they attempted to determine from which signals the Filter could extract features successfully and from which signals it could not. 
Also, because the Filter could not begin to extract features from the target objects unless it had some advance knowledge about the signal, engineers conducted tests to determine how much knowledge about initial conditions the Filter needed. In its November 1996 report, Nichols concluded that the Filter was unlikely to enhance the capability of TRW's discrimination software. The assessment showed that the Filter could not extract the features of a signal unless the Filter had a great deal of advance knowledge about the signal. It also showed that the Filter was sensitive to "noise" (undesirable energy that degrades the target signal). By 1998, the competitive phase of the exoatmospheric kill vehicle contracts was over. Based on an improved understanding of the Filter's implementation and the Filter's proposed candidacy as an upgrade to the discrimination software, the Ground Based Interceptor Project Management Office asked Nichols to test the Filter again. Nichols engineers were now able to hold discussions with TRW engineers regarding their respective Filter designs. From these discussions, Nichols learned that it had designed two elements of the Filter differently from TRW. The primary difference was in the number of filters that Nichols and TRW used to preprocess the infrared signals before the feature extraction began. Nichols' design included only one pre-processing filter, while TRW's included several. A second, less significant difference involved the delay time before feature extraction began. Nichols modified its version to address these differences. In its second assessment, Nichols again examined the feature extraction capability of the Filter. Engineers pointed out that in both assessments the Filter was tested as stand-alone software, not as an integrated part of TRW's discrimination software program. The new tests showed that the redesigned Filter could perform well against the near-term threat. However, in its report, Nichols expressed reservations that unless the target and specifics of the target's deployment were well defined, the Filter's performance would likely be sub-optimal. Nichols also pointed out that the Filter was unlikely to perform well against targets that exhibited certain characteristics. Nichols tested the ability of the Extended Kalman Filter Feature Extractor to extract features over a wide range of object dynamics and characteristics, including elements of the far-term threat. Nichols demonstrated the Filter's ability to extract information (features), but did not assess the Filter's potential impact on the TRW discrimination design. Because it did not assess the discrimination capability of the Extended Kalman Filter, Nichols could not predict how the Filter would have performed against either the target complex for Integrated Flight Test 1A or the target complex proposed for Integrated Flight Test 3. The target sets deployed in Integrated Flight Test 1A and initially proposed for Integrated Flight Test 3 were more complex than the near-term threat that Nichols tested the Filter against. In their discussions with us, Nichols' engineers stressed that their assessments should be viewed as an evaluation of a technology concept, not an evaluation of a fully integrated component of the discrimination software. 
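The sensitivities Nichols described, to advance knowledge about the signal and to noise, are characteristic of Kalman filtering in general. The scalar Python sketch below shows the textbook predict-and-update cycle applied to estimating a roughly constant underlying feature from a noisy signal; it is a generic, illustrative filter with assumed numbers, not TRW's Extended Kalman Filter Feature Extractor or Nichols' stand-alone reconstruction of it.

    import random

    def kalman_filter_1d(measurements, x0, p0, process_var, meas_var):
        """Scalar Kalman filter: estimate a nearly constant value from noisy measurements."""
        x, p = x0, p0                      # initial state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + process_var            # predict: uncertainty grows by the process noise
            k = p / (p + meas_var)         # update: Kalman gain weighs prediction vs. measurement
            x = x + k * (z - x)
            p = (1 - k) * p
            estimates.append(x)
        return estimates

    # Hypothetical noisy signal around a true feature value of 5.0 (illustrative numbers only).
    random.seed(0)
    true_value = 5.0
    noisy_signal = [true_value + random.gauss(0, 0.8) for _ in range(50)]

    estimates = kalman_filter_1d(noisy_signal, x0=0.0, p0=10.0, process_var=1e-4, meas_var=0.64)
    print(round(estimates[-1], 2))         # converges toward 5.0 as measurements accumulate

The initial estimate and its variance (x0 and p0 above) play the role of the advance knowledge about the signal that the Nichols assessments discuss: a poor starting estimate or badly chosen noise variances slows convergence or prevents the filter from locking onto the feature at all, which is consistent with the sensitivity Nichols reported.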
Engineers admitted that their approach to this assessment was less thorough than the evaluation they conducted of TRW's discrimination software and that engineers did not fully understand why the additional bank of pre-processing filters improved the Filter's performance. They said a more systematic analysis would be needed to fully evaluate the Filter's performance. The National Missile Defense Joint Program Office did not originally ask the Phase One Engineering Team to evaluate TRW's application of the Kalman Filter. However, the team told us that program officials later asked it to conduct a quick assessment as an addition to its evaluation of TRW's software. Team members designed an Extended Kalman Filter Feature Extractor similar to TRW's. Like Nichols' first design, the Phase One Engineering Team's design was not identical to TRW's Filter. In fact, the team did not include any filters to preprocess the infrared signals before the feature extraction began. The Phase One Engineering Team tested the capability of its Filter against one simulated target object and one of the objects whose signal was collected during Integrated Flight Test 1A. The team reported that the Filter did stabilize and extract the features of the objects' infrared signals. However, the team added the caveat that the Filter would need good initial knowledge about the target object before it could begin the extraction process. The team reported that its evaluation of the Filter was limited. It said it did not evaluate the Filter's sensitivity to noise, the information the Filter needed to begin operation, or the extent to which the Filter would improve the performance of the discrimination software. Before deciding in March 1999 not to intervene in the False Claims lawsuit brought by the former TRW employee, the Department of Justice considered scientific reports and information from two Army sources. Specifically, Justice relied upon evaluations of TRW's software conducted by the Nichols Research Corporation and the Phase One Engineering Team (see appendix II for more information on these evaluations), information provided by the Army Space and Missile Defense Command, and a recommendation made by the Army Legal Services Agency. Justice officials told us that the input of the Space and Missile Defense Command carried more weight in the decision-making process than the recommendation by the Army Legal Services Agency because the Command is the contracting agency for the kill vehicle and is therefore more familiar with the contractors involved as well as the technical details of the lawsuit. The Army Space and Missile Defense Command was brought into this matter in response to an inquiry by the Department of Justice concerning the vouchers that were submitted for cost reimbursement by Boeing for work performed by its subcontractor, TRW. Specifically, Justice asked whether the Army would have paid the contractor's vouchers if Boeing and TRW had misrepresented the capabilities of the software in the vouchers. In a letter to Justice, dated February 24, 1999, the Command stated that the Army did not consider the vouchers submitted by Boeing for TRW's work to be false claims. 
The letter cited the Nichols and Phase One Engineering Team reports as support for its conclusions and noted that a cost-reimbursement research-and-development contract requires only that the contractor exercise its "best efforts." There is some uncertainty about how the Army Legal Services Agency came to recommend in February 1999 that Justice not intervene in the lawsuit. Army Legal Services had very little documentation to explain the recommendation, and agency officials told us that they remember very little about the case. The agency's letter stated that it was basing its recommendation on conversations with investigators handling the case and on the former TRW engineer's wishes. However, the lead investigator in the case (from the Defense Criminal Investigative Service) stated that he and his team had not recommended to the Army that the case not proceed. The little documentation available shows only that the case attorney's predecessor spoke with the lead investigator shortly after the case was opened. Officials said they could not remember why they cited conversations with case investigators in the letter and agreed that there were no other investigators aside from those in the Defense Criminal Investigative Service. One official stressed that the letter does not explicitly say that the investigators recommended nonintervention. As for the engineer's wishes, Army Legal Services has no record of direct contacts with the engineer, and agency officials acknowledged that they probably obtained information about the engineer's wishes from Justice. Agency officials also said they could not remember why they cited the engineer's wishes in their letter. The engineer told us that she did tell Justice that if it was not going to help, it should not hinder the case. The engineer also told us that this may have been misinterpreted by the agency as a refusal of any help. Justice officials agreed that the engineer consistently wanted Justice to take up the case. Legal Services Agency officials noted that it would be very unusual for someone not to want help from Justice, especially considering that less than 10 percent of False Claims cases succeed when Justice is not involved. Army Legal Services Agency officials said that the case was one of several hundred the agency handles at any one time and that their involvement in a case like this one is usually minimal, unless the agency is involved in the prosecution. The officials stated that the Army Space and Missile Defense Command letter likely would have influenced their own letter because the Command's deputy counsel was recognized for his expertise in matters of procurement fraud. They also said that they relied on Justice to provide information about technical details of the case. The case attorney stated that he had not reviewed the Phase One Engineering Team or Nichols studies. The Defense Criminal Investigative Service, which was investigating the allegations against Boeing and TRW, asked the National Missile Defense Joint Program Office to establish an independent panel to evaluate the capability and performance of TRW's discrimination software. Although Nichols Research Corporation, a support contractor overseeing Boeing's work, had already conducted such an assessment and reported that the software met requirements, the case investigator was concerned about the ability of Nichols to provide a truly objective assessment. 
In response to the investigator's request, the program office utilized an existing advisory group, known as the Phase One Engineering Team, to conduct the second assessment. Composed of various Federally Funded Research and Development Centers, this group had been established by the Strategic Defense Initiative Organization in 1988 in order to provide the program office access to a continuous, independent, and objective source of technical and engineering expertise. Since Federally Funded Research and Development Centers are expressly authorized, established, and operated to provide the government with independent and objective advice, Joint Program Office officials determined that making use of such a group would be sufficient to assure an independent and objective review. Scientific associations, however, said that there are alternative ways of choosing a panel to review contentious issues. Nonetheless, program officials said that establishing a review team using such methods would likely have increased the time the reviewers needed to complete their work and could have increased the cost of the review. When the National Missile Defense Joint Program Office determined that it should undertake a review of the TRW discrimination software because of allegations that contractors had misrepresented their work, it turned to the Phase One Engineering Team. The Phase One Engineering Team was established in 1988 by the Strategic Defense Initiative Organization—later known as the Ballistic Missile Defense Organization—as an umbrella mechanism to obtain technical and engineering support from Federally Funded Research and Development Centers. To ensure that the individual scientists who work on each review undertaken by the Phase One Engineering Team have the requisite expertise, membership on each review team varies with each assignment. When asked to advise a program, the director of the Phase One Engineering Team determines which Federally Funded Research and Development Centers have the required expertise. The director then contacts officials at those centers to identify the appropriate scientists for the task. According to the director, the National Missile Defense Joint Program Office does not dictate the individuals who work on a Phase One Engineering Team review. When the director received the request to conduct a review of TRW's discrimination software, he determined there were three Federally Funded Research and Development Centers best suited to undertake this review. A total of five scientists were then selected from these three centers to form the review team: one member from the Aerospace Corporation, sponsored by the U.S. Air Force; two members from the Massachusetts Institute of Technology's Lincoln Laboratory, also sponsored by the U.S. Air Force; and two members from the Lawrence Livermore National Laboratory, sponsored by the Department of Energy. The federal government established the Federally Funded Research and Development Centers to meet special or long-term research or development needs of the sponsoring federal government agencies that were not being met effectively by existing in-house or contractor resources. 
The federal government enters into long-term relationships with the Federally Funded Research and Development Centers in order to encourage them to provide the continuity that allows them to attract high-quality personnel who will maintain their expertise, retain their objectivity and independence, preserve familiarity with the government's needs, and provide a quick-response capability. To achieve these goals, the Federally Funded Research and Development Centers must have access, beyond that required in normal contractual relationships with the government, to government and supplier information, to sensitive or proprietary data, and to employees and facilities. Because of this special access, the Federally Funded Research and Development Centers are required by the Federal Acquisition Regulation and agreements with their sponsoring agencies to operate in the public interest with objectivity and independence, to be free from organizational conflicts of interest, and to fully disclose their affairs to the sponsoring agency. To further ensure that they are free from organizational conflicts of interest, Federally Funded Research and Development Centers are operated, managed, and/or administered by a university or consortium of universities; another not-for-profit or nonprofit organization; or an industrial firm, as an autonomous organization or as an identifiable separate operating unit of a parent organization. All three of the Federally Funded Research and Development Centers involved in this review had entered into sponsoring agreements and contracts with their respective sponsoring agencies that contain the requirements imposed on such Centers by the Federal Acquisition Regulation. For example, the sponsoring agreement between the Air Force and Lincoln Laboratory requires that Lincoln Laboratory avoid any action that would put its personnel in perceived or actual conflicts of interest regarding either unfair competition or objectivity. Joint Program Office officials said they relied upon adherence to the governing regulations and sponsoring agreements to assure themselves that the members of this review team could provide a fresh, unbiased look at TRW's software. Officials with whom we spoke expressed confidence in the team's independence. Justice officials said that they had no reason to doubt the objectivity or independence of the review team's members or the seriousness and thoroughness of their effort. The Phase One Engineering Team director told us that independence is a program goal and that the team's reviews report the technical truth regardless of what the National Missile Defense Joint Program Office might want to hear. The director noted that the best way to ensure independence is to have the best scientists from different organizations discuss the technical merits of an issue. At your request, we spoke with officials of the National Academy of Sciences and the American Physical Society who told us that there are alternative ways to choose a panel. One method commonly used by these scientific organizations, which frequently conduct studies and evaluate reports or journal articles, is peer review. According to a GAO report that studied federal peer review practices, peer review is a process wherein scientists with knowledge and expertise equal to that of the researchers whose work they review make an independent assessment of the technical or scientific merit of that research. 
According to the Phase One Engineering Team director, the evaluation performed by the team assigned to review TRW's software was a type of peer review. However, National Academy of Sciences and American Physical Society officials told us that since individuals knowledgeable in a given area often have opinions or biases, an unbiased study team should include members who would, as a group, espouse a broad spectrum of opinions and interests. Such a team should include both supporters and critics of the issue being studied. These officials told us that it was their opinion that the Phase One Engineering Team members are "insiders" who are unlikely to be overly critical of the National Missile Defense program. The National Missile Defense Joint Program Office official who requested that the Phase One Engineering Team conduct such a review said that he could have appointed a panel such as that suggested by the National Academy of Sciences and the American Physical Society. But he said that he wanted a panel that was already knowledgeable about warhead discrimination in space and required little additional knowledge to complete its review. The official noted that the team's review was originally intended to be a one- to two-month effort, even though it eventually took about eight months to complete. Some additional time was required to address further issues raised by the Defense Criminal Investigative Service. A team member said that the statement of work was defined so that the panel could complete the evaluation in a timely manner with the data available. Officials of the National Academy of Sciences and the American Physical Society acknowledged that convening a panel such as the type they suggested would likely have required more time and could have been more costly. The table below includes selected requirements that Boeing established before the flight test to evaluate sensor performance and the actual sensor performance characteristics that Boeing and TRW discussed in the August 22 report. We determined whether Boeing and TRW disclosed key results and limitations of Integrated Flight Test 1A to the National Missile Defense Joint Program Office by examining test reports submitted to the program office on August 13, 1997, August 22, 1997, and April 1, 1998, and by examining the December 11, 1997, briefing charts. We also held discussions with and examined various reports and documents prepared by Boeing North American, Anaheim, California; TRW Inc., Redondo Beach, California; the Raytheon Company, Tucson, Arizona; Nichols Research Corporation, Huntsville, Alabama; the Phase One Engineering Team, Washington, D.C.; the Massachusetts Institute of Technology/Lincoln Laboratory, Lexington, Massachusetts; the National Missile Defense Joint Program Office, Arlington, Virginia, and Huntsville, Alabama; the Office of the Director, Operational Test and Evaluation, Washington, D.C.; the U.S. Army Space and Missile Defense Command, Huntsville, Alabama; the Defense Criminal Investigative Service, Mission Viejo, California, and Arlington, Virginia; and the Institute for Defense Analyses, Alexandria, Virginia. We held discussions with and examined documents prepared by Dr. Theodore Postol, Massachusetts Institute of Technology, Cambridge, Massachusetts; Dr. Nira Schwartz, Torrance, California; and Mr. Roy Danchick, Santa Monica, California. 
In addition, we hired the Utah State University Space Dynamics Laboratory, Logan, Utah, to examine the performance of the Boeing sensor because we needed to determine the effect the higher operating temperature had on the sensor's performance. As agreed with your offices, we did not replicate TRW's assessment of its software using target signals that the Boeing sensor collected during the test. This would have required us to make engineers and computers available to verify TRW's software, format raw target signals for input into the software, develop reference data, and run the data through the software. We did not have these resources available and therefore cannot attest to the accuracy of TRW's discrimination claims. We examined the methodology, key results, and limitations of evaluations completed by Nichols Research Corporation and the Phase One Engineering Team by analyzing Nichols' report on TRW's discrimination software dated December 1997; Nichols' reports on the Extended Kalman Filter dated November 1996 and November 1998; and the Phase One Engineering Team's "Independent Review of TRW Discrimination Techniques" dated January 1999. In addition, we held discussions with the Nichols engineers and Phase One Engineering Team members who conducted the assessments and with officials from the National Missile Defense Joint Program Office. We did not replicate the evaluations conducted by Nichols and the Phase One Engineering Team and cannot attest to the accuracy of their reports. We examined the basis for the Department of Justice's decision not to intervene in the False Claims lawsuit by holding discussions with and examining documents prepared by the Department of Justice, Washington, D.C. We also held discussions with and reviewed documents at the U.S. Army Legal Services Agency, Arlington, Virginia, and the U.S. Army Space and Missile Defense Command, Huntsville, Alabama. We reviewed the National Missile Defense Joint Program Office's efforts to address potential conflicts of interest that an expert panel might have in reviewing the results of Integrated Flight Test 1A by holding discussions with National Missile Defense Joint Program Office officials and with members of the expert panel, known as the Phase One Engineering Team. We also examined the federal regulations and support agreements agreed to by the Federally Funded Research and Development Centers and national laboratory that employed the panel members. Last, as you requested, we discussed alternative methods of establishing an expert panel with the American Physical Society, Ridge, New York; and the National Academy of Sciences' National Research Council, Washington, D.C. Our work was conducted from August 2000 through February 2002 in accordance with generally accepted government auditing standards. The length of time the National Missile Defense Joint Program Office required to release documents to us significantly slowed our review. For example, the program office required approximately 4 months to release key documents such as Nichols' 1997 evaluation of TRW's discrimination software and Nichols' 1996 and 1998 evaluations of the Extended Kalman Filter Feature Extractor. We requested these and other documents on September 14, 2000, and received them on January 9, 2001.
The Department of Defense (DOD) awarded contracts to three companies in 1990 to develop and test exoatmospheric kill vehicles. One of the contractors, Boeing North American, subcontracted with TRW to develop software for the kill vehicle. In 1998, Boeing became the Lead System Integrator for the National Missile Defense Program and chose Raytheon as the primary kill vehicle developer. Boeing and TRW reported that the June 1997 flight test achieved its primary objectives but noted some sensor abnormalities. The project office relied on Boeing to oversee the performance of TRW. Boeing and TRW reported that deployed target objects displayed distinguishable features when observed by an infrared sensor. After considerable debate, the program manager reduced the number of decoys planned for intercept flight tests in response to a recommendation by an independent panel. The Phase One Engineering Team, which was responsible for completing an assessment of TRW's software performance within two months using available data, found that although the software had weaknesses, it was well designed and worked properly, with only some changes needed to increase the robustness of the discrimination function. On the basis of that analysis, team members predicted that the software would perform successfully in a future intercept test if target objects deployed as expected.
SNAP is jointly administered by FNS and the states. FNS pays the full cost of SNAP benefits, shares the states' administrative costs, and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. States administer the program by determining whether households meet the program's eligibility requirements, calculating monthly benefits for qualified households, and issuing benefits to participants through an Electronic Benefits Transfer (EBT) system. As shown in figure 1, program participation has increased sharply from fiscal years 1999 to 2009, and indications are that participation has continued to increase significantly in fiscal year 2010. According to FNS, the downturn in the U.S. economy, coupled with changes in the program's rules and administration, has led to an increase in the number of SNAP participants. Eligibility for SNAP is based primarily on a household's income and assets. To determine a household's eligibility, a caseworker must first determine the household's gross income, which cannot exceed 130 percent of the federal poverty level for that year as determined by the Department of Health and Human Services. A household's net income cannot exceed 100 percent of the poverty level (or about $22,056 annually for a family of four living in the continental United States in fiscal year 2010). Net income is determined by deducting from gross income a portion of expenses such as dependent care costs, medical expenses for elderly individuals, utilities costs, and housing expenses. A household's assets are also considered in determining SNAP eligibility, and the asset rules are complex. There is a fixed limit, adjusted annually for inflation, on the amount of assets a household may own and remain eligible for SNAP. Certain assets are not counted, such as a home and surrounding lot. There are also basic program rules that limit the value of vehicles an applicant can own and still be eligible for the program. Federal regulations require states to make households categorically eligible for SNAP if the household receives certain cash benefits, such as TANF cash assistance or Supplemental Security Income. States must also confer categorical eligibility for certain households receiving, or authorized to receive, certain TANF non-cash services that are funded with more than 50 percent federal TANF or state maintenance of effort (MOE) funds and serve certain TANF purposes. In addition, in certain circumstances, states have the option to confer categorical eligibility using TANF non-cash services funded with less than 50 percent federal TANF or state MOE funds. The intent of categorical eligibility was to increase program access and reduce the administrative burden on state agencies by streamlining the application of means tests for both TANF and SNAP. Improper payments (or payment errors) occur when recipients receive too much or too little in SNAP benefits. FNS and the states share responsibility for implementing an extensive quality control system used to measure the accuracy of SNAP payments and from which state and national error rates are determined. Under FNS's quality control system, the states calculate their payment errors annually by drawing a statistical sample to determine whether participating households received the correct benefit amount. The state's error rate is determined by dividing the dollars paid in error by the state's total issuance of SNAP benefits. 
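To make the income tests and the error-rate calculation just described concrete, the short Python sketch below applies them to hypothetical figures. Only the 130 percent and 100 percent thresholds and the approximate fiscal year 2010 poverty level of $22,056 for a family of four come from the discussion above; the household's income, its deductions, and the error and issuance totals are invented for illustration, and the sketch leaves out the asset test and the benefit calculation.

    # Income tests described above, applied to hypothetical figures.
    POVERTY_LEVEL_FAMILY_OF_4 = 22_056          # approximate annual level, FY 2010, continental U.S.

    gross_income = 27_000                       # hypothetical annual gross income
    deductions = 6_500                          # hypothetical dependent care, housing, and other deductions
    net_income = gross_income - deductions

    passes_gross_test = gross_income <= 1.30 * POVERTY_LEVEL_FAMILY_OF_4   # gross income within 130% of poverty
    passes_net_test = net_income <= 1.00 * POVERTY_LEVEL_FAMILY_OF_4       # net income within 100% of poverty
    print(passes_gross_test, passes_net_test)   # True, True for these illustrative numbers

    # State payment error rate: dollars paid in error divided by total SNAP benefits issued.
    dollars_paid_in_error = 45_000_000          # hypothetical
    total_benefits_issued = 1_000_000_000       # hypothetical
    error_rate = dollars_paid_in_error / total_benefits_issued
    print(f"{error_rate:.2%}")                  # 4.50%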
Once the error rates are final, FNS is required to compare each state's performance with the national error rate and to impose financial penalties or provide financial incentives according to legal specifications. Trafficking occurs when SNAP recipients exchange their benefits with authorized retailers for cash instead of food. Under the EBT system, SNAP recipients receive an EBT card imprinted with their name and a personal account number, and SNAP benefits are automatically credited to the recipients' accounts once a month. In legitimate SNAP transactions, recipients run their EBT card, which works much like a debit card, through an electronic point-of-sale machine at the grocery checkout counter and enter their secret personal identification number to access their SNAP accounts. This authorizes the transfer of SNAP benefits from a federal account to the retailer's account to pay for the eligible food items. The legitimate transaction contrasts with a trafficking transaction in which recipients swipe their EBT card, but instead of buying groceries, they receive a discounted amount of cash and the retailer pockets the difference. FNS has the primary responsibility for authorizing retailers to participate in SNAP. To become an authorized retailer, a store must offer, on a continuing basis, at least three varieties of foods in each of the four staple food categories—meats, poultry, or fish; breads or cereals; vegetables or fruits; and dairy products—or over 50 percent of its sales must be in a staple food group. The store owner submits an application and includes relevant forms of identification such as copies of the owner's Social Security card, driver's license, business license, liquor license, and alien resident card. The FNS field office program specialist then checks the applicant's Social Security number against FNS's database of retailers, the Store Tracking and Redemption System, to see if the applicant has previously been sanctioned in the SNAP program. The application also collects information on the type of business, store hours, number of employees, number of cash registers, the types of staple foods offered, and the estimated annual amount of gross sales and eligible SNAP sales. In addition to approving retailers to participate in the program, FNS has the primary responsibility for monitoring their compliance with requirements and administratively disqualifying those who are found to have trafficked SNAP benefits. FNS headquarters officials collect and monitor EBT transaction data to detect suspicious patterns of transactions by retailers. They then send any leads to FNS program specialists in the field office, who either work the cases themselves or refer them to undercover investigators in the Retailer Investigations Branch, who pursue them by attempting to traffic SNAP benefits for cash. The national payment error rate—the percentage of SNAP benefit dollars overpaid or underpaid to program participants—has declined by about 56 percent over the last 11 years, from 9.86 percent in 1999 to 4.36 percent in 2009, in a time of increasing participation (see figure 1). Of the total $2.19 billion in payment errors in fiscal year 2009, $1.8 billion, or about 82 percent, were overpayments. Overpayments occur when eligible persons are provided more than they are entitled to receive or when ineligible persons are provided benefits. 
Underpayments, which occur when eligible persons are paid less than they are entitled to receive, totaled $412 million, or about 18 percent of dollars paid in error, in fiscal year 2009. The decline in payment error rates has been widespread despite the significant increase in participation. Error rates fell in almost all states, and 36 states reduced their error rates by over 50 percent from fiscal years 1999 to 2009. In addition, 47 states had error rates below 6 percent in 2009; this is an improvement from 1999, when 7 states had error rates below 6 percent. However, payment error rates vary among states. Despite the decrease in many states' error rates, a few states continue to have high payment error rates. State use of simplified reporting options has been shown to contribute to the reduction in the payment error rate. Several options are made available to the states to simplify the application and reporting process, and one such option is simplified reporting. Of the 50 states currently using simplified reporting, 47 have expanded it beyond earned income households, according to a recent FNS report. Once a state has elected to use simplified reporting, eligible households in the state need only report changes occurring between certification and normally scheduled reporting if the changes result in income that exceeds 130 percent of the federal poverty level. This simplified reporting option can reduce a state's error rate by minimizing the number of income changes that must be reported between certifications and thereby reducing errors associated with caseworker failure to act, as well as participant failure to report changes. Despite these simplified reporting options, program eligibility requirements remain complex. This complexity increases the risk that caseworkers will make errors when considering all the factors needed to determine eligibility. Our previous work has shown that the financial eligibility of an applicant can be difficult to verify in means-tested programs, further increasing the risk of payment to an ineligible recipient. For example, caseworkers must verify several types of household assets to determine eligibility and benefit amounts, such as bank accounts, property, and vehicles. While additional efforts to simplify the program may further reduce payment error, they could also reduce FNS's ability to target the program to individual families' needs. Moreover, participant-caused errors, which we earlier reported constitute one-third of the overall national errors, are difficult to prevent. We found that FNS and the states we reviewed have taken many approaches to increasing SNAP payment accuracy, most of which are consistent with internal control practices known to reduce improper payments. Often, several practices are tried simultaneously, making it difficult to determine which have been the most effective. Tracking state performance. FNS staff use Quality Control (QC) data to monitor states' performance over time; conduct annual reviews of state operations; and, where applicable, monitor the states' implementation of corrective action plans. FNS, in turn, requires states to perform management evaluations to monitor whether adequate corrective action plans are in place at local offices to address the causes of persistent errors and deficiencies. In addition, in November 2003, FNS created a Payment Accuracy Branch at the national level to work with FNS regional offices to suggest policy and program changes and to monitor state performance. 
The branch facilitates a National Payment Accuracy Work Group with representatives from each FNS regional office and headquarters who use QC data to review and categorize state performance into one of three tiers. Increased intervention and monitoring approaches are applied when state error rates increase and states are assigned to tier 2 or tier 3. Penalties and incentives. FNS has long focused its attention on states' accountability for error rates through its QC system by assessing financial penalties and providing financial incentives. However, since 2000, USDA leadership has more explicitly established payment accuracy as a program priority. High-level USDA officials visited states with particularly high error rates, and FNS has collected a higher percentage of penalties from states compared with prior years. For example, from fiscal year 1992 to 2000, FNS collected about $800,000 in penalties from states. In the next 5 years, FNS collected more than $20 million from states. In fiscal year 2009, 3 states (Maine, West Virginia, and New Mexico) were notified that they had incurred a financial liability for having a poor payment error rate for at least two consecutive years. An additional 9 states and territories (Connecticut, Maryland, Indiana, Wisconsin, Louisiana, Texas, Iowa, Alaska, and Guam) were found to be in jeopardy of being penalized if their error rates do not improve. Ten states and territories received bonus payments for the best and most improved payment error rates in fiscal year 2009 (Delaware, Florida, Georgia, Guam, Maine, Nebraska, Ohio, South Dakota, Washington, and Wisconsin). Information sharing. FNS also provides and facilitates the exchange of information gleaned from monitoring by training state QC staff, presenting at conferences, publishing best practice guides, supporting the adoption of program simplification options, and providing states policy interpretation and guidance. At the time of our 2005 study, states we reviewed adopted a combination of practices to prevent, minimize, and address payment accuracy problems, such as:
- Increasing the awareness of, and the accountability for, payment error. For example, some states set error rate targets for their local offices and hold staff accountable for payment accuracy.
- Analyzing quality control data to identify causes of common payment errors and developing corrective actions.
- Making automated system changes to prompt workers to obtain complete documentation from clients.
- Developing specialized change units that focus on acting upon reported case changes.
- Verifying the accuracy of benefit payments calculated by state SNAP workers through supervisory and other types of case file reviews.
Despite this progress, the amount of SNAP benefits paid in error is substantial, totaling about $2.2 billion in 2009. This necessitates continued top-level attention from USDA management and continued federal and state commitment to determining the causes of improper payments and taking corrective actions to reduce them. The national rate of SNAP trafficking declined from about 3.8 cents per dollar of benefits redeemed in 1993 to about 1.0 cent per dollar during the years 2002 to 2005, as shown in table 1. However, even at that lower rate, FNS estimates that about $241 million in SNAP benefits were trafficked annually in those years. FNS has not completed an updated estimate of trafficking since 2005. 
Overall, we found that the estimated rate of trafficking at small stores was much higher than the estimated rate for supermarkets and large groceries, which redeem most SNAP benefits. In 2005, the rate of trafficking was an estimated 7.6 cents per dollar in small stores, compared with an estimated 0.2 cents per dollar in large stores. With the implementation of EBT, FNS has supplemented its traditional undercover investigations by the Retailer Investigations Branch with cases developed by analyzing EBT transaction data. The nationwide implementation of EBT, completed in 2004, has given FNS powerful new tools to supplement its traditional undercover investigations of retailers suspected of trafficking SNAP benefits. FNS traditionally sent its investigators into stores numerous times over a period of months to attempt to traffic benefits. However, in 1996 Congress gave FNS the authority to charge retailers with trafficking in cases built on evidence obtained through EBT transaction reports, called "paper cases." A major advantage of paper cases is that they can be prepared relatively quickly and without multiple store visits. These EBT cases now account for more than half of the permanent disqualifications by FNS. Although the number of trafficking disqualifications based on undercover investigations has declined, these investigations continue to play a key role in combating trafficking. However, as FNS's ability to detect trafficking has improved, the number of suspected traffickers investigated by other federal entities, such as the USDA Inspector General and the U.S. Secret Service, has declined, according to data available at the time of our review. These entities have focused more on a smaller number of high-impact investigations. As a result, retailers who traffic are less likely to face criminal penalties or prosecution. In response to our prior recommendation that FNS improve analysis and monitoring, FNS has implemented new technology to improve its ability to detect trafficking and disqualify retailers who traffic, which has contributed to more sophisticated analyses of SNAP transactions and categorization of stores based on risk. Specifically, FNS implemented a revised store classification system to systematically compare similar stores in order to better identify fraudulent transaction activity for investigation. FNS also increased the amount of data available to review and changed its monitoring of transaction data from monthly to daily reviews. FNS also implemented a new tool that assesses each retailer's risk of trafficking. FNS reports that these changes have assisted with early monitoring and identification of violating stores and allocation of its monitoring resources. Consistent with our recommendation that FNS develop a strategy to increase penalties for trafficking, FNS received new authority to impose increased financial penalties for trafficking. The Food, Conservation, and Energy Act of 2008 expanded FNS authority to assess civil money penalties in addition to or in lieu of disqualification. It also provided authority for FNS, in consultation with the Office of the Inspector General, to withhold funds from traffickers during the administrative process if such trafficking is considered a flagrant violation. Regulations to implement this provision are being developed, and FNS expects the proposed rule to be published in July 2012. 
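FNS's store classification and risk-assessment tools are not described in detail in this statement, but the general idea of comparing a store's EBT transaction pattern against that of similar stores can be sketched as follows. The single metric, the peer-group figures, and the flagging threshold below are all hypothetical assumptions made for illustration, not FNS's actual algorithm or data.

    from statistics import mean, stdev

    # Hypothetical monthly metric (here, average EBT transaction amount in dollars)
    # for a peer group of similar small stores, and for the store under review.
    peer_group_avg_transaction = [14.2, 15.8, 13.5, 16.1, 14.9, 15.3, 13.8, 15.0]
    store_avg_transaction = 42.7                 # unusually large average transaction

    def risk_flag(store_value, peer_values, z_threshold=3.0):
        """Flag a store whose metric deviates sharply from its peer group (illustrative rule only)."""
        mu, sigma = mean(peer_values), stdev(peer_values)
        z_score = (store_value - mu) / sigma
        return z_score, z_score > z_threshold

    z, flagged = risk_flag(store_avg_transaction, peer_group_avg_transaction)
    print(round(z, 1), flagged)                  # a large positive z-score flags the store for review

In practice, screening of this kind would look at many transaction characteristics at once and, as described above, would only generate leads for field office specialists or undercover investigators to pursue rather than establish a violation by itself.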
According to FNS, the rule that will address the addition of monetary sanctions to disqualification is targeted for publication in September 2011. Until the policy is implemented, the impact of this change will not be known. Despite the progress FNS has made in combating retailer trafficking, the SNAP program remains vulnerable. Program vulnerabilities we identified include:
- Limited inspection of stores. FNS authorizes some stores with limited food supplies so that low-income participants in areas with few supermarkets have access to food, but may not inspect these stores again for 5 years unless there is some indication of a problem.
- Varied state efforts. Some states actively pursue and disqualify recipients who traffic their benefits, while inaction by other states allows recipients suspected of trafficking to continue the practice. We recommended in our October 2006 report that FNS promote state efforts to pursue recipients suspected of trafficking by revisiting the incentive structure to incorporate additional provisions to encourage states to investigate and take action against recipients who traffic. We also recommended that FNS ensure that field offices report to states those recipients who are suspected of trafficking with disqualified retailers. However, FNS officials told us they have taken few recent steps to increase state efforts to pursue recipients suspected of trafficking, in part because of state resource constraints, but will continue to examine the impact of financial incentives in preparation for the expected upcoming program reauthorization.
States that confer TANF non-cash categorical eligibility use a variety of TANF services to qualify participants for SNAP benefits. According to FNS, as of June 2010, 36 states are using broad-based policies that could make most, if not all, TANF non-cash households categorically eligible for SNAP because the households receive TANF/MOE-funded benefits, such as brochures or information referral services. This is an increase from the 29 states that conferred this type of categorical eligibility at the time of our 2007 report. Other states have narrower policies in place that could make a smaller number of households categorically eligible for SNAP because they receive a TANF/MOE-funded benefit such as child care or counseling. These categorically eligible households do not need to meet SNAP eligibility requirements such as the SNAP asset or gross income tests because their general need has been established by the TANF program. For example, in 35 of the states that confer categorical eligibility for all TANF services, there is no limit on the amount of assets a household may have to be determined eligible, according to an FNS report. In addition, the gross income limit of the TANF program set by these states ranged from 130 to 200 percent of the federal poverty level, according to an FNS report. As a result, households with substantial assets but low income could be deemed eligible for SNAP under these policies. Even though households may be deemed categorically eligible for SNAP, the amount of assistance households are eligible for is determined based on each household's income and other circumstances using the same process used for other SNAP recipients. Some families determined categorically eligible for the program could be found eligible for the minimum benefit. 
However, FNS noted in a recent report that families with incomes above 130 percent of the federal poverty level and high expenses (shelter costs, dependent care expenses, and medical costs) could receive a significant SNAP benefit. Households can be categorically eligible for SNAP even if they receive no TANF-funded service other than a toll-free telephone number or informational brochure. For example, one state reported to FNS that it included information about a pregnancy prevention hotline on the SNAP application to confer categorical eligibility. Other states reported providing households brochures with information about available services, such as domestic violence assistance or marriage classes, to confer categorical eligibility. Receipt of the information on the SNAP application or in the brochures can qualify the household to be categorically eligible for SNAP benefits. However, the amount of the SNAP benefit is still determined in accordance with SNAP rules by the eligibility workers using information on income and expenses. In 2007, we reported that six states may not have been following program regulations because they were not using certain TANF noncash services to confer SNAP categorical eligibility. These services included child care, transportation, and substance abuse services, which may have been funded by more than 50 percent federal TANF or state MOE funds. In addition, some states reported that they did not specifically determine whether an individual needs a specific TANF noncash service before conferring SNAP eligibility. We recommended that FNS provide guidance and technical assistance to states clarifying which TANF noncash services states must use to confer categorical eligibility for SNAP and monitor states' compliance with categorical eligibility requirements. In September 2009, USDA released a memorandum encouraging states to continue promoting noncash categorical eligibility. FNS reported that four of the six states currently are using the required noncash services to confer categorical eligibility. FNS has encouraged states to adopt categorical eligibility to improve program access and simplify the administration of SNAP. According to FNS officials, increased use of categorical eligibility by states has reduced administrative burdens and increased access to SNAP benefits for households that would not otherwise be eligible for the program due to SNAP income or asset limits. Adoption of this policy option can provide needed assistance to low-income families, simplify state policies, reduce the amount of time states must devote to verifying assets, and reduce the potential for errors, according to FNS. FNS also recently encouraged states that have implemented a broad-based categorical eligibility program with an asset limit to exclude refundable tax credits from consideration as assets. In our previous work, we found that many of the states' SNAP officials surveyed believed eliminating TANF non-cash categorical eligibility would decrease participation in SNAP. Many of the states' SNAP officials we surveyed also believed that eliminating TANF non-cash categorical eligibility would increase the SNAP administrative workload and state administrative costs. Common reasons state officials cited for the increase in SNAP administrative workload were an increase in needed verifications, an increase in error rates as required verifications increase, changes to data systems, an increase in the time needed to process applications, and changes to policies and related materials. 
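To put the eligibility mechanics described above in concrete terms, the Python sketch below contrasts the regular SNAP income and asset screens with a broad-based categorical eligibility policy. The household figures, the asset limit placeholder, and the specific state gross income limit are illustrative assumptions (only the 130 percent regular gross income test and the 130 to 200 percent range of state limits come from the discussion above), and in either case the benefit amount would still be computed from the household's income and expenses.

    # Illustrative comparison of the regular SNAP screens with a broad-based
    # categorical eligibility policy; limits and household figures are hypothetical.
    POVERTY_LEVEL = 22_056                      # approximate FY 2010 level, family of four

    household = {"gross_income": 30_000, "countable_assets": 8_000}

    def regular_screens(h, asset_limit=2_000):
        # Regular route: gross income within 130 percent of poverty and assets within a fixed
        # limit (the asset_limit value here is an assumed placeholder, not the statutory figure).
        return h["gross_income"] <= 1.30 * POVERTY_LEVEL and h["countable_assets"] <= asset_limit

    def broad_based_categorical(h, state_gross_limit_pct=2.00):
        # Broad-based route: no asset test, and a state-set gross income limit that, per the
        # discussion above, ranged from 130 to 200 percent of poverty (200 percent assumed here).
        return h["gross_income"] <= state_gross_limit_pct * POVERTY_LEVEL

    print(regular_screens(household))           # False: income and assets exceed the regular screens
    print(broad_based_categorical(household))   # True: within the assumed 200 percent state limit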
While FNS and the states believe categorical eligibility has improved program access and payment accuracy, the extent of its impact on access and program integrity is unclear. Over the past few years, the size of the Supplemental Nutrition Assistance Program has grown substantially, both in terms of the number of people served and the amount paid out in benefits, at a time when the slow pace of the economic recovery has left many families facing extended hardship. At the same time, due largely to the efforts of FNS working with the states, payment errors have declined and mechanisms for detecting and reducing trafficking have improved. However, little is known about the extent to which increased use of categorical eligibility has affected the integrity of the program. Further, improper payments in the program continue to exceed $2 billion and retailer fraud remains a serious concern, highlighting the importance of continued vigilance in ensuring that improvements in program access are appropriately balanced with efforts to maintain program integrity. As current fiscal stress and looming deficits continue to limit the amount of assistance available to needy families, it is more important than ever that scarce federal resources be targeted to those who are most in need and that the federal government ensure that every federal dollar is spent as intended. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or members of the Subcommittee may have. For future contacts regarding this testimony, please contact Kay Brown at (202) 512-7215 or e-mail brownke@gao.gov. Key contributors to this testimony were Kathy Larin, Cathy Roark, and Alex Galuten.
Domestic Food Assistance: Complex System Benefits Millions, but Additional Efforts Could Address Potential Inefficiency and Overlap among Smaller Programs. GAO-10-346. Washington, D.C.: April 15, 2010.
Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009.
Food Stamp Program: FNS Could Improve Guidance and Monitoring to Help Ensure Appropriate Use of Noncash Categorical Eligibility. GAO-07-465. Washington, D.C.: March 28, 2007.
Food Stamp Program: Payment Errors and Trafficking Have Declined despite Increased Program Participation. GAO-07-422T. Washington, D.C.: January 31, 2007.
Food Stamp Trafficking: FNS Could Enhance Program Integrity by Better Targeting Stores Likely to Traffic and Increasing Penalties. GAO-07-53. Washington, D.C.: October 13, 2006.
Improper Payments: Federal and State Coordination Needed to Report National Improper Payment Estimates on Federal Programs. GAO-06-347. Washington, D.C.: April 14, 2006.
Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005.
Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. Department of Agriculture's (USDA) Supplemental Nutrition Assistance Program (SNAP) is intended to help low-income individuals and families obtain a better diet by supplementing their income with benefits to purchase food. USDA's Food and Nutrition Service (FNS) and the states jointly implement SNAP. Participation in the program has risen steadily over the last decade to an all-time high of more than 33 million in fiscal year 2009, providing critical assistance to families in need. This testimony discusses GAO's past work on three issues related to ensuring integrity of the program: (1) improper payments to SNAP participants, (2) trafficking of SNAP benefits, and (3) categorical eligibility for certain individuals or households. This testimony is based on prior GAO reports on categorical eligibility (GAO-07-465), payment errors (GAO-05-245), and food stamp trafficking (GAO-07-53), developed through data analyses, case file reviews, site visits, interviews with officials, and a 50-state survey. GAO also updated data where available and collected information on recent USDA actions and policy changes. The national payment error rate reported for SNAP, which combines states' overpayments and underpayments to program participants, has declined by 56 percent from 1999 to 2009, from 9.86 percent to a record low of 4.36 percent. This reduction is due, in part, to options made available to states that simplified certain program rules. In addition, FNS and the states GAO reviewed have taken several steps to improve SNAP payment accuracy that are consistent with internal control practices known to reduce improper payments, such as providing financial incentives and penalties based on performance. Despite this progress, the amount of SNAP benefits paid in error is substantial, totaling about $2.2 billion in 2009 and necessitating continued top-level attention and commitment to determining the causes of improper payments and taking corrective actions to reduce them. FNS estimates indicate that the national rate of food stamp trafficking declined from about 3.8 cents per dollar of benefits redeemed in 1993 to about 1.0 cent per dollar during the years 2002 to 2005 but that trafficking occurs more frequently in smaller stores. FNS has taken advantage of electronic benefit transfer to reduce fraud, and in response to prior GAO recommendations, has implemented new technology and categorized stores based on risk to improve its ability to detect trafficking and disqualify retailers who traffic. FNS also received authority to impose increased financial penalties for trafficking as recommended; however, it has not yet assessed higher penalties because implementing regulations are not yet finalized. FNS is considering additional steps to encourage states to pursue recipients suspected of trafficking, but limited state resources are a constraint. Categorically eligible households do not need to meet SNAP eligibility requirements because their need has been established under the states' Temporary Assistance for Needy Families (TANF) program. As of June 2010, 36 states have opted to provide categorical eligibility for SNAP to any household found eligible for a service funded through TANF and, in 35 states, there is no limit on the amount of assets certain households may have to be determined eligible, according to FNS. Households can be categorically eligible for SNAP even if they receive no TANF funded service other than a toll-free telephone number or informational brochure.
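The 56 percent figure cited above follows directly from the two reported error rates; a minimal sketch of the arithmetic (the only inputs are the 1999 and 2009 rates reported above) is shown below.

```python
# Reported SNAP national payment error rates, in percent of benefits issued.
rate_1999 = 9.86
rate_2009 = 4.36

# Relative decline between the two years.
decline = (rate_1999 - rate_2009) / rate_1999
print(f"Decline in the error rate: {decline:.1%}")  # about 55.8 percent, i.e., roughly 56 percent
```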
However, the amount of assistance eligible households receive is determined using the same process used for other SNAP recipients. According to FNS officials, increased use of categorical eligibility by states has reduced administrative burdens and increased access to SNAP benefits to households who would not otherwise be eligible due to asset or income limits. However, little is known about the extent of its impact on increased access or program integrity. SNAP has played a key role in assisting families facing hardship during the economic crisis, but given fiscal constraints and program growth, it is more important than ever to understand the impact of policy changes, and balance improvements in access with efforts to ensure accountability. FNS generally agreed with GAO's prior recommendations to address SNAP trafficking and categorical eligibility issues and has taken action in response to most of them.
An effective communications infrastructure, including voice and data networks, is essential to our ability as a nation to maintain public health and safety during a catastrophic natural disaster, such as a hurricane, or a man-made event, such as a terrorist attack. Technological advances in these networks have led to a convergence of the previously separate networks used to transmit voice and data communications. These new networks—next generation networks—are capable of transmitting both voice and data on a single network and eventually will be the primary means for voice and data transmissions. Converged voice and data networks have many benefits. For example, these networks use technology based on packet switching, which allows greater resiliency. Packet switching involves breaking a message into packets, or small chunks of data, and transferring the packets across a network to a destination where they are recombined. The resiliency of a packet-switched network is due to packets' ability to be transmitted over multiple routes, avoiding areas that may be congested or damaged. Conversely, conventional voice services use traditional telephone networks, which are based on circuit switching technology. Instead of breaking a message up into packets, circuit switching uses a dedicated channel to transmit the voice communication. Once all of the channels are occupied, no further connections can be made until a channel becomes available. Figure 1 shows a comparison between packet switching and circuit switching. Converged networks, however, also pose certain technical challenges. For example, current national programs to provide priority voice services in an emergency are based primarily on voice or traditional telephone networks, which are circuit-switched. Implementing these programs on packet-switched networks is difficult because there is no uniformly accepted standard for providing priority service on a packet-switched network. Also, the Internet-based protocols used on packet-switched networks have vulnerabilities, and in certain cases packet-switched networks may be unreliable for emergency communications due to delays in transmission and loss of packets. Federal policies call for the protection of essential public and private infrastructures, such as the electric power grid, chemical plants, and water treatment facilities that control the vital functions critical to ensuring our national economic security and public health and safety. These infrastructures, called critical infrastructures, also include communications infrastructure, such as voice and data communication networks. Federal policies also designate certain federal agencies as lead points of contact for each key critical infrastructure sector and assign responsibility for infrastructure protection activities and for coordination with other relevant federal agencies, state and local governments, and the private sector. (See app. II for a description of the sectors and the designated federal agencies.) DHS is the lead federal agency for both the telecommunications and information technology (IT) sectors. DHS is also designated as the focal point for the security of cyberspace—including analysis, warning, information sharing, vulnerability reduction, mitigation, and recovery efforts for public and private critical infrastructure information systems. As part of its responsibilities, DHS created the National Infrastructure Protection Plan to coordinate the protection efforts of critical infrastructures.
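To make the earlier packet-switching description concrete, the sketch below splits a message into numbered packets, delivers them out of order (as independently routed packets may arrive), and reassembles them at the destination. The message text, packet size, and function names are illustrative assumptions, not drawn from any DHS or NCS system.

```python
import random

def to_packets(message: str, size: int = 8):
    """Break a message into numbered packets of at most `size` characters."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Recombine packets at the destination, regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

message = "Emergency notification: shelter locations updated."
packets = to_packets(message)
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == message
```

Circuit switching, by contrast, would hold a dedicated channel open for the entire exchange, which is why a congested or damaged route can block a call outright rather than simply being routed around.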
The plan recognizes the Internet as a key resource composed of assets within both the IT and the telecommunications sectors. It notes that the Internet is used by all critical infrastructure sectors to varying degrees and that it provides information and communications to meet the needs of businesses, government, and the other sectors. The National Infrastructure Protection Plan requires lead federal agencies for the critical infrastructure sectors to work with public and private sector stakeholders to develop sector-specific plans that address how the sectors’ stakeholders will improve the security of their assets, systems, networks, and functions. We recently reported on how comprehensively these sector-specific plans address the cyber security aspects of their sectors, including the plans for the IT and telecommunications sectors. We found that the plans varied in how sector stakeholders identified their cyber risks and developed plans to identify, respond to, and recover from a cyber attack. Accordingly, we recommended specific measures to help DHS strengthen the development, uniformity, and use of the plans. Federal policies provide DHS the lead responsibility for facilitating a public-private response to disruptions to major communications infrastructure, such as voice and data networks. Within DHS, the responsibility is assigned to NCSD and NCS in the Office of Cyber Security and Communications. NCSD has responsibility for the security of data and applications and executes this duty via its operations center—US-CERT— while NCS has responsibility for the communications infrastructure that carries data and applications and carries out its duty through its coordination center, NCC, and its operations center, NCC Watch. In June 2003, DHS created NCSD to serve as the national focal point for addressing cyber security issues. NCSD’s mission is to secure cyberspace and America’s cyber assets in cooperation with public, private, and international entities. The division carries out its mission via its US-CERT operations center, which is responsible for, among other things, analyzing and addressing cyber threats and vulnerabilities and disseminating cyber- threat warning information. In the event of a security issue or disruption affecting data and applications, US-CERT is to facilitate coordination of recovery activities with the network and security operations centers of owners and operators of these networks and with government officials (e.g., incident response teams) responsible for protecting government networks. NCSD is the government lead on a public/private partnership supporting US-CERT and serves as the lead for the federal government’s cyber incident response through the National Cyber Response Coordination Group. This group is the principal federal interagency mechanism for coordinating the preparation for and response to significant cyber incidents, such as a major Internet disruption, and includes members from 19 federal departments and agencies. NCS is responsible for ensuring that communications infrastructure used by the federal government is available under all conditions—ranging from normal situations to national emergencies and international crises. The system does this through several activities, including a program that gives calling priority to federal executives, first responders, and other key officials in times of emergency. 
NCS was established by presidential direction in August 1963 in response to voice communication failures associated with the Cuban Missile Crisis. Its role was further clarified through an executive order issued in April 1984 that established the Secretary of Defense as the executive agent for NCS. In 2003, it was transferred to the responsibility of the Secretary of DHS. NCS is composed of members from 24 federal departments and agencies. Although it originally focused on "traditional" voice services via common carriers, NCS has now taken a larger role in Internet-related issues due to the convergence of voice and data networks. For example, it now helps manage issues related to disruptions of the Internet backbone (e.g., high-capacity data routes). NCC, which serves as the coordination component of NCS, is the point of contact with the private sector on issues that could affect the availability of the communications infrastructure. According to DHS, the center includes 47 members from major telecommunications organizations, such as Verizon and AT&T. These members represent 95 percent of the wireless and wireline telecommunications service providers and 90 percent of the Internet service provider backbone networks. During a major disruption in telecommunications services, NCC Watch is to coordinate with NCC members in an effort to restore service as soon as possible. In the event of a major Internet disruption, it is to assist recovery efforts through its partnerships and collaboration with telecommunications and Internet-related companies. Using these partnerships, NCC has also created several programs that, in times of emergency, provide calling priority to enable first responders and key officials at all levels to communicate using both landline phones and cellular devices. Since February 2002, we, along with federal government and private sector experts, have examined the convergence of voice and data networks into next generation networks. These experts have recommended that federal agencies such as DHS adopt an integrated approach—including integrating their organizations—to planning for and responding to network disruptions. In February 2002, before the formation of DHS, a White House advisory group recommended that the federal government develop such an approach. Specifically, it found that timely information sharing was essential to effective incident response, that existing coordination within the government was ineffective and needed senior management attention, and that NCS should broaden its capabilities to include more IT industry expertise. In March 2006, the National Security Telecommunications Advisory Committee, a presidential advisory group, also recommended that DHS develop an integrated approach to incident response on next generation networks and update priority communications programs to improve existing recovery abilities. The committee recommended that DHS establish an inclusive and effective incident response capability that includes functions of the NCC and a broadened membership, including firms in the IT sector. The committee also stated that most new communications providers are not members of the NCC, were not easily accessible during an incident, and had not yet developed close working relationships with other industry stakeholders and the federal government.
In June 2006, we recommended that DHS improve its approach to dealing with disruptions by examining the organizational structure of NCSD and NCS in light of the convergence of voice and data networks. We found that DHS had overlapping responsibilities for incident response, which affected the ability of DHS to prioritize and coordinate incident response activities. Furthermore, in December 2006, the Telecommunications and Information Technology Information Sharing and Analysis Centers, composed of representatives of private telecommunications and IT companies, sent a letter to DHS asking that the department develop a plan to integrate critical infrastructure protection efforts, including planning for and responding to disruptions. In a January 2007 written response signed by the Assistant Secretary for Cyber Security and Communications, DHS agreed with the importance of this effort and stated that developing a road map for integration was a priority. Moreover, in April 2007, the two information sharing and analysis centers established a task force (referred to as a "tiger team" by DHS) with DHS that, among other things, identified overlapping responsibilities between NCC Watch and US-CERT in the following areas: developing and disseminating warnings, advisories, and other urgent communications; evaluating the scope of an event; deploying response teams during an event; integrating cyber, communications, and emergency response exercises into operational plans and participation; and managing relationships with others, such as industry partners. Consequently, the tiger team task force recommended merging the two centers to establish an integrated operations center and further recommended that DHS adopt a three-step approach to integration of the centers. The approach includes (1) moving NCC Watch to office space physically adjacent to US-CERT, (2) developing an integrated operations center by merging US-CERT and NCC Watch, and (3) inviting private sector critical infrastructure officials to join this new center. In addition to these three steps, the task force also recommended specific actions to be taken in implementing them. For example, in developing an integrated operations center by merging NCC Watch and US-CERT, the task force recommended, among other things, that DHS (1) appoint a project manager to lead this effort; (2) develop policies and procedures that integrate operations and address overlapping responsibilities, including how the new center is to respond in an integrated manner to threats and incidents; and (3) establish performance measures to monitor progress. In addition, with regard to involving key private sector critical infrastructure officials in the new center, the task force recommended that the department also appoint a project manager to lead this effort. This effort would include seeking participation of appropriate private sector officials, identifying any potential legal issues related to having these officials serve in the new center, and developing measures to monitor progress. In September 2007, DHS approved the report, accepting the recommendations and adopting the three-step approach. DHS has taken the first of three steps toward integrating NCSD and NCS by moving the two centers, NCC Watch and US-CERT, to adjacent office space in November 2007. This close proximity allows the approximately 41 coordination center and 95 readiness team analysts to, among other things, readily collaborate on planned and ongoing activities.
In addition, the centers have jointly acquired common software tools to identify and share physical, telecommunications, and cyber information related to performing their missions. For example, the centers use one of the tools to develop a joint "morning report" specifying their respective security issues and problems, which is used by the analysts in coordinating responses to any resulting disruptions. While DHS has completed the first step, it has yet to implement the remaining two steps and supporting actions. Specifically, the department has not organizationally merged or integrated the operations centers or completed any of the supporting actions. For example, the department has not hired a project manager, developed common operating procedures, or established progress measures. In addition, according to DHS officials, they have no efforts planned or underway to implement this step and associated actions. With regard to inviting key private sector officials to participate in the proposed joint center, the department has not accomplished this step and supporting actions either. For example, it has not hired a project manager or sought participation of appropriate private sector officials to work at the new center. DHS officials told us they also have no efforts planned or underway to implement this step and its supporting actions. A key factor contributing to DHS's lack of progress in implementing these steps is that completing the integration is not a top department priority. Instead, DHS officials stated that their efforts have been focused on other initiatives, most notably the President's recently announced cyber initiative, which is a federal governmentwide effort to manage the risks associated with the Internet's nonsecure external connections. Officials from DHS's Office of Cyber Security and Communications stated that they are in the process of drafting a strategic plan to provide overall direction for the activities of NCS and NCSD, including completing the integration of the centers. However, the plan is in draft and has been so since mid-2007. In addition, DHS officials could not provide a date for when it would be finalized. Consequently, the department does not have a strategic plan or related guidance that provides overall direction in this area and has not developed specific tasks and milestones for achieving the remaining two integration steps. Until DHS completes the integration of these two centers, it risks being unable to efficiently plan for and respond to disruptions to the communications infrastructure, including voice and data networks, and the information traveling on these networks, increasing the probability that communications will be unavailable or limited in times of need. While DHS has taken initial steps toward integrating the key centers that plan for and respond to disruptions to the communications infrastructure, including voice and data networks, and the data and applications on these networks, these offices are still not fully integrated as envisioned. Consequently, the risks associated with not having a fully integrated response to disruptions to the communications infrastructure remain. Effectively mitigating these risks will require swift completion of the integration. Doing so will also require strong leadership to make the integration effort a department priority and to manage it accordingly, including completing the strategic plan and defining remaining integration tasks and milestones.
To do less will continue to expose the nation's communications networks to the risk of an inadequate response to an incident. We are making two recommendations to the Secretary of Homeland Security to direct the Assistant Secretary for Cyber Security and Communications to (1) establish milestones for completing the development and implementation of the strategic plan for NCSD and NCS, and (2) define specific tasks and associated milestones for establishing the integrated operations center through merging NCC Watch and US-CERT and inviting and engaging key private sector critical infrastructure officials from additional sectors to participate in the operations of the new integrated center. In written comments on a draft of this report (see app. III), signed by the Acting Director of DHS's Departmental Liaison Office, the department concurred with our first recommendation and stated it is taking steps to implement it. Specifically, the department said that as part of its effort to develop and implement a strategic plan, it intends to take into consideration, among other things, the recommendations of the various expert groups that have studied issues confronting DHS in this area and the lessons learned from collocating the two centers. Further, the department stated that this strategic planning also is to provide for integrating the centers' existing overlapping functions with the aim of increasing mission effectiveness. With regard to our second recommendation, DHS stated that, while it supports further integration of overlapping functions, it does not support organizationally merging the centers at this point and added that the lack of a merger will not impact its ability to respond to incidents. We do not agree. To the contrary, there is strong evidence that shows that DHS's ability to respond is negatively impacted by the use of separate centers, rather than a single integrated and merged entity. Specifically, our past work has shown that overlapping responsibilities for incident response have adversely affected DHS's ability to prioritize and coordinate incident response activities. For example, private-sector firms have reported that in responding to a critical incident, DHS made time-consuming and duplicative requests for information without identifying how this information would be beneficial in helping respond to the event. In addition, the DHS-commissioned expert task force on the subject recently reported that without an organizationally integrated center, the department will not have a comprehensive operating picture of the nation's cyber and communications infrastructure and thus not be able to effectively implement activities necessary to prepare for, protect, respond to, and recover this infrastructure. Further, our interviews with private-sector cyber and communications infrastructure executives performed as part of this engagement found that they also favor a merged organization that includes broad industry participation. This evidence calls for DHS to take a closer look at the issue of whether to merge the centers. DHS also commented on the report's description of the roles and responsibilities of NCC and US-CERT. Specifically, DHS noted that our original characterization of NCC as dealing with voice systems and US-CERT with data systems was not totally accurate.
Instead, DHS offered that a more accurate distinction would be that NCC deals with communication infrastructure, including voice and data networks, and US-CERT deals with the security of systems and data using the networks, which DHS commonly refers to as cyber situational awareness and response. We agree with this comment and have incorporated it in the report where appropriate. In addition to its written response, the department also provided technical comments that we have incorporated in the report where appropriate. We will send copies of this report to interested congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objective was to determine the status of Department of Homeland Security (DHS) efforts to integrate the activities of its National Cyber Security Division (NCSD) and National Communications System (NCS) in preparing for and responding to disruptions in converged voice and data networks. To accomplish this, we first analyzed pertinent laws, policies, and related DHS documentation (e.g., charters and mission statements) showing the responsibilities of NCSD and NCS, particularly with regard to the increasing convergence of voice and data networks. We also analyzed key studies on DHS’s approach to managing convergence. We did this to identify key findings and recommendations pertinent to our objective. In particular, we focused on the Industry-Government Tiger Team Report and Recommendations for a Cyber Security and Communications Joint Operations Center, which recommended establishing an integrated operations center. DHS adopted the recommendations as part of its three- step approach to establish such a capability by (1) moving the National Coordination Center (NCC) Watch to office space physically adjacent to the US Computer Emergency Readiness Team (US-CERT), (2) developing an integrated operations center by merging NCC Watch and US-CERT, and (3) inviting private sector critical infrastructure officials to participate in this new center. We also interviewed DHS and industry officials who served on the tiger team task force and developed the report findings and recommendations. To determine the status of DHS’s efforts to integrate the centers, we analyzed department progress against the three steps specified in DHS’s approach. We also obtained and analyzed plans and related documentation from DHS on its status in establishing an integrated operations center capability. In particular, we assessed department plans and related documentation on the status of collocating and merging the NCS and NCSD operations centers. In addition, we analyzed documentation on DHS’s status in inviting key private sector infrastructure officials to join the operations of the new center. We also interviewed relevant officials in these organizations, including the managers of the National Coordination Center and the U.S. Computer Emergency Readiness Team, the Director of NCS, and the Acting Director of NCSD, to get their perspectives and to validate our understanding of their efforts to date. 
We also interviewed private sector officials—including the Chair of the Communications Information Sharing and Analysis Center and the President and Vice President of the IT Information Sharing and Analysis Center—to obtain their perspectives on DHS’s progress in addressing convergence, including establishing the integrated center and to determine whether they had received DHS invitations to participate in the operation of the integrated center. Next, to identify gaps, we compared the state of DHS’s progress against the task force recommendations adopted by DHS as part of its three step approach to integration. When gaps were identified, we also interviewed responsible DHS officials to determine any causes and their impact. We conducted this performance audit in the Washington, D.C. metropolitan area from September 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Gary Mountjoy (Assistant Director), Scott Borre, Camille Chaires, Neil Doherty, Vijay D’Souza, Nancy Glover, Lee McCracken, and Jeffrey Woodward made key contributions to this report.
Technological advances have led to an increasing convergence of previously separate networks used to transmit voice and data communications. While the benefits of this convergence are enormous, such interconnectivity also poses significant challenges to our nation's ability to respond to major disruptions. Two operations centers--managed by the Department of Homeland Security's (DHS) National Communications System and National Cyber Security Division--plan for and monitor disruptions on voice and data networks. In September 2007, a DHS expert task force made three recommendations toward establishing an integrated operations center that the department agreed to adopt. To determine the status of efforts to establish an integrated center, GAO reviewed documentation, interviewed relevant DHS and private sector officials, and reviewed laws and policies to identify DHS's responsibilities in addressing convergence. DHS has taken the first of three steps toward integrating its centers that are responsible for planning for, monitoring, and responding to disruptions to the communications infrastructure, including voice and data networks, and the security of data and applications that use these networks. Specifically, in November 2007, it moved the operations center for communications infrastructure (NCC Watch) to office space adjacent to the center for data and applications (US-CERT). This close proximity allows the approximately 41 coordination center and 95 readiness team analysts to, among other things, readily collaborate on planned and ongoing activities. In addition, the centers have jointly acquired common software tools to identify and share physical, telecommunications, and cyber information related to performing their missions. For example, the centers use one of the tools to develop a joint "morning report" specifying their respective network security issues and problems, which is used by the analysts in coordinating responses to any resulting disruptions. While DHS has completed the first integration step, it has yet to implement the remaining two steps. Specifically, although called for in the task force's recommendations, the department has not organizationally merged the two centers or invited key private sector critical infrastructure officials to participate in the planning, monitoring, and other activities of the proposed joint operations center. A key factor contributing to DHS's lack of progress in implementing the latter two steps is that completing the integration has not been a top DHS priority. Instead, DHS officials stated that their efforts have been focused on other initiatives, most notably the President's recently announced cyber initiative, which is a federal governmentwide effort to manage the risks associated with the Internet's nonsecure external connections. Nevertheless, DHS officials stated that they are in the process of drafting a strategic plan to provide overall direction for the activities of the National Communications System and the National Cyber Security Division. However, the plan is in draft and has been so since mid-2007. In addition, DHS officials could not provide a date for when it would be finalized. Consequently, the department does not have a strategic plan or related guidance that provides overall direction in this area and has not developed specific tasks and milestones for achieving the two remaining integration steps. 
Until DHS completes the integration of the two centers, it risks being unable to efficiently plan for and respond to disruptions to communications infrastructure and the data and applications that travel on this infrastructure, increasing the probability that communications will be unavailable or limited in times of need.
Over the last 20 years, both Congress and the executive branch have taken actions to improve federal customer service. In 1993, the Government Performance and Results Act (GPRA) was enacted to, among other things, improve the effectiveness and efficiency of federal programs by establishing a system to set goals for program performance and to measure results. GPRA was also intended to address several broad purposes, including promoting a new focus on results, service quality, and customer satisfaction. Building on GPRA, Executive Order 12862, Setting Customer Service Standards, was issued on September 11, 1993. The order stated that all executive departments and agencies that "provide significant services directly to the public shall provide those services in a manner that seeks to meet the customer service standard established" which is "equal to the best in business." It also stated that the departments and agencies shall take a number of actions related to this requirement. On March 22, 1995, a presidential memorandum on improving customer service was issued, which stated that "[f]or the first time, the Federal Government's customers have been told what they have a right to expect when they ask for service." The memorandum further stated that the government must be customer-driven and customer-focused and clarified expectations regarding agency actions, standards, and measurements. To mark the occasion of the fifth anniversary of President Clinton's reinventing government initiative, another presidential memorandum was issued on March 3, 1998, which called for efforts to "engage customers in conversations about further improving Government service." In 2010, we assessed how federal agencies were setting customer service standards, measuring results, reporting those results, and using them to improve service. To do this work, we conducted a survey, based on the requirements of Executive Order 12862 and the related memorandums, of 13 services provided by 12 federal agencies. In addition, we examined steps the Office of Management and Budget (OMB) had taken to facilitate federal agency use of tools and practices to improve customer service. We found that although all of the services in our review had customer service standards, not all were made available in a way that would be easy for customers to find and access, or in some cases, the standards were not made available at all. In addition, we found that some agency officials thought the requirements of the Paperwork Reduction Act and its clearance process made obtaining customer input difficult. Since 2010, an additional law has been enacted and an executive order has been issued that affect federal agencies and how they provide customer service. The GPRA Modernization Act of 2010 (GPRAMA) significantly enhanced the requirements of GPRA, requiring agencies to develop annual performance plans that include performance goals for an agency's program activities and accompanying performance measures. Under GPRAMA, these performance goals should be in a quantifiable and measurable form to define the level of performance to be achieved for program activities each year. On April 27, 2011, Executive Order 13571, Streamlining Service Delivery and Improving Customer Service, was issued to strengthen customer service and require agencies to develop and publish a customer service plan, in consultation with OMB. On June 13, 2011, OMB issued guidance to agencies to assist in implementing the activities outlined in Executive Order 13571.
As required by GPRAMA, in March 2014 OMB announced the creation of a new set of cross-agency priority (CAP) goals in the fiscal year 2015 budget, which included customer service as a CAP goal to further build upon the progress being made by individual agencies. According to OMB officials, the agency is committed to improving customer service government-wide and to getting agency leadership to focus on the issue. To that end, OMB identified a number of actions that support improved customer service (see text box).
Cross-Agency Priority Goal: Customer Service
To build upon the progress being made by individual agencies, the administration is taking action to deliver improved customer service across the federal enterprise. To accomplish this goal, the administration will streamline transactions, develop standards for high-impact services, and utilize technology to improve the customer experience.
According to OMB, CAP goals address the longstanding challenge of solving problems that are government-wide and require active collaboration between multiple agencies. To establish these goals, OMB solicited nominations from federal agencies and several congressional committees. In addition, OMB identified Smarter IT Delivery as a CAP goal with the purpose of improving outcomes and customer satisfaction with federal services through smarter IT delivery and stronger agency accountability for success. OMB also issued guidance to 15 departments and agencies on addressing customer service in their fiscal year 2016 budget submissions; these 15 departments and agencies include the 5 departments in our review. According to the guidance, each department is to highlight a limited number of key activities, provide the requested funding levels for the activities and describe how the requests were informed by citizen feedback, and report on the activities' projected contribution to improving the department's mission and outcomes. Table 2 summarizes selected legislation, executive orders, and memorandums affecting federal customer service. Customer service standards (standards) should inform customers as to what they have a right to expect when they request services. The customer service executive orders, memorandums, and guidance do not include strict guidelines on how standards should be structured, allowing agencies to develop their standards based in part on their particular needs and mission. As a result, each of the agencies in our review had standards that varied in the amount and type of information included, reflecting differences in needs and mission. Because flexibility exists in how agencies create standards—rooted in the purposes of GPRA and GPRAMA, the customer service executive orders, memorandums, and guidance—we identified key elements of effective customer service standards that would allow agencies to better serve the needs of their customers by providing greater accountability, oversight, and transparency. Such key elements are themselves based on the purposes and some requirements of GPRA, GPRAMA, and executive orders: to set performance targets or goals, measure performance against the set targets or goals, and communicate such information to customers. Specifically, we assessed each agency's standards against the following key elements: whether the standards (1) include targets or goals for performance; (2) include performance measures; and (3) are made easily publicly available. Although standards may vary from agency to agency based on need and mission, each agency's standards should include the key elements in order to improve customer service moving forward.
Performance targets or goals. Executive Order 13571 and OMB guidance call for agencies to establish performance goals for customer service required by the GPRA. The law defines a performance goal as a "target level of performance expressed as a tangible, measurable objective, against which actual achievement can be compared, including a goal expressed as a quantitative standard, value, or rate." OMB provided further guidance that standards should, where possible, include targets for speed, quality/accuracy, and satisfaction. Although Executive Order 13571 and OMB guidance state performance goals should be established "where appropriate" and targets "where possible," without clearly defined goals for customer service, agencies are unable to effectively communicate their service intentions to customers. As a result, we identified performance goals as a key element for effective customer service standards. Standards that include performance targets or goals allow agencies to define, among other things, the level, quality, and timeliness of the service they provide to their customers. Performance measures. We have previously reported on the importance of agencies including customer satisfaction as a measure (see, for example, GAO-09-228 and GAO-07-769). Among other things, GPRAMA requires that each agency performance plan should "establish a balanced set of performance indicators to be used in measuring or assessing progress toward each performance goal, including, as appropriate, customer service, efficiency, output, and outcome indicators." We have found that if agencies do not use performance measures and performance information to track progress toward goals, they may be at risk of failing to achieve their goals. We have also found that there has been little improvement in managers' reported use of performance information or practices that could help promote this use (see GAO-13-518). For example, since 1997 we have surveyed federal managers to determine the extent to which agencies are using performance information to improve agency results. In 1997, when we first administered the survey, approximately 32 percent of federal managers government-wide reported to a great or very great extent that they have performance measures that tell them whether or not they are satisfying their customers. In 2013, 16 years after the first survey, this measure increased by approximately 8 percentage points to 40 percent, a statistically significant difference but still less than a majority of managers reporting positively on having such measures. Easily publicly available. According to Executive Order 12862, agencies are to post their standards, while Executive Order 13571 requires agencies to post customer service metrics and best practices online. Most recently, OMB provided additional guidance that the standards be "easily accessible at the point of service and on the Internet" (OMB Memorandum M-11-24). Without easily available information, customers may not know what to expect, when to expect it, or from whom. We conducted an Internet search of each agency's website to determine whether or not customer service standards were easily available.
None of the agencies in our review had standards that included all of the key elements (see table 3). Without all of the key elements present in their standards, agencies may not be able to inform customers, provide accountability, measure progress, or improve customer service. U.S. Customs and Border Protection (CBP) is the unified border agency within the Department of Homeland Security (DHS) charged with the management, control, and protection of our nation’s borders at and between the official ports of entry. CBP’s primary mission is to protect the American public against terrorists and weapons of terror from entering the country while fostering the nation’s economic security. CBP’s border security inspection of individuals processes travelers that present themselves for entry into the United States. According to CBP, over 362 million passengers, pedestrians, and crew were inspected in fiscal year 2013 in 328 distinct ports of entry. CBP’s border security inspection standards are made easily available to the public; however, its standards do not include performance targets or goals and, according to CBP officials, CBP does not measure performance against those standards (see table 4). CBP’s standards, its “Pledge to Travelers” are qualitative in nature, for example, “we pledge to treat you with courtesy, dignity, and respect,” and “we pledge to provide reasonable assistance due to delay or disability” (see text box). Customs and Border Protection’s (CBP) “Pledge to Travelers” for Border Security Inspections of Individuals We pledge to cordially greet and welcome you to the United States. We pledge to treat you with courtesy, dignity, and respect. We pledge to explain the CBP process to you. We pledge to have a supervisor listen to your comments. We pledge to accept and respond to your comments in written, verbal, or electronic form. We pledge to provide reasonable assistance due to delay or disability. According to CBP officials, the standards outline CBP’s service in ideal circumstances for law-abiding travelers; CBP officials stated that theirs is a law enforcement agency and as a result, although the pledge applies to all travelers, different actions are taken for those attempting to break the law. The standards did not include descriptions or otherwise define what “courtesy, dignity, and respect” or “reasonable assistance” meant to CBP. In addition, CBP standards do not include a performance goal or target and CBP officials told us that they do not have performance measures that directly link to their standards nor did they provide a reason why they do not link. As a result, CBP officials are unable to determine the extent to which the agency is meeting customer service needs based on their standards. Instead, agency officials told us they are able to use customer comment data to infer how well the agency performs regarding the pledge. According to CBP officials, in fiscal year 2013 agency-wide CBP received 2,285 comments specifically related to the “Pledge to Travelers” and CBP officer conduct. To date for fiscal year 2014, CBP received 1,339 customer comments. With this information, CBP officials said they are able to identify areas where improvements or additional officer training is needed such as providing clarification about nonimmigrant visas for temporary stays. 
However, without clearly stated performance goals or measures directly linked to those goals, CBP is unable to determine the extent to which the standards are being met agency-wide, identify areas to improve service, or strategies to close performance gaps. CBP was the only agency in our review that makes its standards easily available to the public. CBP posts its standards on the Internet on its webpage, as well as at points of service in entry ports, field offices, and headquarters, according to CBP officials (see figure 1). Forest Service’s mission is to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. Forest Service’s recreational facilities provide customers with passes and permits to experience the forests through such activities as hiking, camping, hunting, fishing, cross-country skiing, and snowmobiling. Forest Service’s recreational facilities standards do not include any key elements for improving customer service (see table 5). Forest Service’s standards are a part of the agency’s “National Quality Standards,” an internal document that contains employee performance requirements that employees need to meet as part of their performance ratings—for example, “garbage does not exceed the capacity of garbage containers” (see text box). The standards did not include additional descriptions such as frequency—for example, that the garbage should be emptied on a daily basis. Forest Service officials also told us that there were no performance measures directly linked to their employee operations and maintenance performance standards and that the results are self-reported by local supervisors on an annual basis. As a result, Forest Service is unable to determine the extent to which the standards are being met agency-wide, to identify areas to improve service, or to develop strategies to close performance gaps. Forest Service Customer Service Standards for Recreational Facilities Excerpt of Forest Service standards (see appendix II for the complete list of standards) Visitors are not exposed to human waste. Water, wastewater, and sewage treatment systems meet federal, state and local water quality regulations. Garbage does not exceed the capacity of garbage containers. Individual units and common areas are free of litter including domestic animal waste. Facilities are free of graffiti. Restrooms and garbage locations are free of objectionable odor. Constructed features are clean. Forest Service officials stated that the agency is finalizing changes to the standards (which were developed in 2005) based on employee performance and other current practices within the agency and expects to finalize those changes by the end of fiscal year 2014. However, according to Forest Service officials, these changes will not include performance goals or measures. In 2010, we found that Forest Service did not make its standards available to customers as officials felt the standards would not be helpful to the visitors who evaluate such things as cleanliness of rest rooms against their own standards and not those set forth by Forest Service. Forest Service officials stated that there has been no change since 2010 and the standards continue to be embedded in internal employee performance reviews and are not publicly available at all. With clearly defined performance goals and measures, Forest Service could more easily communicate its definition of cleanliness, as well as other services, to its customers. 
In addition, according to executive orders and guidance, standards were specifically intended to inform the public, irrespective of Forest Service’s position that its standards are not publicly available because they may not be helpful to visitors. Federal Student Aid’s (FSA) core mission is to ensure that all eligible individuals benefit from federal financial assistance—grants, work-study, and loans—for education beyond high school. FSA provides student loans under the Direct Loan Program, where the Department of Education is the lender. FSA’s customers include student and parent applicants, borrowers, and colleges and universities that disburse Direct Loans and other federal aid authorized under Title IV of the Higher Education Act of 1965 directly to eligible student borrowers. FSA provided the standards for the Common Origination and Disbursement (COD) system through which FSA disburses Direct Loan funds to participating Title IV school customers for our analysis. These standards included two key elements—standards that include performance targets or goals and performance measures that are directly linked to the goals. However, the standards are not made publicly available (see table 6). According to FSA officials, their standards for disbursing direct loans are requirements embedded within performance-based contracts that the service contractor must meet. FSA officials stated that each phase of the loan life cycle was governed by separate performance-based contracts and standards. FSA’s COD standards, which govern one of these life-cycle phases, include performance goals to be achieved for the service contractor. However, the language used in the contracts is specifically designed to lay out the technical requirements for the service provider and as a result, without technical knowledge it may be more difficult for individuals outside of the service industry, such as parents and students, to understand (see text box). Federal Student Aid Customer Service Standards for Student Loans under the Direct Loan Program Excerpt of Direct Loan Origination and Disbursement through the Common Origination and Disbursement System (COD) Standards (see appendix III for the complete list of standards) Received unprocessed batches from schools will be reviewed each business day. The Contractor will review and resolve unprocessed batches within 3 business days from identification. Availability of the COD Web site including all of the individual application and infrastructure components that result in availability of the application to the business excluding scheduled downtime, required processing outages and FSA provided technology service (e.g. telecommunications, networking). FSA has performance measures directly linked to the standards, including target time frames for achieving the standards that range from daily to monthly. FSA officials told us that the COD contract service provider collects and reviews the performance data on a daily basis and provides performance reports to FSA management monthly in its Customer Experience Dashboard. The dashboard provides the current performance data against historical data for the same period. For example, it provides the number of Free Application for Federal Student Aid applications received in 2013-2014 compared to the same time in 2011-2012. According to FSA officials, the performance data are used to determine the service provider’s level of performance. If the standards are not being achieved, remedies may be taken as outlined in the contract. 
According to FSA officials, the standards are viewed as the minimum level of performance expected. The officials said the performance metrics have not been adjusted in approximately 4 years, but the agency is discussing metric changes for contract renewal. The COD standards overall were developed through consultation between the agency and education subject matter experts and were last modified in 2006. In 2010, we found that FSA did not make its standards available to customers for the Direct Loan Program because they were not intended to inform the public. According to FSA officials, there has been no change since 2010 and its standards continue to be embedded within various service contracts and are not available to FSA's customers. However, according to executive orders and guidance, standards were specifically intended to inform the public, irrespective of FSA's position that its standards are for the service provider and are not intended to inform the public. As a result, FSA's customers may not be fully aware of the services that are available. The National Park Service's (NPS) mission is to preserve "unimpaired the natural and cultural resources and values of the national park system for the enjoyment, education, and inspiration of this and future generations." According to NPS, its interpretive and educational services advance this mission by providing memorable educational and recreational experiences that help the public understand the meaning and relevance of park resources and foster development of a sense of stewardship by forging a connection between park resources, visitors, the community, and the national park system. NPS customers are the visitors using the programs, services, and facilities that NPS offers. NPS provided two sets of standards that we assessed—the "Visitors' Bill of Rights" and visitor satisfaction survey descriptions. Neither of the standards included any key elements of effective customer service standards (see table 7). The first set of customer service standards, the "Visitors' Bill of Rights," is included in an internal training module which NPS officials stated is a standard to which they train employees. It is a set of qualitative standards and includes descriptions of what NPS park visitors have a right to expect during their stay, such as "have their privacy and independence respected" (see text box). NPS officials stated that this standard was developed in 1996 and has not been updated since then. According to NPS officials, there are no performance measures linked to the "Visitors' Bill of Rights" as the standards were intended for internal training purposes. In addition, these standards are not publicly available. National Park Service's Customer Service Standards for Visitor and Interpretive Services First set of standards—Visitors' Bill of Rights: Visitors have the right to: have their privacy and independence respected; retain and express their own values; be treated with courtesy and consideration; and receive accurate and balanced information. Second set of standards—visitor satisfaction survey scorecard measure definitions: Visitor understanding level is at least 83 percent. Visitor satisfaction level overall is at least 90 percent. Visitor satisfaction with visitor services is at least 88 percent. Visitor satisfaction with park facilities is at least 83 percent. Ratio of number of interpretive contacts per visitor is at least 0.8. NPS also provided as standards its visitor satisfaction survey scorecard descriptions.
The survey measures each park unit's performance related to visitor satisfaction, visitor understanding, and appreciation. The survey is based on a random sample of visitors in 330 units. According to the fiscal year 2013 results of the visitor survey, approximately 97 percent of park visitors were satisfied overall with appropriate facilities, services, and recreational opportunities. According to NPS officials, this set of standards includes benchmark scores—standard and exceptional ratings—against which the individual parks are rated. While it is important for agencies to solicit a customer's level of satisfaction with services provided, as NPS does, such feedback should be collected in addition to having a set of predetermined customer service standards that include performance targets or goals that can be measured. Further, the visitor surveys are conducted after customers have received NPS services; one of the purposes of standards is to inform customers of what they can expect prior to receiving the services. Without clearly stated performance goals or measures directly linked to those goals, NPS is unable to determine the extent to which the standards are being met agency-wide or to develop strategies to close performance gaps. Finally, these standards are not made easily publicly available. According to executive orders and guidance, standards were specifically intended to inform the public. As such, standards need to be identified as standards and made easily publicly available. However, we found the results of the visitor survey on the NPS website under the NPS Social Science Branch publications, and they were not identified as standards. As a result, customers may not easily be able to find the results of the surveys, much less make the connection that the survey and its results reflect NPS's standards for service. The Veterans Benefits Administration (VBA) disability compensation program provides monetary support to over 3.7 million veterans with disabling conditions that were incurred or aggravated during military service. The program also provides monthly payments to about 370,000 beneficiaries, including surviving spouses, dependent children, and dependent parents, in recognition of the economic loss caused by a veteran's death during military service or, after discharge from military service, as a result of a service-connected disability. The Veterans' Group Life Insurance (VGLI) program, also within VBA, allows veterans to continue their life insurance coverage after separation from the military. VGLI serviced approximately 426,000 customers during 2013. VBA's disability compensation and VGLI's customer service standards each include two key elements, but neither set is made publicly available (see table 8). VBA's disability compensation standards include two key elements of effective customer service standards—goals for performance and performance measures; however, the standards are not made easily publicly available (see table 8). The standards are quantitative, such as "increase compensation claims processing timeliness to 125 days and quality to 98 percent accuracy for medical issues" (see text box). According to VBA officials, standards are re-evaluated on an annual basis and adjusted as appropriate. For example, disability compensation standards were updated in fiscal year 2014 with a new standard added to increase the percentage of claims filed online for the disability compensation program.
Veterans Benefits Administration Disability Compensation's Customer Service Standards Increase compensation claims processing timeliness to 125 days and quality to 98 percent accuracy for medical issues. Increase the percentage of claims filed online. Increase the annual number of disability compensation claims received virtually/electronically from a baseline of 2 percent in 2013, to 12 percent in 2014, and 20 percent in 2015. Increase the National Call Center Satisfaction Index Score. Increase the number of registered eBenefits users to 3.8 million in 2014 and 5 million in 2015. VBA disability compensation has performance measures that are directly linked to customer service standards and, according to VBA officials, the results have served as performance indicators for its disability compensation service contractor since 2010. The service contractor gathers the performance data and reports out to VBA management. For example, according to VBA officials, data collected for VBA call centers are reported by the service contractor to VBA management on a daily basis, with supplemental analysis reports provided monthly. These reports include overall satisfaction results, service attributes, and diagnostic data. Other performance data, such as data collected on the online reporting site, can be accessed by VBA management on a daily basis for all satisfaction scores, according to VBA officials. To identify any areas of opportunity, VBA officials said they hold monthly monitoring sessions to listen to customer satisfaction surveys, which are administered over the phone. According to VBA officials, reported data are analyzed by both VBA and the service contractor to identify opportunities to make process improvements that may increase satisfaction results to meet the agency's goals. When the performance results do not meet the standards, a plan is executed to improve performance, according to VBA officials. Performance data are also used to quantify the various aspects of the delivery of benefits and services, to identify best practices that may exist within VBA lines of business and regional offices, and to recognize employees who provide outstanding customer service, according to VBA officials. Disability compensation standards are available on the Department of Veterans Affairs' (VA) webpage through publicly available reports such as the Performance and Accountability Report (PAR) and are also identified in VA's Strategic Plan. These documents serve a larger purpose and, while not excluding customers, are targeted to a much broader audience. However, these documents may not be readily understood by, much less known to, most customers. As a result, although VBA disability compensation information is online, it is not easily available or accessible to its customers. Similarly, VGLI's standards include performance goals allowing performance to be measured (see table 8 above). VGLI's standards are quantitative in nature, such as "98 percent of e-mails responded to within 24 hours of receipt." VGLI standards are reviewed on an annual basis and adjusted as appropriate. For example, the most recent update of the standards was in 2013, when there was an adjustment of the standard for first call resolution from 78 percent to 82 percent. According to VGLI officials, this adjustment reflected process improvements and technological enhancements (see text box). Veterans Benefits Administration Veterans' Group Life Insurance Customer Service Standards 80% of calls answered within 20 seconds.
97% of correspondence handled within 5 business days of receipt. 98% of e-mails responded to within 24 hours of receipt. 90% overall satisfaction with Office of Servicemembers' Group Life Insurance. 82% first call resolution rate. According to a VGLI official, VGLI's service contractor reviews the performance standards annually and makes adjustments as appropriate based on industry standards, process improvements, and technological enhancements. For example, the service contractor makes adjustments to the numbers of employees working on specific tasks based on fluctuations in workflow throughout the year. In addition, the service contractor has trained its employees to be functional in several areas, and as a result employees can be temporarily reassigned based on work needs. For example, the service contractor stated that employees who primarily serve an administrative function are also trained to work in the call center if the center is experiencing higher than normal call volume. In addition, performance measures are directly linked to the standards and are also benchmarked against the insurance industry by the service contractor. Specifically, the service contractor measures performance and collects data, and then reports to VGLI officials on a monthly basis using a data metrics dashboard, as well as through quarterly briefings. However, VGLI's standards are not made publicly available, as the standards are used internally for its service contract, according to VGLI officials. As we previously stated, based on the executive orders and guidance, standards were specifically intended to inform the public and should be made publicly available, regardless of VGLI's use of the standards. Without such information, VGLI's customers may not be fully aware of the services that are available. All of the agencies in our review provide customers with opportunities to submit feedback, including comments and complaints, in a variety of ways, such as satisfaction surveys, comment cards submitted in person or online, e-mails, and call centers. However, not all of the agencies in our review had a formal or systematic mechanism for reviewing customer feedback (see table 9). Specifically, we found that CBP and VBA's disability compensation have a formal mechanism in place to review customer feedback. Forest Service and NPS do not have criteria or guidance for when to elevate customer comments from the local level up to the agency level. FSA and VBA's VGLI rely on service providers to review and elevate the customers' comments at the service provider's discretion. Executive Order 13571 stated that agencies should establish "mechanisms to solicit customer feedback on Government services" and that agencies use "such feedback regularly to make service improvements." OMB guidance to agencies for implementing Executive Order 13571 further stated that agencies "[c]ollect ongoing, timely, actionable customer feedback to identify early warning signals of customer service issues." Although the executive order and guidance did not provide specific details as to how the agency feedback mechanisms should be developed, research on best practices emphasizes the need for a single, centralized management framework for receiving customer feedback so that all information about customers can be linked together to build a more complete knowledge of the customer.
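As one illustration of what such a centralized framework might look like in practice, the sketch below stores feedback in a single structure and applies explicit criteria for elevating items to agency-level review. The record fields, categories, severity scale, and thresholds are hypothetical assumptions for illustration only; they do not reflect any agency's actual system or criteria.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Feedback:
    location: str   # e.g., a port of entry, park unit, or call center
    category: str   # e.g., "wait time", "facility condition", "staff conduct"
    severity: int   # 1 (routine comment) through 5 (safety or legal concern)

def items_to_elevate(records, severity_floor=4, recurrence_floor=3):
    """Return feedback warranting agency-level review: any item at or above the
    severity floor, plus every item in a category reported from at least
    recurrence_floor distinct locations (a possible systemic issue)."""
    locations_by_category = defaultdict(set)
    for r in records:
        locations_by_category[r.category].add(r.location)
    systemic = {category for category, locations in locations_by_category.items()
                if len(locations) >= recurrence_floor}
    return [r for r in records
            if r.severity >= severity_floor or r.category in systemic]

# Hypothetical usage: the same category reported from three units is flagged as
# potentially systemic even though no single comment is severe on its own.
sample = [
    Feedback("Unit A", "facility condition", 2),
    Feedback("Unit B", "facility condition", 2),
    Feedback("Unit C", "facility condition", 3),
    Feedback("Unit A", "staff conduct", 5),
]
print(len(items_to_elevate(sample)))  # all four records are flagged for review

Guidance of this kind, however simple, gives local offices and service providers a common rule for when feedback must be passed up rather than leaving elevation entirely to their discretion.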
In addition, the Standards for Internal Control in the Federal Government calls for agencies to develop control activities, such as policies, procedures, techniques, and mechanisms that enforce management's directives, which helps to reinforce the need for agencies to develop guidance or policies for reviewing and elevating customer feedback. Moving forward, a feedback mechanism that better aligns with OMB guidance, best practices, and Standards for Internal Control in the Federal Government could include guidance for reviewing customer feedback from service providers and from disparate locations across the country and for taking action at the agency level to resolve potential problems. In addition, such a feedback mechanism would enable agencies to determine if customer concerns are localized, specific to a given function, agency-wide, or systemic. Customs and Border Protection. CBP has a formal mechanism in place to review customer feedback and is the only agency in our review that has a customer service web page that allows customers to file comments and complaints online and that includes information such as how CBP handles traveler complaints (see figure 2). According to CBP officials, the agency received approximately 8,700 complaints and 723 compliments in fiscal year 2013, a year in which CBP officers inspected over 362 million passengers, pedestrians, and crew. According to the same officials, for the past 3 years CBP's INFO Center, where customers can call or email with questions or complaints, has tracked the history, timing, and resolution of a comment, and the specific port from which the comment originated. The center receives between 800 and 1,200 calls a day as well as 150 emails a day. According to CBP officials, 95 percent of complaints are closed within 95 days. CBP officials said they routinely use customer comments to improve customer service. For example, CBP officials reported using customer feedback to identify a misconception among CBP officers at a specific port concerning the time limitations of a type of nonimmigrant visa for temporary stay. After identifying the problem through customer feedback, CBP officials reported that additional officer training was provided and the complaints about visa admittance at that port declined. Forest Service. According to Forest Service officials, most customer feedback was submitted, reviewed, and handled at the local unit level (forests). For example, in comments visitors reported dangerous steps along the Double Arch Trail in Kentucky's Daniel Boone National Forest, and in 2013 Forest Service repaired those steps. According to Forest Service officials, local customer feedback is elevated to the attention of headquarters staff only at the discretion of local management; the officials stated that local management can use monthly meetings with regional management to discuss any issues or problems. However, Forest Service officials did not provide us with criteria or documentation describing the situations in which local management would elevate such feedback to headquarters for additional review or input. In addition, Forest Service officials stated that there is no mechanism for collecting comments agency-wide. As a result of this decentralized approach, Forest Service may be unable to determine the extent to which potential problems may exist agency-wide or to identify needed improvements. Office of Federal Student Aid.
According to FSA officials, customers primarily provide direct feedback online or on the phone through a contracted service provider, who is responsible for resolving any issues that may arise. Issues may be elevated to FSA at the contract service provider's discretion but, according to FSA officials, FSA is not commonly involved. In addition, FSA officials did not provide any criteria or process by which the service provider would elevate customer feedback. FSA officials told us there are different service providers for the different phases of a loan. As a result, although each phase may be distinct, FSA may be unable to identify similar problems or concerns that occur over time. Providing additional guidance, such as specific criteria for which customer comments or feedback should be elevated to FSA's attention by the service provider, would better position FSA not only to oversee its service provider but also to improve customer service. FSA did share an example of how it launched the Financial Aid Counseling Tool (FACT) in response to schools' requests for an enhanced financial literacy tool that would be available to students year-round. According to FSA officials, FACT assists borrowers in making informed decisions about their loans and managing their debt by providing access to their real time loan balances, budgeting worksheets, and features to project income planning and to estimate monthly loan payments for various repayment options. However, FSA was not able to provide further clarification or documentation as to why this specific case was raised to its attention over others and why subsequent action was taken. National Park Service. According to NPS officials, most customer feedback is submitted at the level of each park via visitor comment cards. Visitors may also write to regional and national NPS offices with their feedback, as well as submit formal complaints regarding accessibility and discrimination. There is a general "contact us" feedback option online. NPS officials told us that feedback submitted at local parks is reviewed at that level and only elevated at the discretion of local management. NPS headquarters officials were unable to provide examples of using customer feedback to improve service and instead reached out to local parks for information. For example, Golden Gate Park in California implemented a park-wide restroom improvement program, improved signage, and replaced a missing park map and brochures in certain areas based on customer feedback. According to NPS officials, NPS does not have agency-wide policies or processes to guide the review of feedback or to inform management about the nature and extent of the feedback to help improve operations and customer service. As a result, NPS may not be able to identify systemic or consistent problems across all of its units unless such information is elevated. Veterans Benefits Administration Disability Compensation. VBA's disability compensation customer feedback is largely managed by its service provider, according to VBA officials. The officials said most customer comments are submitted via call centers, where there is a standard procedure for escalating phone calls. Concerns are forwarded from the service provider to VBA based on agency guidance, and appropriate follow-up action is taken by the VA regional office. This process is managed and tracked through an internal system, according to VBA officials.
As a result, although VBA's disability compensation customer feedback is largely managed by service providers, VBA officials have a formal mechanism in place to oversee not only the feedback that has been escalated, but also the service provided to customers. In October 2010, VBA disability compensation launched a customer satisfaction research program for its national call centers and subsequently identified 97 service enhancements, 55 of which have been implemented, according to VBA officials. For example, VBA disability compensation implemented Virtual Hold Call Back technology, which allows callers to leave their name and phone number to receive an automatic return call. Since the implementation date of September 2011, the system has returned over 10 million calls, according to VBA officials. Veterans Benefits Administration–Veterans' Group Life Insurance. According to a VGLI official, VGLI's service provider reviews and handles customer feedback that may be submitted through VGLI's portal, an online chat mechanism with customer service representatives. However, we were unable to find an online contact or complaint option on VGLI's website. According to VGLI officials, customers also have the option to submit written feedback to VBA. VGLI holds quarterly meetings with its service provider to discuss service improvements that may be needed based on the feedback; however, VGLI officials said there were no criteria or process by which specific types of feedback were brought to their attention. As a result, VGLI officials may not be aware of potential system-wide issues affecting customers that may be reflected in their feedback. According to VGLI officials, feedback was used to identify veterans' concerns about coverage limits and their desire for more life insurance coverage. As a result of the feedback, VGLI initiated "VGLI Buy Up," which allows veterans to increase their life insurance coverage by $25,000 once every 5 years. In addition, proof of medical eligibility is not required. OMB has taken several steps to facilitate the improvement of agencies' customer service initiatives. For example, in 2010 we recommended, and OMB later took steps to implement, that the Director of OMB should (1) direct agencies to consider options to make their customer service standards and results more readily available to customers and (2) collaborate with the President's Management Advisory Board and agencies to provide citizens with the information necessary to hold government accountable for customer service, among other things. To implement these recommendations, OMB issued its customer service plan memorandum, as developed by OMB's Deputy Director for Management, in June 2011, following the issuance of Executive Order 13571, which was intended to improve customer service and required agencies to develop and publish customer service plans. The memorandum stated that OMB would establish and coordinate a Customer Service Task Force to "facilitate the exchange of best practices and the development of agency customer service plans and signature initiatives…that will meet regularly until agencies published their plans." Task force actions. According to the task force schedule and agendas we obtained from OMB, the task force met with agency officials on a monthly basis from June to September 2011 before agency plans were posted on Performance.gov in October 2011. The task force was made up of senior agency officials responsible for the development of their agencies' plans.
According to an OMB official, senior agency officials identified best practices based on customer service experiences from within their own agencies. We wanted to review the outcome of these meetings, including the self-identified best practices; however, according to OMB, no meeting minutes were taken and the staff involved in the task force are no longer with OMB. Department actions. Each of the departments included in our review published its plan as scheduled. However, of the five agencies in our review, only CBP officials told us that they used the plan as a tool to manage and oversee aspects of customer service. According to Executive Order 13571, the plans were to address how the departments would "provide services in a manner that seeks to streamline service delivery and improve the experience of its customers," and the OMB interpretive guidance stated that the plans should "identify implementation steps for the customer service activities outlined in EO 13571." However, two of the agencies told us that unless required to do so by OMB, they do not intend to update their customer service plans. Further, Forest Service officials were not aware that a department-wide customer service plan had been created. We have previously found that a well-developed and documented project plan encourages agency managers and stakeholders to systematically consider what is to be done, when and how it will be done, what skills will be needed, and how to gauge progress and results. However, we found that the plans were, in effect, static documents that did not reflect any updates to milestones or actions taken. We discussed the usefulness of the customer service plans with OMB officials, and they agreed that the departments could have implemented the plans more effectively. OMB stated that, moving forward, the CAP goal implementation plan, which is discussed later, may help provide additional focus on customer service government-wide. Performance Improvement Council role. We also inquired about the role of the Performance Improvement Council (PIC) in helping agencies with their customer service efforts. The PIC, however, did not have an active role in assisting agencies with the development of their plans. The PIC, chaired by OMB's Deputy Director for Management and composed of performance improvement officers from various federal agencies, is charged with, among other responsibilities, facilitating the exchange of successful performance improvement practices among agencies, working to resolve government-wide or crosscutting performance issues, and assisting OMB in implementing certain GPRAMA requirements. Further, the PIC's role includes considering the performance improvement experiences of customers of government services. Despite the PIC's designated role in improving the customer experience, OMB officials confirmed that the council was not actively involved in the task force. However, OMB stated that, moving forward, it plans on involving the PIC in the CAP goal implementation plan for customer service. OMB additional actions. Although OMB has taken action to move customer service forward government-wide, such as developing guidance and facilitating the task force, other steps were not completed because of budget limitations, according to OMB officials. These uncompleted actions pertain to OMB's prior effort to provide oversight and accountability of agencies' customer service metrics.
In 2010, we found that OMB was developing a pilot dashboard that contained agency standards and some related measures, with links to agency websites where customers could track the status of their individual transactions, where available. We found that OMB had asked agencies participating in the pilot to identify metrics that were drivers of customer satisfaction, such as wait time, processing time, and first call resolution. OMB expected the pilot dashboard to launch publicly in late fall 2010. An OMB official told us that although the agency had begun work on the performance dashboard, there was a reprioritization of resources within OMB and the pilot effort was discontinued. The OMB official did not know if the pilot efforts would begin again. Such a performance dashboard would have been the first of its kind government-wide and may have enabled OMB to provide greater oversight of agency customer service performance against identified standards. Nevertheless, OMB envisions that the attention given to customer service by making it a CAP goal will move customer service forward government-wide. In 2010, we found that in certain instances the Paperwork Reduction Act of 1995 (PRA) clearance process made obtaining customer input difficult because of lengthy delays in obtaining approval for surveys to collect that input (see text box). For example, we previously found that NPS stated that lengthy delays in obtaining approval for information collections, such as visitor surveys, under the PRA sometimes caused research to be postponed or even abandoned. We also found that Forest Service officials considered the time needed to obtain clearance for surveys to be a major barrier to gathering input from customers on their level of satisfaction. In early 2010, OMB also issued three memorandums—relevant to customer service goals—containing clarifying guidance to improve the implementation of the PRA and to help federal agencies understand PRA clearances and when and how they can be used. On June 15, 2011, OMB's Office of Information and Regulatory Affairs (OIRA) issued a memorandum outlining the new Fast Track process for survey approval, which would allow agencies to obtain timely feedback on service delivery using voluntary collections such as online surveys, comment cards, and complaint forms. According to OIRA officials, the biggest challenge in implementing the Fast Track process has been disseminating information to agencies on when to use Fast Track. An OMB official told us that the agency is aware of the lack of communication concerning the Fast Track process and in the upcoming year plans to address this information gap. Office of Management and Budget, Information Collection under the Paperwork Reduction Act, Memorandum (Apr. 7, 2010); Office of Management and Budget, Social Media, Web-Based Interactive Technologies, and the Paperwork Reduction Act, Memorandum (Apr. 7, 2010); and Office of Management and Budget, Paperwork Reduction Act—Generic Clearances, Memorandum (May 28, 2010). Paperwork Reduction Act Before requiring or requesting information from the public, such as through customer satisfaction surveys, federal agencies are required by the Paperwork Reduction Act (PRA) to seek public comment as well as approval from the Office of Management and Budget (OMB) on the proposed collection of information.
The PRA requires federal agencies to minimize the burden on the public resulting from their information collections, and to maximize the practical utility of the information collected. To comply with the PRA process, agencies must develop and review proposed collections to ensure that they meet the goals of the act. Once a collection is approved internally, agencies generally must publish a 60-day notice in the Federal Register soliciting public comment on the agency's proposed collection, consider the public comments, submit the proposed collection to OMB, and publish a second Federal Register notice inviting public comment to the agency and OMB. OMB may act on the agency's request only after the 30-day comment period has closed. Under the PRA, OMB determines whether a proposed collection is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility. The PRA gives OMB 60 days to approve or disapprove a proposed collection; however, OMB can also instruct the agency to make a substantive or material change to the proposed collection. OIRA officials provided us with the following data on the Fast Track process. As of May 2014, 85 agencies have been approved to use the Fast Track process for survey approval, including all of the agencies in our review. OIRA officials said they have approved 580 data collection requests using the Fast Track process and returned 67 requests to agencies because of improper submission; an additional 24 requests were withdrawn by the agencies themselves. Use of the new Fast Track process varied among the agencies we reviewed. For example, Forest Service officials told us that their local forest units had experienced a slight improvement in the timeliness of their survey approvals because they used the Fast Track process. However, the Forest Service officials also told us that surveys primarily administered by headquarters serve a different purpose and rely heavily on statistical analysis for research. Such surveys, according to Forest Service officials, would not be eligible for the Fast Track process because of their statistical rigor. According to OIRA guidance, Fast Track is not intended to be used for surveys that require statistical rigor that will be used for making significant policy or resource allocation decisions, or for collections whose results are intended to be published. FSA officials said they have noticed neither an improvement nor a degradation in the approval process under Fast Track. FSA officials told us that OMB has routinely approved surveys that fall under the Fast Track process within the designated time frames, with few exceptions. VBA disability compensation officials also told us that their surveys were not eligible for the Fast Track process because the surveys did not meet all of the Fast Track requirements. For NPS, according to officials, the time involved in the procedural review through Fast Track is still lengthy and is perceived as prohibitive for parks and programs seeking to conduct valuable and usable social science research surveys. Finally, CBP officials told us that they have not used the Fast Track process and have no plans to do so in the near future. Officials at VBA's VGLI did not have an opinion on the Fast Track process because they have not used it.
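For rough perspective on why agencies perceive the standard clearance path as lengthy compared with Fast Track, the sketch below adds up the notice periods described in the text box above under simplifying assumptions; the allowance for internal development is hypothetical, and actual timelines vary by agency and collection.

# Back-of-the-envelope bounds for a standard (non-Fast Track) PRA clearance.
internal_development_days = 30   # assumption: agency drafting and internal review
first_notice_days = 60           # 60-day Federal Register comment period
omb_review_min_days = 30         # OMB may act only after the 30-day notice closes
omb_review_max_days = 60         # the PRA gives OMB up to 60 days to decide

low = internal_development_days + first_notice_days + omb_review_min_days
high = internal_development_days + first_notice_days + omb_review_max_days
print(f"Roughly {low} to {high} days, or about {low // 30} to {high // 30} months")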
By focusing on developing standards for high-impact services, OMB recognizes that government leaders have a responsibility to understand customer expectations and service needs, and to continually evaluate and improve their effectiveness in meeting those needs. According to OMB officials, two goal leaders and a goal team are responsible for the CAP goal. In June 2014, the CAP goal team issued an implementation plan to increase customer satisfaction and promote positive experiences, issued an action plan to achieve that goal, and assigned a team to oversee and manage the project. The implementation plan, issued on the Performance.gov website, identified the development and implementation of standards, practices, and tools as one of the CAP goal team's four strategies or sub-goals to further improve customer service. According to the implementation plan, the problem of fragmentation within and across agencies has made the establishment of customer service initiatives difficult. The CAP goal team's strategy to address the problem of fragmentation will be to establish an infrastructure to improve coordination and to develop and sustain change over time by creating a community of practice across agencies and clarifying who is responsible for customer service. According to the implementation plan, the community of practice will share best practices and develop guidance that the agencies will use to develop customer service standards. While it is too early to assess the effect of the new CAP goal, this new effort does offer an opportunity for OMB to begin to elevate the importance of customer service government-wide and to engage agencies on how to better meet customer needs. All five agencies established customer service standards. However, those standards did not always include performance targets or goals, did not always have performance measures, and were not always easily publicly available. Specifically, three of the five agencies in our review did not have all of the elements of a customer-centered performance management approach for delivering federal service. Having customer service standards that include performance targets or goals allows customers to understand what to expect from the services they are seeking. Without such standards, customers may be left unsure of what to expect when using a government-provided service. We also found that not all agencies measured performance to determine whether customer service standards were being met. Measuring performance allows agencies to track the progress they are making toward meeting those standards and gives managers crucial information on which to base decisions, as well as to update those standards when necessary. Thus, if agencies do not measure performance to track progress toward meeting customer service standards, they risk failing to meet the needs of their customers. In addition, communicating customer service standards to the public in a way that is useful and readily available to customers is important in enabling the public to hold government accountable and to inform customer decision making. Four of the five agencies we reviewed did not make customer service standards easily available to customers. For example, one agency provided its standards through documents that serve larger purposes, such as departmental performance and accountability reports and agency strategic plans. While not excluding customers, those documents are targeted to a much broader audience.
Notably, all five agencies in our review use customer feedback to improve customer service. Agencies reported they used this feedback in a number of instances to make improvements to training and the number of services offered among other things. However, only CBP and VBA’s disability compensation had a formal or systematic process for reviewing customer feedback. Having such a feedback mechanism could help agencies link information about their customers and ultimately assist agencies with customer service improvements. In March 2014, OMB made customer service a CAP goal. In June 2014 OMB released an implementation plan for customer service that included a goal statement to increase customer satisfaction and to promote positive experiences, released an action plan to achieve that goal, and assigned a team to oversee and manage the project. While it is too early to assess the effect of the new CAP goal, this new effort does offer an opportunity for OMB to begin to elevate the importance of customer service government-wide and to engage agencies on how to better meet customer needs. Recognizing that moving toward a more customer-oriented culture within federal agencies is likely to be a continuous effort, we recommend that the: Secretary of Agriculture direct the Under Secretary for Natural Resources and Environment to take the following four actions to improve Forest Service’s customer service standards and feedback review: ensure standards include performance targets or goals; ensure standards include performance measures; ensure standards are easily publicly available; and develop a feedback mechanism to collect comments agency-wide, which should include guidance or criteria to elevate customer feedback from local and regional offices to identify the need for and to make service improvements; Secretary of Education direct Federal Student Aid’s Chief Operating Officer to take the following two actions to improve Federal Student Aid’s customer service standards and feedback review: ensure standards are easily publicly available and develop a feedback mechanism that includes guidance or criteria for service providers to elevate customer feedback to identify the need for and to make service improvements; Commissioner of U.S. 
Customs and Border Protection take the following two actions to improve CBP’s customer service standards: ensure standards include performance targets or goals and ensure standards include performance measures; Secretary of the Interior direct the Assistant Secretary of Fish, Wildlife and Parks to take the following four actions to improve the National Park Service’s customer service standards and feedback review: ensure standards include performance targets or goals; ensure standards include performance measures; ensure standards are easily publicly available; and develop a feedback mechanism that includes guidance or criteria to review and elevate customer feedback from local and regional offices to identify the need for and to make service improvements; Secretary of Veterans Affairs direct the Veterans Benefits Administration to improve disability compensation customer service standards by making the standards easily publicly available; and Secretary of Veterans Affairs direct the Veterans Benefits Administration to take the following two actions to improve Veterans’ Group Life Insurance’s customer service standards and feedback review: ensure standards are easily publicly available and develop a feedback mechanism that includes guidance or criteria for service providers to elevate customer feedback and identify the need for and to make service improvements. We provided a draft of this report to the OMB and the Departments of Agriculture, Education, Homeland Security, Interior and Veterans Affairs. The Chief of the Forest Service (Department of Agriculture), the Chief Operating Officer at Federal Student Aid (Department of Education), the Department of Homeland Security, and the Department of Veterans Affairs provided written comments on a draft of the report, which are reprinted in appendixes IV, V, VI, and VII respectively. In their written responses, the Departments of Agriculture, Education, Homeland Security and Veterans Affairs agreed with our recommendations. Department of Interior officials also stated that they agreed with our recommendation in an e-mail. Finally, OMB, the Departments of Homeland Security, Education and Veterans Affairs also suggested technical changes to the report, which we incorporated where appropriate. We are sending copies of this report to the Director of OMB and the heads of the five agencies that were included in this review as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix VIII. The objectives of this study were to address the extent to which (1) selected agencies and their services are using customer service standards and measuring performance results against these standards, and how selected agencies are communicating standards and using customer feedback to improve customer service; and (2) the Office of Management and Budget (OMB) and the Performance Improvement Council are facilitating federal agencies’ use of tools and practices to improve customer service. 
To address our two objectives, we selected agencies and their services for our review based on prior work in which we surveyed 12 federal agencies (which are among those with the most widespread contact with the public) about 13 services they provided. For the prior report, five of those agencies were selected for additional follow-up interviews—based on their responses to key survey questions— in order to gain a fuller understanding of their responses. For this report, we selected the same five agencies and their services to determine the progress made by each since the issuance in 2011 of Executive Order 13571 on improving customer service (see table 10). To determine progress made, we expanded our review to include in-depth interviews with agency officials and an examination of relevant customer service documentation such as plans, performance measures, and feedback mechanisms. To address how selected agencies are using customer service standards and measuring performance against those standards, communicating standards, and using customer feedback to improve service, we reviewed our relevant prior work on customer service and the specific agencies in our sample. We requested and reviewed agencies’ customer service standards and available performance measures related to those standards. In addition, we compared agency information to relevant executive orders, presidential and OMB memorandums, and OMB guidance consistent with GPRA Modernization Act of 2010 (GPRAMA) provisions related to customer service (see table 11). The key elements we selected for assessing customer service standards include requirements found in GPRA, GPRAMA, executive orders, and OMB guidance and memorandums that focus on how customer service standards are to be used and measured including how standards should be communicated to customers. We conducted interviews with agency officials from various offices—such as performance and budget—as well as those directly involved in customer service. We did not evaluate the overall effectiveness of or level of customer service provided by any of the agencies reviewed as these issues were not within the scope of our engagement. We requested and reviewed agency information on customer service satisfaction surveys and feedback mechanisms, and departmental and agency strategic and customer service plans. In addition, we conducted Internet searches to determine the extent to which customer service information was made publicly available by the agencies. Specifically, we assessed the agencies on the contents of their standards and not against the level or quality of customer service they provide. In addition, we did not evaluate agency performance data or determine the reliability of such data as these issues were not within the scope of our engagement. To evaluate the extent to which OMB and the Performance Improvement Council (PIC) are facilitating federal agencies’ use of tools and practices to improve customer service, we reviewed OMB guidance and memorandums, customer service task force agendas, and other documents related to customer service and survey administration including the Fast Track process. We interviewed officials from OMB’s Office of Information and Regulatory Affairs and the Office of Performance and Personnel Management. We also reviewed information on customer service published on Performance.gov, a government-wide performance website. 
We conducted this performance audit in Washington, D.C., from November 2013 to October 2014 in accordance with generally accepted government auditing standards, which require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Forest Service Customer Service Standards for Recreational Facilities

1. *Visitors are not exposed to human waste.
2. *Water, wastewater, and sewage treatment systems meet federal, state, and local water quality regulations.
3. Garbage does not exceed the capacity of garbage containers.
4. Individual units and common areas are free of litter including domestic animal waste.
5. Facilities are free of graffiti.
6. Restrooms and garbage locations are free of objectionable odor.
7. Constructed features are clean.

1. *Effects from recreation use do not conflict with environmental laws (such as ESA, NHPA, Clean Water, TES, etc).
2. Recreation opportunities, site development, and site management are consistent with Recreation management system (ROS, SMS, BBM) objectives, development scale, and the Forest land management plan.
3. Landscape character and resource conditions at the recreation site are consistent with the Forest scenic integrity objectives and Forest Plan prescriptions.
4. Visitors and vehicles do not exceed site capacity.

1. *High-risk conditions do not exist in recreation sites.
2. *Utility inspections meet federal, state, and local requirements.
3. Laws, regulations and special orders are enforced.
4. Visitors are provided a sense of security.

1. *When signed as accessible, constructed features meet current accessibility guidelines.
2. Visitors feel welcome.
3. Information boards are posted in a user-friendly and professional manner.
4. Visitors are provided opportunities to communicate satisfactions (needs, expectations).
5. Visitor information facilities are staffed appropriately during seasons of use and current information is available.
6. Recreation site information is accurate and available from a variety of sources and outlets.

1. Constructed features are serviceable and in good repair throughout the designed service life.
2. Constructed features in disrepair due to lack of scheduled maintenance, or in non-compliance with safety codes (e.g. life safety, OSHA, environmental, etc.) or other regulatory requirements (ABA/ADA, etc.), or beyond the designed service life, are repaired, rehabilitated, replaced, or decommissioned.
3. New, altered, or expanded constructed features meet Forest Service design standards and are consistent with an approved site development plan, including an accessibility transition plan.

Critical National Standards are identified with an asterisk (*). If not met, the resulting conditions pose a high probability of immediate or permanent loss to people or property. If they cannot be met, due to budget or other constraints, immediate action must be taken to correct or mitigate the problem. Immediate action may include closing to public use the site, trail, area, permit, or portions of the affected site, trail or area. If conditions, facilities, or services addressed by "non-critical" standards decline to the point where the health or safety of the visitor is threatened, then mitigating actions must be taken.
Appendix III: Federal Student Aid Customer Service Standards for Student Loans under the Direct Loan Origination and Disbursement Program through the Common Origination and Disbursement System (COD)

1. Received unprocessed batches from schools will be reviewed each business day. The Contractor will review and resolve unprocessed batches within 3 business days from identification.
2. Availability of the COD Web site including all of the individual application and infrastructure components that result in availability of the application to the business excluding scheduled downtime, required processing outages and FSA provided technology service (e.g. telecommunications, networking, AIMS, and PM).
3. The number of days (30 days) required to fix commingled data incidents (Type 1 & Type 2).
4. Availability of the Total Access Ad Hoc functionality including all of the individual application and infrastructure components that result in availability of the application to the business excluding scheduled downtime and required processing outages.
5. Contractor shall provide bi-lingual (English and Spanish) phone support to schools, students, parents, and borrowers Monday - Friday from 8:00AM to 8:00PM Eastern Standard Time. All incoming calls shall be routed through the existing COD toll-free support number and routed to the appropriate Customer Service Representative with the purpose of responding to the caller issues.
6. The average amount of time a user spends on hold in the Interactive Voice Response system. The average speed of answer is measured from the time the user selects an option to speak with a customer service representative until a customer service representative answers the phone.
7. Of the total calls received, the percentage of calls in the Interactive Voice Response that are abandoned by the Customer before reaching the customer service representative.
8. The Contractor shall monitor and evaluate communications (telephone calls and emails) between Customer Service Representatives, Schools, Third Party Servicers, and Borrowers. The Contractor shall monitor and evaluate a random sampling of communications. The results of the evaluations will be collected and reported monthly. The purpose of the evaluations is to help confirm that the information provided to Schools, Third Party Servicers and Borrowers meets or exceeds the quality performance metric.
9. Deposit all funds received from schools, students, and third party servicers into established United States Treasury accounts.
10. The Contractor shall successfully restore COD Mainframe application data and core COD processing functionality (as defined in Section C.2.7.1) within the allotted time frame of each annual DR test.
11. Critical severity problems shall be resolved within 24 hours, or worked continuously until they are resolved.
12. The number of existing open problems at the time of release by priority are resolved within 30 days of the Release implementation date. This does not include problems that possess a workaround acceptable to Federal Student Aid.
13. The percentage of new problems introduced by a Service Pack implementation as measured within 30 days from the Service Pack implementation date. This will be determined by dividing the number of new problems detected after a Service Pack implementation that are associated with the Service Pack code modifications by the number of service tickets (problems and enhancements) the Service Pack attempted to resolve.
14. Publication of the COD Technical Reference prior to or on the mutually agreed upon publication date. Mutual agreement to change the publication date will reset the publication date for this SLA.
15. Publication of the COD Project Briefing based on a mutually agreed upon schedule excluding any Federal Holidays or closures.

In addition to the above contact, Lisa Pearson (Assistant Director) and Dewi Djunaidy supervised this review and the development of the resulting report. Pat Norris, Diantha Garms, Tom Beall, Jehan Chase, Deirdre Duffy, Robert Robinson, and Scott Zellner made significant contributions to this report.
Providing customer service has been a long-standing challenge for federal agencies. GPRAMA requires that agencies establish a balanced set of performance indicators to be used in measuring progress toward performance goals, including customer service. This report is part of GAO's response to its mandate to evaluate the implementation of GPRAMA. It evaluates (1) the extent to which selected agencies and their services are using customer service standards and measuring performance results against these standards, and how selected agencies are communicating standards and using customer feedback to improve customer service; and (2) the extent to which OMB and the PIC are facilitating federal agencies' use of tools and practices to improve customer service. GAO selected five agencies and their services based on prior work in which it surveyed 12 federal agencies that are among those with the most widespread contact with the public. GAO reviewed and compared agency customer service documents to federal legislation and guidance, and interviewed agency officials about customer service. GAO reviewed the customer service standards at Customs and Border Protection (CBP), Forest Service, Federal Student Aid (FSA), the National Park Service (NPS), and two services in the Veterans Benefits Administration (VBA)—disability compensation and Veterans' Group Life Insurance (VGLI). GAO found that none of the agencies' standards included all of the key elements of customer service standards (see table). GAO identified key elements of effective customer service standards by reviewing the requirements of the GPRA Modernization Act of 2010 (GPRAMA) and executive orders that focused on providing greater accountability, oversight, and transparency. Without all of the key elements present, agencies may not be able to easily communicate performance targets or goals to customers, measure their progress towards meeting those goals, and pinpoint improvement opportunities. GAO found that all five agencies provide customers with opportunities to submit feedback, including comments and complaints. CBP and VBA's disability compensation had formal mechanisms for reviewing customer feedback, but the other agencies did not. For example, Forest Service and NPS do not have guidance for when to elevate customer comments from the local level up to the agency level. As a result, these agencies may not be effectively reviewing and addressing customer concerns across the agency. The Office of Management and Budget (OMB) has taken steps to facilitate the improvement of agencies' customer service initiatives. For example, OMB issued guidance to assist agencies in their implementation of Executive Order 13571, Streamlining Service Delivery and Improving Customer Service which was issued to strengthen customer service and require agencies to develop and publish a customer service plan. OMB formed a task force to assist agencies with the development of customer service plans. Moving forward, OMB has identified customer service as a cross-agency priority (CAP) goal in 2014 in an effort to elevate the importance of customer service by the federal government and intends to have the Performance Improvement Council (PIC) play a role in the CAP goal implementation planning for customer service. GAO recommends that the five agencies update their customer service standards and that Forest Service, NPS, FSA, and VBA's VGLI implement formal feedback mechanisms to improve customer service. 
CBP, Forest Service, FSA, NPS, and VBA all agreed with GAO's recommendations.
Asparagus is a perennial crop that has a relatively long life expectancy of up to 20 years in commercial plantings. Since the crop is not usually harvested for the first 3 years, asparagus production represents a significant long-term investment for growers. In addition, since the time from planting to the first harvest takes 3 years, producers cannot quickly increase production in response to market demand. While asparagus is a native of temperate regions, its cultivation is most successful in locations where either extreme temperature or drought stops the growth of the plant, providing it with a rest period. Asparagus is produced and sold either as fresh, uncooked whole spears or processed (heat-treated canned or frozen) whole spears or cut pieces. Asparagus is a labor-intensive, high-value vegetable crop. For example, according to the U.S. Department of Agriculture (USDA), in 2000, the season-average shipping-point price for fresh asparagus was $1.14 per pound. In comparison, the prices for the second and third highest value vegetables—artichokes and fresh market snap beans—were $0.64 and $0.42 per pound, respectively. In 2000, the United States produced 227 million pounds of asparagus valued at about $217 million. The majority of the asparagus produced was green asparagus for the fresh market—66 percent was fresh, while 34 percent was processed (about 28 percent was for canning and 6 percent for freezing). Figure 1 shows the annual quantity of domestic production from 1990 to 2000. As shown in figure 1, the production of fresh asparagus in the United States trended downward until 1995, when it reached a low in part due to poor weather in California. Since then, production has been increasing. In contrast, the production of asparagus for processing has been steadily declining. The major commercial asparagus-producing states are California, Washington, and Michigan. California, the most important state for fresh production, has a harvest season from January through May. While Washington and Michigan produce some asparagus for the fresh market, the majority of their production is for the processed market. Production from Michigan occurs from May through June and from Washington from May through July. In recent years, Washington has begun shifting some production from asparagus for processing to fresh asparagus, although doing so is costly for producers. Thus, when the three states are considered, domestically produced fresh asparagus is available from January through July. At other times of the year, only canned and frozen production is available from domestic sources. In recent years, imports have accounted for a growing proportion of the U.S. fresh asparagus supply and, in 1999, represented 57 percent of fresh asparagus consumption. In 1999, over 90 percent of total U.S. asparagus imports were of fresh asparagus. The growth in imports has been made possible, in part, by the Andean Trade Preference Act (ATPA) and the North American Free Trade Agreement (NAFTA). ATPA, which was signed into law in December 1991, eliminates or reduces U.S. tariffs on eligible products from four Andean countries—Bolivia, Colombia, Ecuador, and Peru. ATPA’s primary goal is to promote broad-based economic development in these Andean countries and to develop viable economic alternatives to coca cultivation and cocaine production by offering Andean products broader access to the U.S. market. The President proclaimed preferential duty treatment for Peru in 1993.
These preferences are scheduled to end effective December 4, 2001. NAFTA, which was ratified by the Congress in 1993 and implemented in January 1994, created a free trade area among Canada, Mexico, and the United States. NAFTA provides for the gradual elimination of tariffs—from as high as 25 percent on fresh asparagus—and other trade barriers on most goods, over a 10- to 15-year period. As shown in figure 2, asparagus imports were increasing prior to ATPA’s and NAFTA’s enactment and have continued to increase since that time. For example, imports grew from 44 million pounds in 1990 to 142 million pounds in 1999—an average annual rate of increase of 14 percent, with Mexico and Peru providing most of the increase. According to information from the Peruvian Asparagus Institute, increases in asparagus production, assisted by the implementation of ATPA, have made asparagus Peru’s second largest export crop, after coffee. Peru has also developed a modern frozen asparagus industry and has rapidly increased exports of this product to the United States and to U.S. frozen export markets, such as Japan. Asparagus accounted for 14.1 percent of Peru’s agricultural exports and resulted in employment for over 20,000 Peruvians in 1999. According to the U.S. International Trade Commission’s (ITC) 1999 study, ATPA has displaced an estimated 2 to 8 percent of the total value of domestic fresh asparagus production from what it would have been without the act. U.S. consumers, however, benefited from the availability of fresh asparagus from Peru during the months when fresh asparagus is not generally available from domestic producers—August through December. In addition, changes in consumer preference contributed to a downward shift in the domestic demand for processed asparagus. Using 1999 data, ITC estimated that the total impact of ATPA’s tariff reductions has been a 2- to 8-percent displacement of the total value of U.S. fresh asparagus production by Peruvian imports as consumers substituted asparagus imported from Peru for domestically produced product. According to ITC, asparagus and cut flowers are the two industries experiencing potentially significant displacement under ATPA. ITC measured the impact of tariff reductions under ATPA by comparing estimated market conditions under full tariff treatment with actual market conditions under duty-free entry. A decrease in the price of imported asparagus caused by tariff reductions results in the substitution of imported asparagus for domestically produced asparagus, but the displacement is not one for one for various reasons, such as a retailer’s preference for marketing domestically produced product. Consumers have benefited from ATPA because fresh asparagus is now available during the months when it is generally unavailable from domestic producers. This increased availability, combined with consumers’ preference for fresh asparagus, has contributed to a downward shift in the consumption of processed asparagus. Figure 3 shows that the United States primarily produces and ships fresh asparagus during January through July. In contrast, imports from Peru occur nearly year-round, including months when U.S. fresh production is unavailable. As the figure shows, the majority of imports from Peru occur from August through December, when there is virtually no U.S. fresh production. Only canned or frozen asparagus is available from domestic sources during this time.
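The 14-percent average annual rate of increase in imports cited above follows from treating the rounded import quantities in the text as the endpoints of a compound growth calculation. The short sketch below is illustrative only; it assumes the rounded figures reported here, and the variable names are placeholders rather than anything drawn from the underlying trade data.

```python
# Illustrative check of the average annual growth rate cited above for U.S.
# asparagus imports: 44 million pounds in 1990 to 142 million pounds in 1999.
# The inputs are the rounded figures from the text, not the underlying trade data.

start_lbs = 44_000_000   # 1990 imports, in pounds
end_lbs = 142_000_000    # 1999 imports, in pounds
periods = 1999 - 1990    # 9 annual growth periods

avg_annual_growth = (end_lbs / start_lbs) ** (1 / periods) - 1
print(f"Average annual growth: {avg_annual_growth:.1%}")  # about 13.9%, i.e., roughly 14 percent
```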
Fresh asparagus from Peru is available, in part, because the elimination of tariffs reduced the price of Peruvian asparagus in the United States. While imports from Peru have increased the supply of fresh asparagus in the United States, demand has been strong, as demonstrated by the increased per capita consumption of fresh asparagus. As figure 4 shows, in the mid-1980s, the per capita consumption of asparagus shifted from processed to fresh asparagus, demonstrating consumers’ preference for the latter. This shift in consumer preference accelerated in the mid-1990s, as fresh asparagus became available on a year-round basis. The shift in the per capita consumption of asparagus is part of the general trend toward increased consumer preference for fresh vegetables. In addition, the consumption of asparagus, which is a high-value product, is particularly responsive to increases in personal income, according to econometric studies. In the latter half of the 1990s, real disposable personal income increased by an average annual rate of about 3 percent. The increase in fresh asparagus consumption has helped keep prices trending upward despite the increase in supply from imports. In contrast, shifts in preference and the declining consumption of processed asparagus have kept prices for processed asparagus relatively flat, as shown in figure 5. The decline in the consumption of processed asparagus particularly affects producers in Michigan and Washington, the two states that produce the majority of frozen and canned asparagus. For example, processed asparagus accounted for approximately 86 percent and 68 percent of the production of that crop in Michigan and Washington, respectively, in 2000. Our analysis shows that processed asparagus decreased from 42 percent of domestic production in 1990 to 34 percent in 2000. Most of the decline occurred in Washington. If ATPA is reauthorized, the producers of asparagus and, in particular, asparagus for processing, will likely face some continued displacement from imports, but consumers can expect continued benefits from the year-round availability of fresh asparagus. However, some of this displacement will likely occur even if ATPA is not reauthorized and the normal tariff is imposed: 5 percent in 2 of the 5 months when the majority of Peru’s asparagus is imported, and 21.3 percent in the other 3 months. This is because U.S. consumers prefer fresh asparagus, which domestic producers cannot supply in some months, and because of Peru’s advantages in climate and labor costs. In addition, consumers would likely face decreased availability and pay higher prices than they would otherwise to the extent that the tariff increase reduces imports from Peru and hence the overall asparagus supply. U.S. asparagus producers will also face increasing competition from Mexican imports under NAFTA. In the longer term, the Free Trade Area of the Americas, currently being negotiated, could go beyond both NAFTA and ATPA by creating a duty-free trade zone in the Western Hemisphere for many products, including asparagus. If ATPA is reauthorized, U.S. asparagus producers, particularly of processed asparagus, will likely face some continued displacement from imports because the removal of tariffs on imports under ATPA allows fresh asparagus to be imported year-round. Since consumers tend to prefer fresh rather than processed asparagus when it is available, this displacement will likely continue.
Consumers can expect continued benefits from this year-round availability of fresh asparagus. Peruvian asparagus enters the United States when domestic production is low, resulting in an increased supply of fresh asparagus in the marketplace. This extended product availability is believed to be partly responsible for increases in the consumption of fresh asparagus and declines in the consumption of processed asparagus. As shown in figure 6, the consumption of fresh asparagus reached 250 million pounds in 1999—representing a 103-million-pound, or 70-percent, increase since 1990. In contrast, the consumption of processed asparagus declined by 39 million pounds, or 37 percent, since 1990. Peruvian asparagus will likely remain a strong competitor for domestic producers even if ATPA is not reauthorized and the normal tariff is restored—5 percent in 2 of the 5 months when the majority of Peru’s fresh asparagus is imported and 21.3 percent in the other 3 months. This is because U.S. consumers have expressed a preference for fresh rather than processed asparagus when it is available in the marketplace. In addition, Peru’s climate allows for the year-round production and export of fresh asparagus. Peru also enjoys relatively lower labor costs for this labor-intensive crop. These advantages have allowed Peru to become the world’s second largest producer of asparagus over the past decade and have given Peru the potential for increasing exports in the future. In addition, Peruvian growers began a marketing promotion program in 2000 to stimulate U.S. consumers’ purchases of fresh asparagus. Without ATPA, consumers would likely face decreased year-round availability of fresh asparagus and pay higher prices to the extent that the tariff increase reduces imports from Peru. Since fresh asparagus would not be readily available from other foreign producers, supplies would decrease, and consumer prices would likely rise. Regardless of what happens with the reauthorization of ATPA, U.S. asparagus producers will face increasing competition from other current and future trade agreements. In the near term, Mexico continues to be the most important source of imported fresh asparagus. Mexico’s advantage of lower transportation costs to U.S. markets is believed to offset any production advantages in ATPA countries. In addition, Mexico’s sizable shipments to the United States have occurred despite relatively high tariffs. As tariff rates under NAFTA are phased out through 2008, asparagus imports from Mexico will become even more competitive. Over the longer term, negotiations are under way to create a free trade zone among the 34 democracies of the Western Hemisphere. The Free Trade Area of the Americas could create a duty-free trade zone more extensive than both ATPA and NAFTA, which would result in the elimination of tariffs on many products, including asparagus, according to the U.S. Trade Representative. China, the world’s largest producer of asparagus, has been granted normal trade relations status by the United States, resulting in lower tariffs. As a result, China has begun increasing its exports of processed asparagus to the United States.
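The consumption figures cited above can be cross-checked with simple arithmetic. The sketch below uses only the rounded values reported in the text; the 1990 processed-consumption level it derives is implied by those rounded figures rather than taken from the underlying data, and the variable names are placeholders.

```python
# Back-of-the-envelope check of the consumption figures cited above, using only
# the rounded values reported in the text (all quantities in millions of pounds).

fresh_1999 = 250          # fresh consumption in 1999
fresh_increase = 103      # increase since 1990
fresh_1990 = fresh_1999 - fresh_increase            # about 147 million pounds
fresh_pct_increase = fresh_increase / fresh_1990    # about 0.70, i.e., the 70 percent cited
print(f"Fresh consumption, 1990: {fresh_1990}; increase since 1990: {fresh_pct_increase:.0%}")

processed_decline = 39        # decline since 1990, in million pounds
processed_pct_decline = 0.37  # the 37-percent decline cited
processed_1990 = processed_decline / processed_pct_decline   # implied 1990 level, about 105
processed_1999 = processed_1990 - processed_decline          # implied 1999 level, about 66
print(f"Processed consumption, implied 1990: {processed_1990:.0f}; implied 1999: {processed_1999:.0f}")
```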
U.S. trade law contains several provisions under which domestic industries may seek relief from injury caused by foreign imports. According to asparagus industry representatives, asparagus producers have not pursued relief under any of these provisions because the cost of bringing a case to ITC is considered too burdensome for such a small industry. Alternatively, industry representatives have proposed that the Andean Trade Preference Act be amended to remove duty-free treatment for asparagus when an ATPA country is deemed to be economically competitive with U.S. producers. Under section 201 of the Trade Act of 1974, domestic industries can petition ITC to investigate whether increased imports have caused them serious injury or threat of serious injury. Upon receiving a petition, ITC conducts an investigation to substantiate the allegation. ITC’s investigation is designed to determine whether a product is being imported into the United States in such increased quantities as to be a substantial cause of serious injury or threat of serious injury to the domestic industry. In making its determination, the Commission must consider all relevant economic factors, including whether (1) productive facilities in the industry have been significantly idled, (2) a significant number of firms have been unable to operate at a reasonable level of profit, and (3) significant unemployment or underemployment has occurred within the industry. ITC also considers, among other things, whether there is a decline in sales or market share; a higher and growing inventory of the product; and a downward trend in production, profits, wages, productivity, or employment in the industry. In addition, the Commission must consider imports from all sources. There is no requirement that the increases in imports or serious injury to a domestic industry be attributable to an unfair trade practice. If ITC makes an affirmative injury determination, it is required to recommend to the President an action that would be most effective in addressing the injury. Recommended actions may include increased tariffs, quotas, trade adjustment assistance to workers (such as job training), or a combination of these measures. As part of its recommendation, ITC must also state whether and to what extent its findings and recommendations apply to imports from ATPA countries. Following the receipt of ITC’s recommendations, the President may take one of several actions. These include taking (1) the action recommended by ITC, (2) other action deemed appropriate, or (3) no action. However, the President cannot take action that is solely in the form of suspension of duty-free treatment for ATPA imports unless the Commission’s investigation has found that the serious injury or threat of serious injury to the domestic industry resulted from the duty-free treatment. In any event, the President is required to report to the Congress what action, if any, he intends to take. If the President takes action that differs from ITC’s recommendation or takes no action, the Congress may enact a joint resolution directing that he proclaim the action recommended by ITC. The trade act also authorizes ITC to make preliminary determinations and recommendations to the President for provisional relief in two situations. Under the first situation, an industry producing a perishable agricultural commodity that has already petitioned ITC and is undergoing a section 201 investigation may file a request with the U.S. Trade Representative for the monitoring of imports. The U.S.
Trade Representative may then request that ITC monitor imports. If an ITC monitoring investigation has been under way for at least 90 days, then the industry producing the domestic product may request, in a section 201 petition with respect to imports of the monitored product, that a remedy be applied on a provisional basis, pending completion of a full section 201 investigation and presidential review. ITC would have 21 days to make a recommendation concerning provisional relief, and the President would have 7 days to make a decision. Any provisional relief granted by the President upon ITC’s recommendation would generally be in the form of increased tariffs. Under a second situation, an industry filing a section 201 petition may request provisional relief if it believes critical circumstances exist. Such circumstances exist when clear evidence shows that increased imports are a substantial cause of serious injury or threat of serious injury to the domestic industry and delay in taking action would cause damage that would be difficult to repair. ITC would have 60 days to make a critical circumstances determination and make a recommendation, and the President would have 30 days to decide what, if any, action to take. Such an action would generally be in the form of a tariff increase. In addition, ATPA specifically provides that an industry filing a section 201 petition with ITC can then also petition the Secretary of Agriculture for provisional relief. Under the ATPA special emergency relief provision, the Secretary of Agriculture and the President are authorized to make speedier determinations when an investigation of a perishable agricultural product under the trade act is ongoing. If the Secretary of Agriculture’s determination is affirmative, the President may temporarily withdraw the product’s duty-free treatment or take no action. No preexisting monitoring investigation by ITC is required. The Secretary and President have a total of 21 days to make their final determination. The emergency action would be rescinded upon a negative determination of ITC’s investigation, a presidential determination of changed circumstances, or the decision to take another relief action. To date, asparagus producers have not petitioned ITC for an investigation based on allegations of serious injury from imports under ATPA. According to industry representatives, the cost associated with preparing a case is burdensome, especially for such a small industry. Alternatively, industry representatives, in comments submitted to the U.S. Trade Representative on the operation of ATPA in 1997, have requested that the law be amended to remove duty-free treatment for asparagus when an ATPA country is deemed to be economically competitive with U.S. producers. Without a petition from the industry, the ITC has not initiated an investigation. We provided USDA’s Economic Research Service and Foreign Agricultural Service, staff from the U.S. International Trade Commission, and the U.S. Trade Representative with a draft copy of this report for their review and comment. We met with Economic Research Service agricultural economists, including the Team Leader for Fruit and Vegetable Analysis; ITC’s staff representing the Offices of External Relations, Economics, Industries, and General Counsel; and U.S. Trade Representative officials, including the Deputy Assistant U.S. Trade Representative for Latin America. 
They generally agreed with the substance of the report and provided technical and clarifying comments, which we incorporated as appropriate. In a letter commenting on the report, USDA’s Foreign Agricultural Service stated that the report does not adequately address the congressional rationale for providing duty-free access for asparagus imports under ATPA. The Foreign Agricultural Service stated that it does not believe that Peruvian asparagus production provides an alternative economic opportunity for coca producers and workers—the stated purpose for the trade act. Determining whether ATPA is meeting its intended purpose of providing alternative economic opportunities for coca producers and workers in the four Andean countries was beyond the scope of our review. However, our report does describe how asparagus production has contributed to economic development in Peru. The Foreign Agricultural Service also commented that the data we used in our draft report did not adequately reflect the current impact of Peruvian asparagus imports on the U.S. market. The 1999 quantity and value of domestic asparagus production data that we used to prepare our draft report were the most current available at the time of our review. Subsequently, in January 2001, USDA’s National Agricultural Statistics Service released its Vegetables 2000 Summary report. We updated our draft with the production information from that report. The updated information did not alter the results of our analyses. Appendix III presents the Foreign Agricultural Service’s comments on the report and our detailed response. We conducted our review from September 2000 through February 2001 in accordance with generally accepted government auditing standards. Appendix I discusses our scope and methodology. Copies of this report are being sent to interested congressional committees; the Honorable Steve Koplan, U.S. International Trade Commission; Ambassador Robert B. Zoellick, U.S. Trade Representative; the Honorable Ann Veneman, Secretary of Agriculture; and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report were Robert C. Summers, Carol E. Bray, and John C. Smith. To determine the impact that the Andean Trade Preference Act (ATPA) has had on domestic asparagus producers and consumers and the likely impact of its reauthorization, we interviewed and obtained information from representatives from the federal government, asparagus producers’ associations, and research institutions. Specifically, we obtained and reviewed the annual reports prepared by the U.S. International Trade Commission (ITC) on ATPA’s impact on U.S. industries and consumers and interviewed ITC staff about the basis for their conclusions. We also obtained and reviewed the model used by ITC to analyze ATPA’s effect on the U.S. economy. We obtained and reviewed reports from the Office of the U.S. Trade Representative (USTR) on ATPA’s operation and interviewed officials concerning its impacts. We analyzed domestic and international asparagus production and marketing data provided by the U.S. Department of Agriculture’s Economic Research Service and Foreign Agricultural Service. In addition, we obtained production and marketing information from representatives of the California Asparagus Commission, Michigan Asparagus Advisory Board, Washington Asparagus Commission, and Peruvian Asparagus Institute. 
We reviewed studies on trade impacts from the University of California-Davis and obtained and reviewed two econometric models from Washington State University that investigated prices, production, and income in the U.S. asparagus industry. We adjusted prices in this report to 1999 dollars using the Gross Domestic Product implicit price deflator to more accurately compare prices and costs over time. Data on U.S. asparagus production and values are as of December 2000. All other data used in the report are as of December 1999, the most current available at the time of our review. To describe the trade remedies available to domestic industries adversely affected by imports under ATPA, we reviewed the applicable provisions of ATPA and other U.S. trade legislation, and interviewed officials from ITC and USTR. We also interviewed representatives of asparagus trade associations in California, Michigan, and Washington to determine their use of these remedies. We conducted our review from September 2000 through February 2001 in accordance with generally accepted government auditing standards.
The following are GAO’s comments on the letter from the U.S. Department of Agriculture’s Foreign Agricultural Service dated March 2, 2001.
1. We do not agree. Determining whether ATPA is meeting its intended purpose of providing alternative economic opportunities for coca producers and workers in the four Andean countries was beyond the scope of our review. However, our report does describe how asparagus production has contributed to economic development in Peru.
2. We disagree. The report provides information on both the fresh and processed sectors of the U.S. asparagus industry from 1990 to 2000. For example, figures 1, 4, 5, and 6 contain information on fresh and processed asparagus.
3. We disagree. As we reported in figure 5, the inflation-adjusted prices for fresh asparagus have trended upward from 1990 through 2000, while prices for processed asparagus remained relatively flat during this same period.
4. See comment 1.
5. ITC’s most recent study estimates that ATPA displaced 2 to 8 percent of the total value of domestic fresh asparagus production from what it would have been without the act. The 1999 data used for its study were the most current information available at the time of its analysis.
6. We agree. The scope of our work did not include evaluating the economic impact on domestic growing regions.
7. We disagree. The quantity and value of domestic asparagus production data for 1999 that we used to prepare our draft report were the most current available at the time of our review. Subsequently, in January 2001, the Department of Agriculture’s National Agricultural Statistics Service released its Vegetables 2000 Summary report. We updated our draft with the production information from that report. The updated information did not alter the results of our analyses.
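The price adjustment described in the methodology above is a standard deflation step. The sketch below shows the general form of that calculation; the function name and the deflator index values are hypothetical placeholders for illustration and are not the series used in the report.

```python
# Minimal sketch of the price adjustment described in the methodology above:
# converting a nominal price into constant 1999 dollars with the GDP implicit
# price deflator. The deflator index values below are hypothetical placeholders,
# not the actual series used for the report.

def to_1999_dollars(nominal_price, deflator_in_year, deflator_in_1999):
    """Rescale a nominal price into 1999 dollars using deflator index values."""
    return nominal_price * (deflator_in_1999 / deflator_in_year)

# Hypothetical example: a $1.00-per-pound price observed in 1990, with assumed
# deflator index values of 81.6 for 1990 and 97.9 for 1999.
adjusted = to_1999_dollars(1.00, deflator_in_year=81.6, deflator_in_1999=97.9)
print(f"${adjusted:.2f} per pound in 1999 dollars")  # about $1.20
```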
U.S. asparagus imports increased in the 1990s and now comprise nearly one-half of the asparagus consumed in the United States. Peru is the second largest source of imported asparagus and benefits from duty-free treatment under the Andean Trade Preference Act (ATPA). ATPA is estimated to have displaced between two and eight percent of the value of domestic production from what it would have been without the act. Although the supply of fresh asparagus from imports has increased since ATPA's enactment, consumer demand has been strong, and prices have risen. In addition, an apparent increase in consumer preference for fresh asparagus has contributed to a downward shift in the domestic demand for processed asparagus. Most of the decline in the domestic production of processed asparagus occurred in Michigan and Washington, the two states that produce most canned and frozen asparagus. If ATPA is reauthorized, domestic producers of asparagus and, in particular, asparagus for processing, will likely face continued displacement, but consumers can expect continued benefits from the year-round availability of fresh asparagus. However, some of this displacement will likely occur even if ATPA is not reauthorized and the normal tariff is imposed. If ATPA is not reauthorized, consumers would likely have decreased availability and pay higher prices to the extent that tariff increases reduce Peruvian asparagus imports and hence total asparagus supplies. Domestic industries can petition the U.S. International Trade Commission to investigate whether increased imports under the ATPA have caused them serious injury or threat of serious injury. If the Commission finds serious injury, it may recommend relief options to the President, including suspending duty-free treatment for imports.
DOD uses a variety of aircraft to move weapons, equipment, and troops from the United States to and within theaters of operation. C-5s and C-17s are used for strategic airlift. They carry weapons and equipment too large for any other DOD aircraft from the United States to staging locations throughout the world. The family of C-130 aircraft, which includes the C-130E, C-130H, and C-130J aircraft, is then the primary asset used to move weapons, equipment, and troops within a theater of operation. The C-17 is dually capable of performing both strategic and tactical airlift missions and supplements the C-130 for tactical airlift. All of these aircraft are owned and operated by the Air Force and are considered part of the common user pool of aircraft that can be used to support any of the services’ missions. DOD also relies on the Air Force’s aerial refueling tankers (KC-10 and KC-135), commercial aircraft, and leased aircraft to supplement airlift capabilities. Officials at the U.S. Transportation Command and its Air Force component, the Air Mobility Command, decide how best to use the assets on a daily basis. Often, these aircraft are scheduled for departure when they have a full load, to ensure assets are used cost-effectively. The services may also use their own airplanes and helicopters that are not in the common user pool to move people and cargo within a theater of operation. For example, these assets include the Army’s C-23 Sherpas and the Marine Corps’ V-22 Osprey aircraft. These aircraft are used to perform time-sensitive, mission-critical requirements and may take off without full loads since urgency is the primary driver for the mission, not efficiency or cost-effectiveness. The Air Force is in the process of modernizing its C-5 and C-130H aircraft and acquiring C-17s, C-130Js, and C-27Js to meet its future strategic and tactical airlift requirements and improve aircraft availability. It plans to retire C-130Es from the tactical airlift fleet by 2014. C-5s are being modernized in two phases. During the first phase, known as the Avionics Modernization Program (AMP), aircraft receive upgraded avionics capabilities and an all-weather flight control system. During the second phase, known as the Reliability Enhancement and Reengining Program (RERP), aircraft engines are replaced and electrical, fuel, and other subsystems are modified. Together, the two modifications will help improve the C-5’s wartime mission capable rate. C-130H aircraft are also receiving an AMP modification and will undergo a center wing box replacement because of severe cracking discovered in that area. DOD periodically assesses global threats, the national military strategy, and its force structure to determine future airlift requirements and to judge the sufficiency of its acquisition and modernization plans. The analytical basis for DOD’s current airlift requirements is the Mobility Capabilities Study completed in December 2005. Officials used the study results to report in the 2006 Quadrennial Defense Review that 180 C-17s and 112 fully modernized C-5s (those that received the AMP and RERP modifications) would be sufficient to meet the national military strategy for strategic airlift with acceptable risk. This could change pending the completion of the ongoing DOD Mobility Capabilities and Requirements Study 2016, two other DOD-sponsored airlift studies conducted by the Institute for Defense Analyses (IDA) and RAND, the 2010 Quadrennial Defense Review, and potential changes in threat assumptions and the national security strategy.
The IDA study has already been completed, and the remaining studies are expected to be completed by January 2010. Over the last 2 years, DOD has restructured its airlift investments, primarily due to sharp cost increases for modernization programs and changes in requirements. The Air Force now intends to fully modernize less than one-half of the C-5s it originally planned and will procure additional C-17s. C-130 avionics modernization quantities were also cut by more than half, and the schedule was delayed due to cost increases. The Air Force is procuring more C-130J models than planned, due in part to a decision to retire the older C-130E model. Pending decisions on aircraft retirements, additional modifications, and new acquisitions could further affect future costs and the force structure. Furthermore, changing needs and uncertain strategies could lead to cost, schedule, and performance variances on two new airlift programs, the C-27J and the JFTL. Appendix I contains more in-depth cost, schedule, and performance information on the department’s strategic and tactical airlifters that we reviewed. DOD has cut its C-5 modernization efforts by more than half and is acquiring additional C-17s. Significant cost increases on the C-5 RERP and AMP programs drove up unit costs and delayed schedules. These problems, along with additional congressional appropriations that DOD is using to procure more C-17s, led to a decision to fully modernize only 52 C-5 aircraft instead of the entire fleet. Congress has provided enough funding for DOD to procure 33 more C-17s. The last one will be delivered in March 2011. Substantial C-17 production line shutdown costs—ranging from about $465 million to about $1 billion by Air Force and Boeing estimates, respectively—have yet to be determined, but will need to be funded soon. Table 1 summarizes changes in cost and quantities from original estimates. DOD has nearly completed its C-17 acquisition program and is about midway through the C-5 AMP modernization program. According to program officials, 24 C-17s are yet to be delivered and 57 C-5s still need the AMP modification. The C-5 RERP modernization program is just beginning the low rate initial production phase. Only 3 of 52 C-5s have received the RERP modification as part of the development program. DOD has already spent about $69.2 billion in research, development, test and evaluation (RDT&E) and procurement funds on these programs, and program officials project they will need to invest about $7.7 billion to complete the programs as currently planned (see table 2). The department planned to spend about $12 billion to make AMP and RERP modifications to the fleet of C-5 aircraft by 2020. However, the Air Force declared a Nunn-McCurdy cost breach in the RERP program in September 2007, due to increased labor and parts costs. The AMP effort required additional software development to address deficiencies found during developmental testing. Development costs would have been higher except that the Air Force decided not to address 250 deficiencies and 14 operational requirements in this program. These events resulted in revised plans to provide the AMP upgrade to all C-5 aircraft and the RERP modification to 52 aircraft. The combined cost for both modifications was reduced to $9.1 billion, but now less than one-half of the fleet will be fully modernized and at a much higher unit cost than originally estimated—$160.5 million for both modifications versus $96.1 million.
The portion of the fleet that does not get both modifications will continue to experience mission capable rates of around 50 percent compared to about 75 percent for the portion that does get both modifications. The last modifications are expected to be completed in 2015. Additional costs and changes in the force structure for the C-5 and C-17 are possible pending decisions on future modifications and retirements of older C-5s. For example, program officials said that many of the deficiencies and requirements dropped from the current C-5 AMP effort will be addressed in annual AMP software upgrades, the C-5 RERP, or a new block upgrade program that is scheduled to begin in fiscal year 2010. The C-17 is also being modernized through a series of aircraft upgrades designed to address emerging issues such as international airspace access requirements and critical operational and safety issues. Significant C-5 cost growth and further delays are possible if the RERP program is not adequately funded. We previously reported that, according to the department’s Cost Analysis Improvement Group, the RERP program was underfunded by about $294 million, with additional funding needed in fiscal years 2012 and 2013. Replacement engines are the costliest portion of the RERP upgrade, and DOD officials said that if funding is insufficient to meet yearly production quantities in existing purchase agreements, anticipated price breaks will not occur, which could result in another Nunn-McCurdy unit cost breach and program restructure. Department officials said the Air Force is committed to fully funding the RERP modification of 52 aircraft, but did not provide us with new budget data for fiscal years 2011 and beyond. Planned quantities of C-17s have fluctuated over the years. C-17 procurement began in 1988, and the Air Force initially planned to acquire 210 aircraft. Following a major acquisition review in 1990, the program was reduced to 120 aircraft because of technical problems and funding shortfalls during the full-scale development program, which resulted in higher-than-expected cost increases and schedule delays. In subsequent years, DOD expanded the program from 120 aircraft to 180 aircraft and, in the past 3 fiscal years, Congress has provided funding that would allow DOD to procure 33 additional aircraft: 10 in fiscal year 2007, 15 in fiscal year 2008, and 8 in fiscal year 2009. This would bring the total number of C-17s DOD plans to procure to 213. As of July 2009, DOD had taken delivery of 190 aircraft. The program is expected to end with the delivery of the 213th aircraft in March 2011, at which time the production line could be shut down if Boeing does not receive additional international orders for the aircraft. The Air Force’s fiscal year 2010 budget includes $91.4 million to fund some of the shutdown costs, and a DOD official stated that additional funding would be included in future budgets. However, final shutdown costs have not been negotiated between the Air Force and Boeing, the prime contractor. Last year we reported that the Air Force estimated the costs to be around $465 million and that Boeing’s estimate was about $1 billion. DOD’s tactical airlift investments have also experienced cost and schedule fluctuations and continue to face significant uncertainty. The AMP program to modernize the C-130H fleet has been substantially reduced, although officials are examining a possible follow-on effort to include more aircraft.
Procurement quantities for the C-130J have increased to replace retiring C-130E models, and plans, quantities, and employment strategies for the newest tactical aircraft, the C-27J, have yet to be finalized following a decision to transfer the joint program entirely to the Air Force. Table 3 summarizes changes in cost and quantity for current tactical aircraft. The JFTL, expected to augment the C-130 fleets, is in concept development, and cost and quantity estimates are unavailable. DOD has not yet begun its C-130 AMP production program and has only taken delivery of 2 C-27Js as of July 2009. More than half of the department’s planned C-130J aircraft—95 of them—have yet to be procured. DOD has already spent $10.1 billion in RDT&E and procurement funds on these programs. Program officials project it will cost about $12.6 billion to complete the programs as currently planned (see table 4). The C-130 AMP entered system development in 2001, but funding instability and problems integrating hardware and software, as well as an Air Force decision to exclude C-130E aircraft from the program, triggered a Nunn-McCurdy unit cost breach in February 2007. The program was subsequently restructured to include far fewer aircraft—221 instead of 519—at a cost $1.8 billion greater than the original program estimate. In spite of the restructuring, incomplete production decision documentation and software integration problems, as well as senior leadership concerns about the acquisition strategy, have delayed a low-rate production decision by more than a year from the revised baseline—a slip of more than 4 years from the initial estimate. As of July 2009, the program was still awaiting approval from the Under Secretary for Acquisition, Technology and Logistics to award a production contract. The Air Force is considering another program restructure as well as a follow-on effort to modernize avionics on additional C-130 aircraft, but officials did not provide us with an estimate of costs and quantities. The department is now procuring more C-130J aircraft than originally expected, in part because of a decision to retire C-130Es. Production quantities for J-model aircraft have grown significantly over the last several years, from an initial baseline of 11 aircraft in 1996 to a current estimate of 168 aircraft, but according to program estimates, program unit costs have remained relatively stable. Program officials estimate a total program cost of $15 billion. As of July 2009, 73 C-130Js have been delivered. Recently, the department took delivery of the first two C-27J airlifters as part of the Joint Cargo Aircraft program to provide direct support for Army time-sensitive, mission-critical troop resupply. In June 2007, the Under Secretary for Acquisition, Technology and Logistics approved an acquisition program baseline for the joint program of 78 aircraft, with the Army planning to buy 54 aircraft, and the Air Force 24. However, as part of its fiscal year 2010 budget request, the department transferred the program, along with the resupply mission it supports, exclusively to the Air Force and reduced the program from 78 to 38 aircraft. Air Force operational plans for the fleet and employment concepts for meeting Army direct support requirements have not been finalized. The Army and the Air Force are jointly pursuing the JFTL to replace the C-130H airlifter and augment the rest of the C-130 fleet.
The joint concept development effort was initiated in January 2008 following a decision by the Army and Air Force Chiefs of Staff to merge requirements for separate heavy lift efforts in progress at the time. The JFTL is anticipated to have a payload capacity of up to 36 tons, with a combat mission radius of 500 nautical miles. However, the services have different concepts for the aircraft. The Army concept is for a vertical take-off and landing tiltrotor that could provide sustainment of forces at the point of need and enable the maneuver of a mounted force (i.e., forces deployed with combat vehicles) by air. The Air Force is pursuing a fixed-wing concept that would address the need to operate on short, soft, or rough airfields and the need for greater speed. Officials from both services stated they would like to have the JFTL initial capabilities document validated and begin work on an analysis of alternatives in the late summer of 2009, to help ensure a sufficient basis for budget deliberations in March 2010. As of July 2009, this had not occurred. Documents provided by these officials indicate that system development for whichever concept is selected is not expected to begin until at least 2014, with the new system to be fielded beginning around 2024. Additional funds provided by Congress for C-17 procurement more than offset the strategic airlift gaps associated with reduced C-5 modernization plans. However, there is a potential future gap in tactical airlift capabilities for transporting medium-weight Army equipment that cannot fit on C-130 aircraft. The C-17 fleet, in its dual role of providing both strategic and tactical airlift, currently provides this capability and is anticipated to continue to do so for many years. The JFTL is envisioned to eventually replace the C-130H and perform this and other roles, but will not be available for 15 years or more under the current acquisition strategy. While the various mobility studies acknowledge the C-17s’ significant dual role, they did not comprehensively evaluate an expanded future use of the C-17 for the transport of medium-weight equipment and how this could affect the force structure, the C-17s’ service life, and when to shut down the C-17 production line. For example, the studies do not quantify current and anticipated future use of the C-17 for tactical airlift. This is because DOD officials do not consider the C-17 to be a suitable substitute for the JFTL. In addition, there are differing opinions about the transport of small loads in direct support of Army units, which could call into question the quantity of C-27Js needed to perform the Army mission. Two studies reached somewhat different conclusions about the cost-effectiveness of using C-130Js and C-27Js for this mission. The Air Force and Army are working on a plan for how the Air Force will meet Army direct support requirements, but the details have not been finalized. DOD’s recently established portfolio management structure is supposed to provide a useful forum to address the broad range of airlift investment decisions. However, efforts so far have been primarily focused on new programs rather than on addressing gaps and redundancies across the current portfolio and on making other airlift decisions, such as when and how many C-5s to retire or the appropriate mix of C-130s and C-27Js needed to perform Army missions.
Following DOD’s decision to reduce the number of C-5s that will be fully modernized from 111 to 52 aircraft, Congress has appropriated around $5.5 billion that DOD plans to use to procure up to 23 additional C-17s. This would bring the total number of C-17s the Air Force now plans to acquire to 213 aircraft. DOD and Air Force officials believe this current quantity of C-17s more than adequately addresses their strategic airlift requirements in terms of the number of aircraft needed and the collective delivery capabilities. Table 5 shows the changes in the strategic airlift mix since the time the 2005 Mobility Capabilities Study was completed and the impact the different mixes have had on DOD’s ability to meet strategic airlift requirements for the timely inter-theater transport of required equipment and supplies. A recent IDA study concluded that 316 strategic airlifters, which include 205 C-17s, 52 fully modernized C-5s, and 59 partially modernized C-5s, meet DOD’s strategic airlift requirements established in the 2005 Mobility Capabilities Study. Further, if additional airlift capacity is needed above what the current mix of aircraft can deliver, it could be achieved without procuring additional C-17s or modernizing C-5s. Specifically, IDA found that additional capacity could be obtained by using C-5s at Emergency Wartime Planning levels; transporting some small oversize as well as bulk cargo on Civil Reserve Air Fleet aircraft; making use of host nation airlifters to the maximum extent possible; and using tankers not involved in tanker missions to carry cargo in theater. In the event that even more capacity is needed, the IDA study states that it would be more cost-effective to provide the RERP modification to more C-5s than to procure additional C-17s because the near-term acquisition costs are offset by reduced operation and support costs. IDA also concluded that retiring older C-5As to free up funds to buy and operate more C-17s would result in a less capable force at comparable overall cost and thus would not be cost-effective. A potential future capability gap exists in the deployment and redeployment of Army medium-weight weapon systems within a theater of combat. The C-17 is the only aircraft currently capable of transporting heavier equipment, such as combat-configured armored Strykers and Mine Resistant Ambush Protected vehicles, within a theater of operations as these are too large and bulky for C-130s to carry. However, the C-17 cannot transport this equipment into austere, short, or unimproved landing areas. DOD’s long-term plan is to use the JFTL, the planned C-130H replacement, to transport these vehicles in theater, including to such access-challenged locations. However, it will not be available for at least 15 years as currently planned. While the various mobility studies acknowledge the C-17 can perform both strategic and tactical airlift missions, none of the three recently completed or ongoing studies comprehensively considered the C-17 in the tactical force structure, even though about 20 percent of the tactical sorties flown by the C-17 fleet in fiscal year 2007 were for missions where loads were too large for C-130s. As such, DOD has not evaluated the impact the increasing tactical heavy lift mission will have on future tactical airlift requirements, the C-17’s service life, its availability to perform strategic airlift and other tactical airlift missions, and C-17 production shutdown plans.
DOD officials do not believe that the C-17 is a suitable substitute for the JFTL mission. A DOD official stated that preliminary results of the Mobility Capabilities and Requirements Study 2016 show that in the worst-case planning scenario there would be enough C-17s to perform their primary role as strategic airlifters, as well as some tactical missions, through 2016. This is because the study analysis shows the peak demands for the C-17 and the C-130 occur at different times and the C-17 is aging as planned. However, officials indicated that none of the current mobility studies analyzed the need for the C-17 to perform additional tactical heavy lift missions for the 8-year period between 2016 and 2024, when the JFTL is expected to be fielded. Furthermore, because we were not granted access to the preliminary study information, we could not ascertain the extent to which the C-17’s heavy lift mission had been considered in DOD’s analysis through 2016. C-17 production is scheduled to end in March 2011. As we previously reported, a well-reasoned, near-term decision on the final C-17 fleet size could help DOD avoid substantial future costs from ending production prematurely and later restarting production. For example, the Air Force has estimated that restoring the production line could cost $2 billion. Costs and challenges associated with such a course include hiring and training a workforce of nearly 3,100 people, reinstalling and restoring production tooling, and identifying suppliers and qualifying their parts and processes. Although it is too early to comment on JFTL program outcomes, we believe DOD officials will need to exercise caution to avoid pitfalls we have previously identified in connection with developing new weapon systems so that the new system will be delivered on time and within cost estimates. These pitfalls include taking a revolutionary rather than an evolutionary approach to weapon system development; overpromising performance capabilities; increasing requirements; and understating expected costs, schedules, and risks associated with developing and producing the weapon. DOD understands many of the problems that affect acquisition programs and has revised its acquisition policy as a foundation for establishing sound, knowledge-based business cases for individual acquisition programs. For example, the policy recommends the completion of key systems engineering activities before the start of development, including a requirement for early prototyping, and establishes review boards to evaluate the effect of potential requirements changes on ongoing programs. The policy also supports evolutionary acquisitions and states that increments should be fully funded, include mature technologies, and normally be developed in less than 5 years. However, to improve outcomes, DOD must ensure that its policy changes are consistently implemented and reflected in decisions on individual programs. Both Air Force and Army science and technology officials indicated that no new technology invention is needed for either of their concepts. However, tiltrotor technology has never been applied to a system of the size needed to carry all the Army’s ground vehicles (excluding the M-1 tank). In fact, the Army envisions that the JFTL’s payload capacity will be nearly 5 times that of the V-22, the world’s first production tiltrotor aircraft, and nearly 3 times that of the CH-47 Chinook, a heavy-lift helicopter used to transport ground forces, supplies, and other critical cargo.
In addition, the Senate Armed Services Committee recently noted that to support the JFTL initial operational capability, a prototype would need to be flying by 2015. Yet the committee could not identify any DOD funds budgeted for accomplishing this objective and further observed that a competitive prototyping effort conducted as part of a formal acquisition program would take years to begin. As such, the committee requested that the Under Secretary for Acquisition, Technology and Logistics, among other things, assess the merits of immediately initiating a low-cost, highly streamlined competitive prototyping effort to determine whether cost and performance goals can be met, help define requirements, and sustain the industrial base. Questions remain about the number of C-130s and C-27Js needed to support Army direct support missions. As stated earlier, as part of its fiscal year 2010 budget request, the department transferred the C-27J program, along with the resupply mission it supports, exclusively to the Air Force and reduced the program from 78 to 38 aircraft. In a recent hearing, congressional leaders questioned the Secretary of Defense about how the Air Force will fulfill this mission with fewer aircraft than initially anticipated. In response, the Secretary of Defense stated that the reduced number of C-27Js was based on the number needed to recapitalize the Army’s fleet of C-23 Sherpas and that uncommitted C-130 aircraft can be used to complement the C-27Js to fulfill the Army’s mission. In addition, he said there needs to be a change in the Air Force’s culture with respect to how the direct support mission is accomplished. The Air Force and Army are in the process of developing plans on how the Air Force intends to fulfill the direct support mission, which would include important decisions on employment concepts, basing, and life-cycle support. The plans are in various stages of development and are expected to be completed by October 2012. However, congressional concerns remain regarding the service’s commitment to that mission. This concern is based on historical instances in which the Air Force assigned lesser priority to direct delivery missions compared with traditional airlift operations, most notably during the Vietnam War when the Air Force assumed ownership of the Army’s C-7 Caribou aircraft and subsequently dropped some missions. It is also unclear what effect this program change will have on the Air Force’s C-130 fleet operations. In recent studies, IDA and RAND assessed the use and roles of the C-130s and C-27Js in performing tactical missions. Although the study parameters were different, both studies looked at the tactical movement of cargo. IDA’s analysis focused on the use of these aircraft within the context of major combat operations as well as persistent global involvement in numerous smaller operations. IDA found that the tactical fleets it examined were equally cost-effective at transporting cargo in major combat operations. Whereas C-130s are more cost-effective than C-27Js in specific missions that demand full loads, the opposite is true when missions require small loads. Further, in non-major combat operations, IDA found that the global demand for small loads spread across many different locations made additional C-27Js more cost-effective than additional C-130s. According to RAND officials, RAND work on this topic has been under way for several years.
The first RAND study focused on determining the most cost-effective way to recapitalize the C-130 fleet in order to meet the official wartime requirement. This study concluded that acquisition of the extended version of the C-130J was the most cost-effective option to perform tactical missions defined in the officially approved wartime requirement. The C-27J provides about 40 percent of the cargo capacity (in terms of pallets) of the extended C-130J at about two-thirds of the cost, based on net present value total life cycle costs. The study also concluded that the extended C-130J and the C-27J were equally cost-effective at conducting the ongoing resupply missions in Iraq and Afghanistan. RAND was then asked to consider the cost-effectiveness of the C-27J in eight additional missions that were not part of the official requirement. The study concluded that the C-27J was not cost-effective or appropriate for five of those missions and was comparable to the C-130J in three of the missions. RAND also found that the C-130J and the C-27J have comparable performance under operationally consistent circumstances of delivering the same amount of cargo at the same distances. It should be noted that neither of these studies addressed recent C-27J program decisions that resulted in the transfer of the program to the Air Force and a reduction in aircraft quantities. Likewise, neither of the studies considered the number of C-130s that may be necessary to supplement these missions or the impact the missions may have on the C-130 fleet. Furthermore, because the C-27J was not initially considered part of the common user pool, the ongoing DOD Mobility Capabilities and Requirements Study 2016 did not include the C-27J in its common user pool analysis. Following the restructuring, an Air Force official told us that, while the C-27J’s primary use is expected to be for direct support of the Army, it would also be available for movement of cargo in the common user pool. In September 2008, the department instituted a new process for helping senior leaders make investment decisions, including those for airlift. Known as capability portfolio management, the new process enables the department to develop and manage capabilities, as opposed to simply individual programs, and enhance the integration and interoperability within and across sets of capabilities. Previously, we reported that leading commercial companies use portfolio management to collectively address all of their respective investments from an enterprise level rather than as independent and unrelated initiatives. This approach, among other things, allows the companies to weigh the relative costs, benefits, and risks of potential new products and helps the companies balance near- and future-term market opportunities. According to DOD officials, airlift issues fall under the purview of the logistics portfolio and are included in the deployment and distribution subgroup, along with sealift and ground transportation. Figure 1 shows the major capability areas included in the logistics portfolio. The new capability portfolio management directive states that DOD shall use capability portfolio management to advise the Deputy Secretary of Defense and the heads of DOD components on how to optimize capability investments across the defense enterprise and minimize risk in meeting the department’s capability needs in support of strategy.
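The RAND comparison cited above (the C-27J offering about 40 percent of the extended C-130J's pallet capacity at about two-thirds of its life cycle cost) implies a simple cost-per-pallet relationship that helps explain why full-load missions favor the C-130J while small loads favor the C-27J. The sketch below works through that arithmetic using only the rounded ratios reported in the text; it is illustrative and is not a reproduction of RAND's or IDA's models.

```python
# Illustrative arithmetic behind the RAND comparison cited above: the C-27J is
# reported to offer about 40 percent of the extended C-130J's pallet capacity at
# about two-thirds of its life cycle cost. These are the rounded ratios from the
# text only; no actual cost or capacity data are used.

c27j_capacity_ratio = 0.40   # C-27J pallet capacity relative to the extended C-130J
c27j_cost_ratio = 2 / 3      # C-27J life cycle cost relative to the extended C-130J

# When both aircraft fly full, cost per pallet moved (extended C-130J normalized to 1.0).
cost_per_pallet_ratio = c27j_cost_ratio / c27j_capacity_ratio
print(f"C-27J cost per pallet on full-load missions: {cost_per_pallet_ratio:.2f}x the C-130J")  # ~1.67x

# For a load small enough to fit on either aircraft, the per-sortie cost comparison.
print(f"C-27J cost for a small, single-aircraft load: {c27j_cost_ratio:.2f}x the C-130J")  # ~0.67x
```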
Transportation Command share responsibilities for managing the logistics portfolio. They are expected to identify airlift issues, priorities, and capability resources and mismatches (gaps, shortfalls, and redundancies). According to officials who assist with logistics capability portfolio management activities, logistics portfolio managers now have access they may not have had before to the Deputy's Advisory Working Group, whose members include the Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff, to discuss unresolved logistics issues. We believe portfolio management offers DOD an opportunity to address the full range of airlift issues, but DOD's implementation thus far has not had a significant impact on the way airlift assets are managed. Officials we spoke with stated that the Under Secretary of Defense for Acquisition, Technology and Logistics and U.S. Transportation Command continue to focus on activities they were already performing prior to the establishment of the portfolio, focusing mainly on new weapon system programs and future capabilities and less on modification programs for legacy aircraft. For example, the U.S. Transportation Command has been and continues to be responsible for developing an integrated priorities list that details the top new capabilities needed and identifying capability gaps and shortfalls for airlift. The Under Secretary of Defense for Acquisition, Technology and Logistics continues to play an advisory role for addressing these gaps and shortfalls. Officials stated that to date, the logistics portfolio managers have not provided input to recent or upcoming airlift decisions related to the appropriate mix of strategic and tactical airlifters, changes in modernization programs, C-5 retirements, C-17 production shutdown, and changes in the Air Force's roles and missions for airlift. In addition, no airlift issues have been brought to the working group for resolution. Given this approach, we believe the department is still at risk of continuing to develop and acquire new airlift systems and modernization programs without knowing whether adequate resources are available to complete programs within cost and schedule estimates. Growing fiscal pressures are forcing DOD leaders to look closely at weapon system investments. DOD has to make tough investment and programmatic decisions regarding strategic and tactical airlift in the near future. However, the path forward is not clear because recently completed and ongoing mobility studies lack some crucial information that would help department officials make sound airlift investment decisions. Namely, the studies do not quantitatively account for the increasing tactical role of the C-17, especially in light of the fact that C-130s are not capable of delivering the heavier equipment demanded by our warfighters and that the JFTL, which is envisioned to perform this mission, will not be available for 15 years. Further, the studies do not explore the possible use of C-27Js in a common user role or the impact on the fleet and number of C-130s needed to support Army time-sensitive, mission-critical requirements. While Congress and DOD appear to have addressed the strategic airlift capability gap, some fundamental questions remain: Can the Air Force adequately fund the C-5 RERP modification program over the next 5 years? When should C-5s be retired, and how many? And how many C-5s would need the AMP modification if some of the aircraft are retired?
Even larger questions exist for tactical airlift: Are 213 C-17s enough to perform both strategic and tactical missions? What are the potential impacts on C-17 service life, maintenance, and availability from its expected increased use in the future for the tactical airlift of heavier and bulkier Army equipment? How will the Air Force meet the Army's time-sensitive, mission-critical requirements with 40 fewer C-27J aircraft? Will there be a fundamental shift in the Air Force's roles and missions that would require the Air Force to assume more Army-specific missions? Can the department set technically realistic requirements for the JFTL and follow an evolutionary acquisition strategy that includes selecting mature technologies, normally developing increments in less than 5 years, and fully funding each increment? More information is needed to help the department address these questions and avoid the unnecessary expenditure of billions of dollars on redundant capabilities or a potentially premature C-17 production line shutdown. The airlift portfolio management team has the requisite authority to address these questions and influence budget decisions, but greater attention must be paid to all facets of the airlift life cycle—from cradle to grave. Making sound modernization and retirement decisions is just as important as deciding when and what type of new programs to start. Moreover, approaching these decisions from a portfolio perspective rather than on a weapon-system-by-weapon-system basis and considering new roles and missions for the Air Force may help the department strike the right balance for its airlift investments. We are making five recommendations to help improve DOD's management of strategic and tactical airlift assets. We recommend that the Secretary of Defense direct (1) the portfolio management team, consisting of U.S. Transportation Command and the Under Secretary of Defense for Acquisition, Technology and Logistics, to provide more comprehensive advice to senior leaders on the full range of airlift investment decisions, including new program starts, modernization efforts, and retirement decisions, as well as identifying alternatives for using existing common user aircraft to meet service-specific missions and considering new roles and missions for the Air Force; (2) the Office of the Secretary of Defense (Cost Assessment and Program Evaluation) and the Commander, U.S. Transportation Command, to develop a specific airlift plan that would identify when C-5s will be retired and the total number of additional C-17s, if any, that would be needed to replace C-5s or perform tactical heavy lift missions until the JFTL is fielded; (3) the Commander, Air Mobility Command, to determine the appropriate mix of C-27Js and C-130s needed to meet Army time-sensitive, mission-critical requirements and common user pool requirements; (4) the Air Force and Army to reach agreement on plans detailing how Army time-sensitive, mission-critical requirements will be addressed and prioritized against other Air Force priorities; and (5) the joint Air Force and Army program office to develop a plan to follow an evolutionary approach for developing the JFTL based on DOD acquisition policy that includes selecting mature technologies, normally developing increments in less than 5 years, and fully funding each increment. DOD provided us with written comments on a draft of this report; these are included in appendix II.
DOD partially concurred with all five recommendations, stating that it either has plans and processes in place or ongoing efforts to address our concerns. During the course of our review, DOD officials explained the steps they were taking to make strategic and tactical airlift decisions, but in some cases did not provide us with supporting documentation and, in other cases, the plans were in the initial stages of development and there was not yet sufficient detail for us to determine the extent to which they addressed our concerns. Despite the positive actions DOD described, we believe that the department’s efforts in some cases still fall short and that our recommendations are warranted to help guide subsequent actions and transition plans to effective implementation. DOD officials also provided technical comments on our draft and we revised our report where appropriate. In response to our first recommendation about the portfolio management team providing more comprehensive advice to senior leaders on the full range of investment decisions, DOD says it has a structured process in place for assessing its mobility capabilities and requirements that includes strategic and tactical airlift decisions. We understand that DOD has a process in place to make airlift decisions, but they are not being made from a comprehensive portfolio management perspective, per DOD regulation. DOD officials could not provide us with any evidence that the portfolio management team had even discussed airlift issues from a portfolio perspective, even though the logistics portfolio began as a pilot program for portfolio management 2 years ago. We believe DOD portfolio managers need to take a broader perspective on airlift issues to ensure that the appropriate amount of attention and resources are available to address the most pressing issues for new and legacy programs and to avoid unnecessary expenditure of funds for modernizations or acquisitions. Therefore, we do not believe that DOD’s response adequately addresses our recommendation. The department agreed with our second recommendation on the need to develop a plan for strategic airlift that identifies the number of C-5s that will be retired and the number of additional C-17s, if any that might be needed. In its comments, DOD stated that the Secretary of the Air Force, in coordination with the Office of the Secretary of Defense (Cost Assessment and Program Evaluation) and U.S. Transportation Command has already developed this plan based on the current level of congressional funding for the C-17 and preliminary results of the Mobility Capabilities Requirements Study 2016 and the Quadrennial Defense Review. Specifically, DOD officials believe an adequate number of C-17s have been procured to cover all necessary missions to satisfy the National Defense Strategy and will retire some C-5s. We were not provided any details about this plan for strategic airlift, the ongoing mobility study or the Quadrennial Defense Review to comment on the adequacy of the analysis, but believe that a thorough analysis is needed for senior leaders to make sound investment decisions. We are concerned about the adequacy of the plan because during the course of our review, DOD officials told us that the Mobility Capabilities Requirements Study 2016 does not specifically quantify the use of the C-17 in a tactical role or evaluate the impact on its service life resulting from the increased use in that regard. 
In 2007, over 20 percent of C-17 missions were tactical, and this share could grow given that the C-17 is the only aircraft capable of moving, within a theater of operations, certain types of equipment that are too large or bulky for the C-130. Further, it is unclear whether DOD has identified how many C-5s need the AMP modification since additional C-17s are being procured or when and how many C-5s will be retired. In addition, we previously reported on deficiencies in how DOD conducted its previous mobility capabilities study, and we do not know whether DOD has addressed these flaws in the current study. As a result, we do not know the extent to which the new study will provide clear answers for senior leaders regarding strategic and tactical airlift or engender more questions. DOD commented that it believes it has fulfilled the requirements for our third and fourth recommendations by recently tasking the Air Force and Army to determine the appropriate mix of C-27Js and C-130s to perform Army time-sensitive, mission-critical requirements and common user pool requirements, as well as develop plans detailing how Army requirements will be prioritized against Air Force priorities. These are good first steps. However, the plans are still in development and, according to an Air Force briefing to the Deputy's Advisory Working Group, more work needs to be done. Critical details, including a concept for employment, a final basing plan, and a decision on the maintenance concept, will have to be worked out over the next several years. These issues have also generated much debate within the department and in Congress concerning aircraft quantities and employment strategies. As we stated earlier, the Air Force has historically had trouble balancing Army priorities with its own and, according to the Secretary of Defense, the Air Force will need to change its culture to successfully meet both requirements. In addition to completing the plans, we believe DOD may need to exert sustained oversight by senior leaders, including the portfolio management team, to ensure the Air Force is able to perform these missions over the long term. Finally, DOD believes that it has fulfilled the requirement for our fifth recommendation related to using an evolutionary approach for developing the JFTL that includes selecting mature technologies, developing increments in less than 5 years, and fully funding each increment. DOD stated that the Air Force and Army are currently engaged in approving a JFTL initial capabilities document and commencing with a formal analysis of alternatives to consider all viable options for addressing capability gaps. We believe these start-up actions are appropriate and, if accomplished according to policy, should provide a solid foundation to inform subsequent decisions for a new weapon system acquisition program. Our recommendation, however, is geared not only to these initial planning steps but also to the smooth transition to system development and effective acquisition program management. This recommendation will take several steps and years to complete, and we believe senior leaders, including the portfolio management team, need to ensure that the JFTL program has a solid business case at the start of development with mature technologies, adequate funding, and an incremental plan for development.
Our previous work on many other weapon systems programs has shown that without these, programs are likely to encounter significant cost and schedule growth that will, if realized on the JFTL program, impact the department’s ability to move medium weight equipment within a theater of operations directly to the warfighter. It may also have an impact on the C-17 program as these aircraft may be used more frequently than planned for tactical missions. We therefore believe that DOD will need to take additional steps to be fully responsive to this recommendation. We are sending copies of this report to the Secretary of Defense and interested congressional committees. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Bruce Fairbairn, Assistant Director; Cheryl Andrew; Marvin Bonner; Andrew Redd; Kristine Heuwinkel; and Robert Swierczek. This appendix provides more details on strategic and tactical airlift new and modernization programs to expand upon summary information provided in the body of this report. We include a brief description of each aircraft’s mission, program status, and our observations on upcoming program decisions. Where applicable, we highlight our recent work on some systems. The appendix also includes a funding table for each aircraft. Because the fiscal year 2010 budget did not include funding projections beyond fiscal year 2010, we used information from the Fiscal Year 2010 Defense Budget for funding data related to fiscal years 2008 through 2010 and the Fiscal Year 2009 Defense Budget for fiscal years 2011 through 2013 when possible. Since the C-5 Reliability Enhancement and Reengining Program (RERP) and C-27J programs were restructured, we relied on information from the Air Force for fiscal years 2011 through 2013 data. The budget information in each table is expressed in current (then year) dollars and the totals may not add exactly because of rounding. The Department of Defense (DOD) uses a mix of modernized C-5s, which were manufactured 30 to 40 years ago, and newer C-17s to complete the strategic airlift mission. Both strategic airlifters possess intercontinental range with aerial refueling and can carry weapons and equipment too large for any other DOD aircraft. Each also has some complementary characteristics that favor a mixed fleet. The larger C-5 can carry more cargo than the C-17 and is the only aircraft capable of handling some equipment, such as the Army’s 74-ton mobile scissors bridge. The C-17 is more modern, has a higher mission capable rate, and is more flexible in that it also provides tactical airlift to forward-deployed bases. Figure 2 compares the two strategic airlifters. The C-5 is one of the largest aircraft in the world and is used by DOD for strategic airlift purposes. It can carry outsize and oversize cargo over intercontinental ranges and can take off or land in relatively short distances. With aerial refueling, the aircraft’s range is limited only by crew endurance. The C-5 can carry nearly all of the Army’s combat equipment, including large heavy items such as the 74-ton mobile scissors bridge. Ground crews can load and off-load the C-5 simultaneously at the front and rear cargo openings. 
The landing gear system permits lowering of the parked aircraft so the cargo floor is at truck bed height to facilitate vehicle loading and unloading. The Air Force acquired a total of 126 C-5s in two production batches. Aircraft designated C-5A were built between 1969 and 1974 and given new wings in the 1980s. Aircraft designated C-5B were built in a second production run in the 1980s. Since then, the Air Force has retired 14 C-5As, and 1 C-5B has crashed, leaving a total of 111 C-5 aircraft (60 C-5As, 49 C-5Bs, and 2 C-5Cs). In 1999, the Air Force began modernizing its C-5 aircraft. Modifications are intended to improve operational capability as well as flight safety, reliability, and maintainability. The two primary modifications are as follows: the Avionics Modernization Program (AMP), which upgrades capabilities, including Global Air Traffic Management, navigation and safety equipment, modern digital equipment, and an all-weather flight control system; and the Reliability Enhancement and Reengining Program (RERP), which replaces engines and modifies over 70 electrical, fuel, and other subsystems. Together, these two upgrades were expected to improve the fleet's wartime mission capable rate to at least 75 percent, thereby increasing payload capability and transportation throughput, and to reduce total ownership costs over the life cycle through 2040 by about $14 billion in 2008 dollars. DOD initially expected to spend about $12 billion on the C-5 AMP and RERP efforts. However, both modernization efforts have experienced cost and schedule problems since entering development. AMP development costs increased by approximately 20 percent and would have been higher had the Air Force not reduced requirements and deferred some development activities to other programs. Officials waived 14 operational requirements and deferred the correction of 250 deficiencies identified during testing, many of which will be addressed and funded in RERP or future efforts. In addition, the C-5 RERP experienced a Nunn-McCurdy cost breach. The program was restructured, and the Air Force now plans to RERP 52 aircraft—47 C-5B aircraft, both C-5Cs, and 3 aircraft that had already been modified during system development and demonstration (two C-5Bs and one C-5A). While the Air Force is expected to spend $3.4 billion (then-year dollars) less under the restructured RERP program, ultimately less than one-half of the aircraft will be modernized, and at a much higher unit cost than originally estimated—$160.5 million for both modifications versus $96.1 million originally estimated in then-year dollars. DOD now expects that the C-5 AMP modification of 112 aircraft and the C-5 RERP modification of 52 aircraft will reduce total ownership costs over the life cycle through 2040 by about $8.9 billion in base year 2000 dollars. According to program officials, as of July 2009, 55 of the C-5s have received the AMP modification. The last B model received the modification in August 2009. All focus is now on the A models. Many of the deficiencies found during testing have been corrected. Other deficiencies and waivers will be addressed in the RERP program or a planned block upgrade that is slated to begin in fiscal year 2010. According to program officials, only 3 C-5 aircraft used during system development and demonstration have received the RERP modification thus far. The first production aircraft will enter modification in August 2009.
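The unit cost growth described above is largely a matter of spreading relatively fixed development costs over fewer aircraft. The sketch below illustrates that arithmetic; the development cost, per-aircraft installation cost, and original quantity are hypothetical placeholders rather than official C-5 RERP estimates, and only the restructured quantity of 52 aircraft is taken from the program.

```python
# Illustrative sketch only: generic unit-cost arithmetic, not the official
# C-5 RERP cost model. All dollar figures and the original quantity are
# hypothetical; the restructured quantity of 52 comes from the report above.

def program_unit_cost(fixed_development_cost, cost_per_install, quantity):
    """Average unit cost: fixed development costs are spread over the
    number of aircraft modified, while per-aircraft install costs stay flat."""
    return (fixed_development_cost + cost_per_install * quantity) / quantity

original_plan = program_unit_cost(2.0e9, 8.0e7, 100)      # hypothetical 100 aircraft
restructured_plan = program_unit_cost(2.0e9, 8.0e7, 52)   # restructured quantity

print(f"hypothetical original plan: ${original_plan / 1e6:,.1f} million per aircraft")
print(f"restructured plan:          ${restructured_plan / 1e6:,.1f} million per aircraft")
```

Even with total spending falling, cutting the number of aircraft modified raises the per-aircraft cost, which is the pattern reflected in the restructured RERP estimates.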
The Air Force has received low rate initial production approval for the first 3 lots, totaling 9 aircraft. The full rate production decision is scheduled for December 2010. It is unclear whether the Air Force will adequately fund the restructured C-5 RERP program because the fiscal year 2010 budget does not include funding details for the program through 2015. Further, program officials could only comment on the fiscal year 2010 budget. On the basis of the fiscal year 2009 budget, however, DOD's Cost Analysis Improvement Group concluded that the restructured C-5 RERP program was underfunded by about $294 million then-year dollars across the Future Years Defense Plan for fiscal years 2009 through 2013. Specifically, approximately $250 million then-year dollars less is needed in fiscal years 2009 through 2011, but $544 million then-year dollars more is needed in fiscal years 2012 and 2013, yielding the net shortfall of about $294 million. DOD officials stated that if the budget is not sufficient to meet agreed-upon quantities, then anticipated price breaks would not occur, resulting in increased cost to the program and government. In June 2009, the Air Force was granted authority by Congress to begin retiring C-5A aircraft. Air Mobility Command officials told us that fiscal and personnel demands require that the command limit overall fleet size once warfighting risk is reduced to a reasonable level. Therefore, the Air Mobility Command will consider retiring C-5s, as the law and requirements allow, on a one-for-one basis after 205 C-17s have been procured, to ensure the right combination of aircraft and capability is balanced against cost and risk. According to program officials, operational testing for an A model will take place between October 2009 and January 2010. The final report will be issued in July 2010. A decision on whether and when to retire C-5s will not likely be made until after the Mobility Capabilities and Requirements Study 2016 has been completed. If DOD decides to retire C-5A aircraft, it may not need to provide the AMP modification to all of its C-5 fleet. The Air Force plans to have 40 of the 60 C-5A AMP modification kits on contract by the end of 2009, and at least 8 C-5A models will have actually received the modification by that time. The C-17 is a multi-engine, turbofan, wide-body aircraft that improves the overall capability of the United States Air Force to rapidly project, reinforce, and sustain combat forces worldwide. It is used by DOD for both strategic and tactical missions. For example, the C-17 is capable of rapid strategic delivery of troops and all types of cargo to main operating bases or directly to forward bases in the deployment area. The aircraft can perform tactical airlift and airdrop missions and can also transport ambulatory patients during aeromedical evacuations when required. The inherent flexibility and performance of the C-17 force improve the ability of the total airlift system to fulfill the worldwide air mobility requirements of the United States. The Air Force originally planned to procure 120 C-17s, with the last one being delivered in November 2004. The Air Force's current plans are to acquire a total of 213 C-17s for $68 billion, with the last one being delivered in March 2011. The Air Force has taken delivery of 190 aircraft through July 2009. This includes one aircraft that is dedicated to providing airlift capability to a consortium of European nations, effectively setting the Air Force's operational force at 212.
The Air Force has a number of ongoing improvement efforts for the C-17, including improving C-17 airdrop system operations, integrating an advanced situational awareness and countermeasures system, upgrading mission planning by integrating a new joint precision airdrop system, replacing the core integrated computer processor, and providing advanced defensive capability. In recent years, the two prominent issues surrounding the C-17 program have been determining how many C-17s are needed to meet strategic airlift requirements and determining when to begin shutting down the C-17 production line. Following a C-5 RERP restructuring in 2008, the U.S. Transportation Command identified a need for 205 C-17s, 25 more than were authorized at the time the 2005 Mobility Capabilities Study was completed. Subsequent to the study, Congress provided additional funding that the Air Force used to procure 10 more C-17s in 2007, 15 more in 2008, and 8 more in 2009, bringing the total that will now be procured to 213. According to Air Mobility Command officials, the command will consider retiring C-5s, as the law and requirements allow, on a one-for-one basis after 205 C-17s have been procured, to ensure the right combination of aircraft and capability is balanced against cost and risk. According to program officials, a decision on when to shut down the C-17 production line, along with the associated costs, has not been finalized. In our November 2008 report, we noted that plans called for the C-17 production line to shut down in September 2010. This was based on the Air Force acquiring 205 aircraft. Now that the Air Force will be acquiring 213 aircraft, the last delivery is expected to be in March 2011. We also reported that the total cost to shut down the line has not been determined. The Air Force estimated the costs to shut down production to be $465 million, whereas Boeing (the prime contractor) estimated $1 billion. Officials reported that while the Air Force and Boeing continue to negotiate the final cost to shut down the C-17 production line, the Air Force did include $91 million in its fiscal year 2010 President's budget submission to begin these activities. According to a DOD official, the C-17s are currently being employed to fill a capability gap in the department's ability to airlift medium-weight vehicles within a theater of operations using dedicated tactical airlifters. DOD officials do not consider the C-17 to be a viable long-term solution because it cannot access short, austere, or unimproved landing areas in close proximity to combat operations. The JFTL is expected to provide this long-term solution; however, the JFTL is not expected to be available until 2024. As of April 2009, DOD's tactical airlift fleet consisted of 92 C-130E aircraft, 268 C-130Hs, 53 C-130Js, and 2 C-27Js—a total of 415 aircraft. DOD plans to retire its aging C-130E fleet by the end of fiscal year 2014 and, according to its Air Mobility Master Plan, intends to meet its tactical airlift needs with a mix of approximately 406 C-130H and C-130J airlifters through the end of the next decade. The Army and Air Force are working on concepts for the Joint Future Theater Lift (JFTL)—an eventual replacement for the C-130H that is projected to be capable of carrying most of the Army's large vehicles into forward operating locations, which C-130s currently cannot do.
Additionally, the Joint Requirements Oversight Council has validated the Army’s time-sensitive, mission-critical resupply requirements that provide the basis for the Joint Cargo Aircraft program to procure 38 C-27Js. These missions are comprised of relatively small payloads that are needed in forward locations within tight time frames. Table 8 compares the capabilities of the C-130H, C-130J-30, and C-27J airlifters. The C-130 is the principal combat delivery aircraft for the U.S. military and is employed primarily as a tactical airlift aircraft for the transport of cargo and personnel within a theater of operation. C-130s also have the capability to augment strategic airlift forces, as well as support humanitarian, peacekeeping, and disaster relief operations. The C-130J is the latest addition to DOD’s fleet of C-130 aircraft, providing performance improvements over legacy aircraft in the series. Variants of the C-130J are being acquired by the Air Force, Marine Corps, Coast Guard, and several foreign militaries to perform their respective missions. The C-130E and C-130H fleets are nearly 30 years old and have serious reliability, maintainability, and supportability issues, and some are reaching the end of their service life. For example, aircraft maintainers discovered severe cracking in the center wing box on some aircraft early in fiscal year 2005. The program office recommended retiring or grounding aircraft with more than 45,000 flying hours, and restricting aircraft with more than 38,000 hours from flying with cargo or performing tactical maneuvers. In response to these recommendations, the Air Force is using some operations and maintenance funding to extend the service life of some C-130 aircraft by 3 to 5 years, including part of the C-130E fleet, which the Air Force plans to retire by the end of fiscal year 2014. In addition, the Air Force is currently funding the replacement of the center wing box on older C-130 aircraft, and plans to replace the wing structure on the remainder of the C-130H fleet in a later phase of the program. The cost of the replacement is approximately $6.5 million per aircraft, and according to Air Force officials, the program is meeting all cost, schedule, and performance goals. The Air Force also has several other modification efforts underway for the C-130H fleet that will address known capability shortfalls. Efforts include a Large Aircraft Infrared Countermeasures program, a Surface-to-Air Fire Look-out Capability modification, and a number of communications upgrades. The largest modernization effort is the Avionics Modernization Program (AMP) to standardize cockpit configurations and avionics, as well as provide for increased reliability, maintainability, and sustainability. Initially, the Air Force planned to upgrade all C-130E and C-130H aircraft, including special operations aircraft. However, after the program entered system development in 2001, it experienced funding instability and hardware and software integration issues. These problems, as well as an Air Force decision to retire C-130E aircraft, triggered a Nunn-McCurdy cost breach in February 2007. The program was subsequently restructured to include far fewer aircraft—221 instead of 519—and assume less developmental risk. Under the revised plan, only a portion of the C-130H fleet would receive the modification. 
Since that time, the program's production decision has been delayed 13 months because of documentation and software integration problems and senior leadership concerns about the program's acquisition strategy. A low rate production decision has not been scheduled because the department is considering another program restructure. Program officials further stated that a second phase of the AMP is now being considered that would modernize C-130s not included in the first phase. DOD is in the process of procuring 168 C-130J airlifters to replace the retiring C-130E fleet. According to program officials, as of July 2009, 73 of the 117 C-130J aircraft on contract have been delivered. One program official said all C-130J aircraft currently being purchased by the Air Mobility Command are the C-130J-30 model, which, compared to the base model, has an extended fuselage and is capable of carrying 2 additional cargo pallets, for a total of 8 pallets. The C-130J fleet is also receiving a number of upgrades to meet communications, navigation, and surveillance requirements, as well as air traffic management requirements. These efforts are being funded and developed in partnership with other countries as part of the International Cooperative Block Upgrade Program. A C-130J program official reports that aircraft availability rates continue to exceed the fleet standard and are better than rates for C-130H models. Recently, the Secretary of Defense testified that DOD could use "uncommitted" C-130 aircraft to complement C-27Js in order to fulfill Army time-sensitive, mission-critical requirements. However, according to an Air Force official, the impact on the C-130 fleet of supplementing C-27Js in direct support missions is not clear, including how it would affect C-130 availability for other missions. The Air Force has drafted a concept of employment for direct support of Army time-sensitive, mission-critical missions that addresses a number of coordination issues between the services, but the potential impact of these missions on the C-130 fleet, such as fuel costs, maintenance to address potential wear on landing gear and other components, and flight restrictions related to runway length, has not been assessed. The C-27J Spartan is a mid-range, multifunctional aircraft. Its primary mission is to provide on-demand transport of time-sensitive, mission-critical supplies and key personnel to forward-deployed Army units, including those in remote and austere locations. It can also be used for humanitarian relief and homeland security efforts. The aircraft is capable of carrying up-armored High Mobility Multipurpose Wheeled Vehicles and heavy, dense loads such as aircraft engines and ammunition. The Joint Cargo Aircraft program began in late 2005 when the Under Secretary for Acquisition, Technology and Logistics directed the Army and Air Force to merge their requirements for small intra-theater airlifters. In June 2007, the Under Secretary of Defense for Acquisition, Technology and Logistics issued an Acquisition Decision Memorandum certifying the program with approval to proceed to low rate initial production. This memorandum set the acquisition program baseline at 78 aircraft: 54 for the Army and 24 for the Air Force.
The Army primarily viewed the C-27J as on-call airlift directly tied to the tactical needs of ground commanders, sometimes referred to as transporting cargo the "last tactical mile." The Air Force planned to use its C-27J assets to provide "general support" airlift for all users, but also views the delivery of time-sensitive, mission-critical Army cargo as its role. The joint Army/Air Force program office selected the C-27J as the Joint Cargo Aircraft in a full and open competition and awarded a firm-fixed-price contract to L-3 Communications, Integrated Systems in June 2007. Two of the 13 aircraft the Army has ordered through fiscal year 2009 have been delivered and, according to program officials, are being used to conduct training and developmental testing. In May 2009, as part of budget deliberations, the Army and Air Force Chiefs of Staff agreed to transfer responsibility for the C-27J program to the Air Force, along with the task of fulfilling the Army's time-sensitive, mission-critical resupply mission. As part of this restructuring, program quantities were reduced by about 50 percent, from 78 to 38 aircraft. The 13 ordered aircraft, including the 2 already delivered, will be transferred to the Air Force, which will procure an additional 25 aircraft between 2010 and 2012. C-27J aircraft are currently built in Turin, Italy. Manufacturer Alenia Aeronautica (primary subcontractor to L-3 Communications, Integrated Systems) had planned to break ground on a manufacturing facility in Jacksonville, Florida, in April 2009, but according to an Alenia Aeronautica official, this decision has been postponed for now. According to program officials, Alenia Aeronautica had planned to assemble C-27J aircraft 16 through 78 at the Jacksonville facility, in addition to those ordered by foreign customers. With DOD's decision to procure fewer aircraft, it is unclear whether Alenia will proceed with construction of the facility. The Air Force has offered some insight into how it will meet the Army's time-sensitive, mission-critical resupply requirement and is in the process of further developing concepts of operation and employment for the C-27J. Although the service is buying only 38 C-27J aircraft, it is investigating possibilities for fulfilling the direct support mission requirement at least in part through a common user pool fleet construct. For example, an Air Force official said C-130s are already used for some time-sensitive, mission-critical operations. The Secretary of Defense has indicated that the 38 C-27Js can be complemented by any of about 200 "uncommitted" C-130s, which he noted can access 99 percent of the landing strips that C-27Js can access. However, it is unclear if or how such an approach will affect the number of C-130Js the service plans to buy, or the availability of C-130 aircraft to meet other requirements associated with major combat operations. The Mobility Capabilities and Requirements Study 2016 may help shed light on this issue. There is also concern about the Air Force's commitment to direct support of the time-sensitive, mission-critical requirement. Over the past several decades, the Air Force has retired its direct support assets, including the Vietnam-era C-7 Caribou and an earlier version of the C-27. At issue are basic roles and missions philosophies, which DOD recognizes need to be updated to reflect lessons learned in ongoing combat operations.
The Secretary of Defense testified in May 2009 that there needs to be a change in the Air Force's culture with respect to how the direct support mission is accomplished. Similarly, the department's January 2009 Quadrennial Roles and Missions Review Report notes that the services need to standardize the airlift process by sharing aircraft employment and availability data and adjust concepts of operations to allow traditionally general support assets to be used for direct support and vice versa. However, the Quadrennial Roles and Missions Review Report also determined that the service responsibilities for intratheater airlift operations were appropriately aligned and that the option providing the most value to the joint force was to assign the C-27J to both the Air Force and the Army. An Air Force official said the service has drafted a platform-neutral concept of employment for direct support of the time-sensitive, mission-critical mission. The vision is to use the capabilities of the entire mobility airlift fleet (i.e., C-130, C-17, C-5, Operational Support Airlift) to supplement the 38 C-27Js as required in time-sensitive, mission-critical operations abroad. While the Mobility Capabilities and Requirements Study 2016 and other studies consider tactical airlift requirements into the future, officials involved with the study have not indicated that they address the impact of potential departures from traditional roles and missions constructs—such as changing how the services will approach time-sensitive, mission-critical resupply. As such, it is not known how these changes may affect overall requirements for tactical airlifters. Moreover, there is speculation that the 2010 Quadrennial Defense Review will establish priorities based on one major combat operation, rather than two simultaneous ones. Considered together, these points raise the question of how many C-27Js DOD needs. DOD plans to replace C-130H aircraft and augment the remaining C-130s with the Joint Future Theater Lift (JFTL). Currently, the JFTL is still at the conceptual stage and is not yet a formal acquisition program. The Army and Air Force have independently engaged in laboratory efforts to develop competitive technology solutions: the Army a large tiltrotor, vertical takeoff and landing aircraft, and the Air Force a versatile fixed wing, short takeoff and landing aircraft. A draft Initial Capabilities Document notes that the JFTL must be capable of transporting current and future medium-weight armored vehicles into austere locations with unprepared landing areas. According to an Army official, another capability under investigation is the ability to operate from naval vessels (seabasing) to enhance access to remote areas and to reduce predictability. The JFTL is anticipated to have a payload capacity of 20 to 36 tons and a combat mission radius of 500 nautical miles. The Air Force Air Mobility Command expects the JFTL to be fielded sometime around 2024. JFTL concept development became a joint effort in January 2008 following a decision by the Army and Air Force Chiefs of Staff to merge requirements for separate heavy lift efforts in progress at the time. The Air Force was designated as the administrative lead for the development of the Initial Capabilities Document for the JFTL, and submitted a draft into DOD's Joint Capability Integration and Development System earlier this year; however, the Army did not agree with the draft, citing critical disagreements.
According to an Army program official, a recent general officer meeting between the two services appears to have resolved the Army’s remaining critical comments, and the services could potentially seek approval of the Initial Capabilities Document at the Joint Requirements Oversight Council by late summer 2009. Both Army and Air Force officials stated they would like to have the Initial Capabilities Document validated and begin work on the analysis of alternatives in the summer of 2009, to provide a sufficient basis for budget deliberations in March 2010. Disparate views on requirements are at the heart of the disagreement between the services. According to an Army official, there were foundational differences in anticipated usage of the aircraft that led to initial disagreements between the services. The land component (i.e., the Army, Marine Corps, and special operations forces) saw a critical need for an airlift capability that would enable expeditionary, mounted (i.e. forces deployed with combat vehicles) ground operations into access-challenged environments. The airlift community was pursuing a larger, longer range transport to better meet the current set of traditional airlift missions. The Army official said the two perspectives resulted in different technologies and system investigations. The land component, led by the Army, has been pursuing vertical takeoff and landing concepts that are less infrastructure- constrained, allow faster force buildup, and can more easily sustain maneuvering forces from either land or sea bases. The Air Force has been pursuing advanced lift system technology for turbofan fixed wing aircraft to improve operations on short, soft, or rough airfields while increasing cruise speed over current tactical transports. However, the Army official said development of the JFTL Initial Capabilities Document has combined these perspectives into one requirements document and served to converge the services into a more cohesive vision of future operations. Both the Army and Air Force have continued to fund technology development efforts that support their previously separate programs. Army technology development efforts are focused on a high-efficiency tiltrotor concept that could become a candidate for the JFTL once requirements are established. According to an Army lab official, the aircraft would be nearly as aerodynamically efficient as a fixed wing aircraft and would have about the same fuel efficiency as a C-130J. While the concept is still “all on paper,” the official said no new inventions are needed—that the component technologies all have an existing lineage and could be practically implemented on an aircraft of the size anticipated (the maximum payload would be 36 tons). The Army has three contractors or contractor teams working on different tiltrotor configurations that could potentially meet the joint capability needs. A number of technology development/risk reduction efforts, including a tiltrotor test rig and a number of specialized studies, have been funded by the Army, Special Operations Command, National Aeronautics and Space Administration, Defense Advanced Research Projects Agency, and Office of Naval Research. An Air Force official said the service’s technology development efforts are focused on a fixed wing concept that combines speed and agility to provide enhanced lift for short takeoffs. 
According to the Air Force official, three contractors have done work on this speed agile concept, with one—Lockheed Martin—on contract to develop a demonstrator model. The Air Force Research Laboratory has also, in partnership with Lockheed Martin, developed the Advanced Composite Cargo Aircraft, which utilizes composite materials in the fuselage and tail, and which completed a successful test flight in June 2009. An Air Force Research Lab official said this technology significantly reduces the number of parts needed, as well as tooling and touch labor needs in the manufacturing process. He said these processes and materials could potentially be used for the JFTL. A potential capability gap exists in the department's ability to airlift medium-weight vehicles to access-challenged areas within a theater of operations using dedicated tactical airlifters. C-17 aircraft have been employed to transport medium-weight vehicles in theater, but cannot access austere, short, or unimproved landing areas. In 2007, C-17s flew 15,436 tactical sorties, 3,102 of which—approximately 20 percent—involved cargo too large for a C-130 to carry. Nevertheless, DOD officials do not consider the C-17 to be a viable long-term solution given the access issues noted above. The JFTL is expected to provide this long-term solution. We believe the JFTL effort presents the department with an opportunity to address a critical capability gap using the evolutionary, knowledge-based approach outlined in DOD acquisition policy. However, DOD officials will need to exercise caution to avoid pitfalls we have identified in connection with developing new weapon systems, including taking a revolutionary versus an evolutionary approach for weapon system development; overpromising performance capabilities; and understating expected costs, schedules, and risks associated with developing and producing the weapon. Fielding the new capability may be a challenge for two reasons. First, although the services have reached agreement on operational requirements in developing the Initial Capabilities Document, the potential exists for future disagreements that could adversely affect program outcomes. The Army would like a tiltrotor aircraft that can be used in direct support of its maneuver and sustainment operations, and the Air Force favors a fixed wing aircraft to support common-user needs as well as the Army's direct support mission. An Army official said the decision to pursue a tiltrotor or a fixed wing aircraft will be made during the analysis of alternatives, and that he expected a more cooperative relationship between the services once that is decided. However, we believe that if such a relationship does not emerge or continue throughout system development, program outcomes could be jeopardized. Our previous work has found that unstable requirements in conjunction with long development cycles can lead to considerable cost growth and schedule delays. Second, the JFTL was intended to transport medium-weight vehicles, including Future Combat Systems vehicles; however, DOD recently cancelled the manned ground vehicle portion of the program with plans to re-launch a new vehicle modernization program incorporating lessons learned in recent operations in Iraq and Afghanistan.
We believe the design of the new vehicles, including size and weight, could be an important factor in determining the type of aircraft best suited for the JFTL mission, primarily because the Army’s tiltrotor concept already envisions a rotorcraft much larger than any ever produced. However it could be several years before the Army has a good understanding of the size and weight of the new vehicles. Defense Acquisitions: Charting a Course for Lasting Reform. GAO-09-663T. Washington, D.C.: April 30, 2009. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009. Defense Acquisitions: Timely and Accurate Estimates of Costs and Requirements Are Needed to Define Optimal Future Strategic Airlift Mix. GAO-09-50. Washington, D.C.: November 21, 2008. Defense Transportation: DOD Should Ensure that the Final Size and Mix of Airlift Force Study Plan Includes Sufficient Detail to Meet the Terms of the Law and Inform Decision Makers. GAO-08-704R. Washington, D.C.: April 28, 2008. Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD’s Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007. Defense Transportation: Study Limitations Raise Questions about the Adequacy and Completeness of the Mobility Capabilities Study and Report. GAO-06-938. Washington, D.C.: September 20, 2006.
The Department of Defense (DOD) used nearly 700 aircraft, as well as commercial and leased aircraft, to carry about 3 million troops and 800,000 tons of cargo in support of wartime, peacetime, and humanitarian efforts in 2008. C-5s and C-17s move troops and cargo internationally (strategic airlift), and C-130s are the primary aircraft that move them within a theater of operation (tactical airlift). Over the next 4 years, DOD plans to spend about $12 billion to modernize and procure airlifters and is currently studying how many it needs. The Government Accountability Office (GAO) was asked to (1) identify the status of DOD's modernization and acquisition efforts and (2) determine how well DOD is addressing any capability gaps and redundancies. In conducting this work, GAO identified the cost, schedule, and performance of airlift programs, as well as DOD's plan for addressing gaps and redundancies. GAO also discussed mobility study efforts with DOD, Institute for Defense Analysis (IDA), and RAND Corporation officials. DOD has recently revamped airlift investments due to modernization cost increases and requirement changes. For strategic airlift, the number of C-5s that will be fully modernized was cut in half because of substantial reengining cost increases, and C-17 quantities were increased from 180 to 213 aircraft. These twin changes resulted in a net cost increase of about $3 billion. Additional costs and force structure changes are possible pending decisions on C-5 retirements, other modifications, the potential need for more C-17s to meet tactical airlift needs, and the planned shutdown of C-17 production. For tactical airlift, substantial cost increases for modernizing C-130 avionics tripled unit costs, delayed the program's schedule, and resulted in almost 60 percent fewer aircraft being modernized. There have been large increases in the C-130J quantity to replace older C-130s, but modest increases in unit costs. The joint Army-Air Force C-27J program was recently transferred to the Air Force and quantities were cut from 78 to 38 aircraft, with an uncertain effect on the Army's airlift missions. The Army and Air Force must also resolve fundamental differences in operating requirements and employment strategy for the Joint Future Theater Lift (JFTL). DOD appears to have addressed its strategic airlift gap, but there is a potential future tactical airlift gap for moving medium-weight equipment. Also, questions regarding how the Air Force will meet the Army's direct support mission have not been resolved. DOD is using $5.5 billion appropriated by Congress to procure 23 additional C-17s, which DOD officials believe more than offsets the strategic airlift gap associated with the restructured C-5 modernization program. However, there is a potential gap in the tactical airlift of medium-weight loads beyond the capability of the C-130s. The C-17 is the only aircraft capable of moving this type of Army equipment within a theater of operation, although not to austere, short, or unimproved landing areas. The JFTL is envisioned to provide this capability, but will not be available for 15 years or more under the current acquisition strategy. While the various mobility studies acknowledge the C-17's significant dual role, they did not comprehensively evaluate the expanded use of the C-17 to transport medium-weight equipment in theater and how this could impact the force structure, the C-17's service life, and decisions related to when to shut down the production line.
In addition, questions remain about the number of C-130s and C-27Js needed to fulfill Army direct support missions. Two studies reached somewhat different conclusions about the cost effectiveness of using C-130Js and C-27Js for this mission. The Air Force and Army have not completed a plan for meeting Army direct support requirements, which could affect future decisions on both the C-27J and the C-130J. DOD's recently established portfolio management structure is supposed to provide a useful forum to address the broad range of airlift investment decisions. However, efforts so far have primarily focused on new programs rather than addressing gaps and making other airlift decisions such as when and how many C-5s to retire or the appropriate mix of C-130s and C-27Js needed to perform Army missions.
The Integrated Disability Evaluation System (IDES) process begins at a military treatment facility when a physician identifies one or more medical conditions that may interfere with a servicemember's ability to perform his or her duties. The process involves four main phases: the Medical Evaluation Board (MEB), the Physical Evaluation Board (PEB), transition out of military service (transition), and VA benefits.

MEB phase: In this phase, medical examinations are conducted and decisions are made by the MEB regarding a servicemember's ability to continue to serve in the military. This phase involves four stages: (1) the servicemember is counseled by a DOD board liaison on what to expect during the IDES process; (2) the servicemember is counseled by a VA caseworker on what to expect during the IDES process and medical exams are scheduled; (3) medical exams are conducted according to VA standards for exams for disability compensation, by VA, DOD, or contractor physicians; and (4) exam results are used by the MEB to identify conditions that limit the servicemember's ability to serve in the military. Also during this stage, a servicemember dissatisfied with the MEB assessment of unfitting conditions can seek a rebuttal, or an informal medical review by a physician not on the MEB, or both.

PEB phase: In this subsequent phase, decisions are made about the servicemember's fitness for duty, disability rating, and DOD and VA disability benefits, and the servicemember has opportunities to appeal those decisions. This includes (1) the informal PEB stage, an administrative review of the case file by the cognizant military branch's PEB without the presence of the servicemember, and (2) the VA rating stage, in which a VA rating specialist prepares two ratings—one for the conditions that DOD determined made a servicemember unfit for duty, which DOD uses to provide military disability benefits, and the other for all service-connected disabilities, which VA uses to determine VA benefits. In addition, the servicemember has several opportunities to appeal different aspects of his or her disability evaluation: a servicemember dissatisfied with the decision on whether he or she is fit for duty may request a hearing with a "formal" PEB; a member who disagrees with the formal PEB fitness decision can, under certain conditions, appeal to the reviewing authority of the PEB; and a servicemember can ask VA to reconsider its rating decisions based on additional evidence, though only for conditions found to render the servicemember unfit for duty.

Transition phase: If the servicemember is found unfit to serve, he or she enters the transition phase and begins the process of separating from the military. During this time, the servicemember may take accrued leave. Also, DOD board liaisons and VA case managers provide counseling on available benefits and services, such as job assistance.

VA benefits phase: A servicemember found unfit and separated from service becomes a veteran and enters the VA benefits phase. VA finalizes its disability rating after receiving evidence of the servicemember's date of separation from military service. VA then starts to award monthly disability compensation to the veteran.

DOD and VA established timeliness goals for the IDES process to provide VA benefits to active duty servicemembers within 295 days of being referred into the process, and to reserve component members within 305 days (see fig. 1). DOD and VA also established interim timeliness goals for each phase and stage of the IDES process.
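To illustrate how the overall and interim goals relate, the following is a minimal sketch comparing a single case against the active duty goals cited in this report (a 100-day MEB phase, a 120-day PEB phase, a 45-day transition phase, and a 30-day VA benefits phase, which sum to the 295-day overall goal); the sample case durations are hypothetical.

```python
# Minimal sketch: comparing one case against IDES active duty goals.
# Phase goals are the active duty goals cited in this report; the sample
# case durations below are hypothetical.

ACTIVE_DUTY_PHASE_GOALS = {  # days
    "MEB": 100,
    "PEB": 120,
    "Transition": 45,
    "VA benefits": 30,
}
OVERALL_GOAL = sum(ACTIVE_DUTY_PHASE_GOALS.values())  # 295 days

sample_case = {"MEB": 181, "PEB": 110, "Transition": 76, "VA benefits": 38}

total_days = sum(sample_case.values())
print(f"Total: {total_days} days against a {OVERALL_GOAL}-day goal -> "
      f"{'met' if total_days <= OVERALL_GOAL else 'missed'}")
for phase, goal in ACTIVE_DUTY_PHASE_GOALS.items():
    days = sample_case[phase]
    status = "met" if days <= goal else "missed"
    print(f"  {phase}: {days} days against a {goal}-day goal -> {status}")
```

In this hypothetical case, the overall goal is missed even though one phase stays within its interim goal, which is why timeliness is examined both overall and by phase.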
These time frames are an improvement over the legacy disability evaluation system, which was estimated to take 540 days to complete. In addition to timeliness, DOD surveys servicemembers on their satisfaction at several points in the process, with a goal of having 80 percent of servicemembers satisfied.

Enrollment in IDES continued to grow as IDES completed its worldwide expansion. In fiscal year 2011, 18,651 cases were enrolled in IDES, compared to 4,155 in fiscal year 2009 (see fig. 2). IDES caseload varies by service, but the Army manages the bulk of cases, accounting for 64 percent of new cases in fiscal year 2011. Additionally, active duty servicemembers represent the majority of IDES cases, accounting for 88 percent of new cases in fiscal year 2011.

Overall IDES timeliness has steadily worsened since the inception of the program. Since fiscal year 2008, the average number of days for servicemembers' cases to be processed and benefits received increased from 283 to 394 for active duty cases (compared to the goal of 295 days) and from 297 to 420 for reserve cases (compared to the goal of 305 days). Relatedly, the proportion of cases meeting timeliness goals decreased from more than 63 percent of active duty cases completed during fiscal year 2008 to about 19 percent in fiscal year 2011 (see table 1). When examining timeliness across the four phases that make up IDES, data show that timeliness regularly fell short of interim goals for three—MEB, transition, and VA benefits (see fig. 3). For example, for cases that completed the MEB phase in fiscal year 2011, active and reserve component members' cases took an average of 181 and 188 days, respectively, to be processed, compared to goals of 100 and 140 days. For the PEB phase, processing times increased over time but were still within established goals.

MEB phase: Significant delays have occurred in completing medical examinations (the medical exam stage) and delivering an MEB decision (the MEB stage). For cases completing the MEB phase in fiscal year 2011, 31 percent of active and 29 percent of reserve cases met the 45-day goal for the medical exam stage, and 20 percent of active and 17 percent of reserve cases met the 35-day goal for the MEB stage. Officials at some sites we visited told us that MEB phase goals were difficult to meet and not realistic given current resources. At all the facilities we visited, officials told us DOD board liaisons and VA case managers had large caseloads. Similarly, some military officials noted that they did not have sufficient numbers of doctors to write the narrative summaries needed to complete the MEB stage in a timely manner. Monthly data produced by DOD subsequent to the data we analyzed show signs of improved timeliness for these two stages: for example, 71 percent of active cases met the goal for the medical exam stage and 43 percent met the goal for the MEB stage in the month of March 2012. However, it is too early to tell the extent to which these results will continue to hold.

PEB phase: PEB processing time goals were also not met in fiscal year 2011 for the informal PEB and VA rating stages. For cases that completed the PEB phase in fiscal year 2011, only 38 percent of active duty cases received an informal PEB decision within the 15 days allotted, and only 32 percent received a preliminary VA rating within the 15-day goal.
Also during this phase, the majority of time (75 out of the 120 days) is set aside for servicemembers to appeal decisions—including a formal PEB hearing or a reconsideration of the VA ratings. However, only 20 percent of cases completed in fiscal year 2011 actually had any appeals, calling into question DOD and VA's assumption about the number of expected appeals and potentially masking processing delays in other mandatory parts of the PEB phase.

Transition phase: The transition phase has consistently taken longer than its 45-day goal—almost twice as long on average. While processing times improved slightly for cases that completed this phase in fiscal year 2011 (from 79 days in fiscal year 2010 to 76 days in fiscal year 2011 for active duty cases), timeliness has remained consistently problematic since fiscal year 2008. DOD officials suggested that it is difficult to meet the goal for this phase because servicemembers are taking accrued leave—to which they are entitled—before separating from the service. For example, an Army official said that Army policy allows servicemembers to take up to 90 days of accrued leave prior to separating, and that average leave time was about 80 days. Although servicemember leave skews the performance data, officials said they cannot easily back this time out of their tracking system; they are exploring options for doing so, which would make the data more reflective of a servicemember's actual time spent in the evaluation process.

VA benefits phase: Processing time improved somewhat for the benefits phase (from 48 days in fiscal year 2010 to 38 days in fiscal year 2011), but continued to exceed the 30-day goal for active duty servicemembers. Several factors may contribute to delays in this final phase. VA officials told us that cases cannot be closed without the proper discharge forms and that obtaining these forms from the military services can sometimes be a challenge. Additionally, if data are missing from the IDES tracking system (e.g., the servicemember already separated, but this was not recorded in the database), processing time will continue to accrue for cases that remain open in the system. Officials could not provide data on the extent to which these factors had an impact on processing times for pending cases, but said that once errors are detected and addressed, reported processing times are also corrected.

In addition to timeliness, DOD and VA evaluate IDES performance using the results of servicemember satisfaction surveys. However, shortcomings in how DOD measures and reports satisfaction limit the usefulness of these data for making IDES management decisions.

Response rates: Survey administration rules may unnecessarily exclude the views of some servicemembers. In principle, all members have an opportunity to complete satisfaction surveys at the end of the MEB, PEB, and transition phases; however, servicemembers become ineligible to complete a survey for either the PEB or transition phase if they did not complete a survey in an earlier phase. Additionally, by only surveying servicemembers who completed a phase, DOD may be missing opportunities to obtain input from servicemembers who exit IDES in the middle of a phase.

Alternate measure shows lower satisfaction: DOD's satisfaction measure is based on an average of responses to questions across satisfaction surveys. A servicemember is defined as satisfied if the average of his or her responses is above 3 on a 5-point scale, with 3 denoting neither satisfied nor dissatisfied.
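The paragraph after the sketch describes GAO's stricter alternate measure, under which a servicemember counts as satisfied only if every response is at least 4. The minimal sketch below contrasts that rule with DOD's average-above-3 rule using invented survey responses; it is illustrative only and does not use actual survey data.

```python
# Sketch of the two satisfaction rules discussed here: DOD's measure (average
# response above 3 on a 5-point scale) and the stricter alternate measure
# described in the next paragraph (every response 4 or above). The survey
# responses below are invented for illustration.

def satisfied_dod(responses):
    return sum(responses) / len(responses) > 3

def satisfied_alternate(responses):
    return all(r >= 4 for r in responses)

surveys = [
    [5, 4, 2, 4],  # average 3.75 -> satisfied under DOD's rule, not the alternate
    [4, 4, 5, 4],  # satisfied under both rules
    [3, 3, 2, 4],  # satisfied under neither rule
]

dod_rate = 100 * sum(satisfied_dod(s) for s in surveys) / len(surveys)
alt_rate = 100 * sum(satisfied_alternate(s) for s in surveys) / len(surveys)
print(f"DOD measure: {dod_rate:.0f}% satisfied; alternate measure: {alt_rate:.0f}% satisfied")
```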
Using an alternate measure that defines servicemembers as satisfied only when all of their responses are 4 or above, GAO found satisfaction rates several times lower than DOD's calculation. Whereas DOD's calculation results in an overall satisfaction rate of about 67 percent since the inception of IDES, GAO's alternate calculation resulted in a satisfaction rate of about 24 percent. In our ongoing work, we will continue to analyze variation in satisfaction across servicemember cases using both DOD's and GAO's measures of satisfaction and to assess the survey results and their usefulness for measuring performance. In the meantime, DOD is considering alternatives for measuring satisfaction but has yet to reach a decision. Officials already concluded that the survey, in its current form, is not a useful management tool for determining what changes are needed in IDES and said that it is expensive to administer—costing approximately $4.3 million in total since the start of the IDES pilot. DOD suspended the survey in December 2011 because of financial constraints, but officials told us they plan to resume collecting satisfaction data in fiscal year 2013.

DOD and VA have undertaken a number of actions to address IDES challenges—many of which GAO identified in past work. Some actions—such as increased oversight and staffing—represent important steps in the right direction, but progress is uneven in some areas.

Increased monitoring and oversight: GAO identified the need for agency leadership to provide continuous oversight of IDES in 2008, and reported the need for system-wide monitoring mechanisms in 2010. Since then, agency leadership has established mechanisms to improve communication, monitoring, and accountability. The secretaries of DOD and VA have met several times since February 2011 to discuss progress in improving IDES timeliness and have tasked their agencies to find ways of streamlining the process so that the goals can be reduced. Further, senior Army and Navy officials regularly hold conferences to assess performance and address performance issues, including at specific facilities. For instance, the Army's meetings are led by its vice chief of staff and VA's chief of staff, and include reviews of performance in which regional and local facility commanders provide feedback on best practices and challenges. VA also holds its own biweekly conferences with local staff responsible for VA's portion of the process. For example, officials said a recent conference focused on delays at one Army IDES site and discussed how they could be addressed. VA officials noted that examiner staff were reassigned to this site and examiners worked on weekends to address the exam problems there.

Increased staffing for MEB and VA rating: In 2010, we identified challenges with having sufficient staff in a number of key positions, including DOD board liaisons and MEB physicians. DOD and VA are working to address staffing challenges in some of the IDES processes that are most delayed. The Army is in the midst of a major hiring initiative to more than double staffing for its MEBs over its October 2011 level, which will include additional board liaison and MEB physician positions. The Army also plans to hire contact representatives to assist board liaisons with clerical functions, freeing more of the liaisons' time for counseling servicemembers.
Additionally, VA officials said that the agency has more than tripled the staffing of its IDES rating sites to handle the demand for preliminary ratings, rating reconsiderations, and final benefit decisions.

Resolving diagnostic differences: In our December 2010 report, we identified diagnostic differences between DOD physicians and VA examiners, especially regarding mental health conditions, as a potential source of delay in IDES. We also noted inconsistencies among the services in providing guidance and the lack of a tracking mechanism for determining the extent of diagnostic differences. In response to our recommendation, DOD commissioned a study on the subject. The resulting report confirmed the lack of data on the extent and nature of such differences, found that the Army had established guidance on addressing diagnostic differences that was more comprehensive than the guidance DOD was developing, and recommended that DOD or the other services develop similar guidance. A DOD official told us that consistent guidance across the services, similar to the Army's, was included in DOD's December 2011 IDES manual. Also, in response to our recommendation, VA plans to modify the VTA database used to track IDES so that it collects this information on cases, although the upgrade has been delayed several times. DOD has other actions underway, including efforts to improve the sufficiency of VA examinations, MEB written summaries, and reserve component records. We plan to review the status of these efforts as part of our ongoing work, which we anticipate completing later in 2012.

DOD and VA are working to address shortcomings in the information systems that support the IDES process, although some efforts are still in progress and efforts to date are limited.

Improving local IDES reporting capability: DOD and VA are implementing solutions to improve the ability of local military treatment facilities to track their IDES cases, but multiple solutions may result in redundant work efforts. Officials told us that the VTA—which is the primary means of tracking the completion of IDES cases—has limited reporting capabilities and that staff at local facilities are unable to use it for monitoring the cases for which they are responsible. DOD and VA have been developing improvements to VTA that will allow board liaisons and VA case managers to track the status of their cases. VA plans to include these improvements in the next VTA upgrade, currently scheduled for June 2012. In the meantime, staff at many IDES sites have been using their own local systems to track cases and alleviate limitations in VTA. Further, the military services have been moving ahead with their own solutions. For instance, the Army has deployed its own information system for MEBs and PEBs Army-wide. Meanwhile, DOD has also been piloting its own tracking system at 9 IDES sites. As a result, staff at IDES sites we visited reported having to enter the same data into multiple systems.

Improving IDES data quality: DOD is taking steps to improve the quality of data in VTA. Our analysis of VTA data identified erroneous or missing dates in at least 4 percent of the cases reviewed. Officials told us that VTA lacks adequate controls to prevent erroneous data entry, and that incorrect dates may be entered, or dates may not be entered at all, which can result in inaccurate timeliness data. In September 2011, DOD began a focused effort with the services to correct erroneous and missing case data in VTA.
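The following is a minimal sketch of the kind of date check such a data-correction effort might rely on, flagging records with missing or out-of-order phase dates. The field names and the sample record are hypothetical; they are not the actual VTA schema or data.

```python
# Minimal sketch of a data quality check like the one discussed above: flag case
# records with missing or out-of-order phase dates. Field names and the sample
# record are hypothetical, not the actual VTA schema or data.

from datetime import date

PHASE_DATE_FIELDS = ["referral_date", "meb_complete", "peb_complete", "separation_date"]

def find_date_problems(record):
    problems = [f for f in PHASE_DATE_FIELDS if record.get(f) is None]
    present = [(f, record[f]) for f in PHASE_DATE_FIELDS if record.get(f) is not None]
    # Dates should not run backward through the process.
    for (f1, d1), (f2, d2) in zip(present, present[1:]):
        if d2 < d1:
            problems.append(f"{f2} precedes {f1}")
    return problems

sample = {
    "referral_date": date(2011, 1, 10),
    "meb_complete": date(2011, 7, 2),
    "peb_complete": None,                 # missing entry
    "separation_date": date(2011, 6, 1),  # earlier than MEB completion
}
print(find_date_problems(sample))
```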
Officials noted that the Air Force and Navy completed substantial efforts to correct the issues identified at that time, but Army efforts continue. While improved local tracking and reporting capabilities will help facilities identify and correct erroneous data, keeping VTA data accurate will be an ongoing challenge due to a lack of data entry controls. DOD and VA are also pursuing options to allow the electronic transfer of case files between facilities. We are reviewing the status of this effort as part of our ongoing work.

Based on concerns from the agencies' secretaries about IDES delays, DOD and VA have undertaken initiatives to achieve time savings for servicemembers. The agencies have begun a business process review to better understand how IDES is operating and to identify best practices for possible piloting. This review incorporates several efforts, including the following.

Process simulation model: Using data from site visits and VTA, DOD is developing a simulation model of the IDES process. According to a DOD official, this model will allow the agencies to assess the impact of potential situations or changes on IDES processing times, such as surges in workloads or changes in staffing.

Fusion diagram: DOD is developing this diagram to identify the various sources of IDES data—including VA claim forms and narrative summaries—and the different information technology systems that play a role in supporting the IDES process. Officials said the diagram would allow them to better understand and identify overlaps and gaps in data systems.

Ultimately, according to DOD officials, this business process review could lead to short- and long-term recommendations to improve IDES performance, potentially including changes to the different steps in the IDES process, performance goals, and staffing levels, and possibly the procurement of a new information system to support process improvements. However, a DOD official noted that these efforts are in their early stages, and thus there is no timetable yet for completing the review or providing recommendations to senior DOD and VA leadership.

By merging two duplicative disability evaluation systems, IDES has shown promise for expediting the delivery of DOD and VA benefits to injured servicemembers and is considered by many to be an improvement over the legacy process it replaced. However, nearly 5 years after its inception as a pilot, delays continue to affect the system and their causes are not yet fully understood. Recent initiatives to better understand and remedy the factors that lead to delays are promising; however, it remains to be seen what their effect will be. Given the persistent nature of IDES performance challenges, continued attention from senior agency leadership will be critical to ensure that delays are understood and remedied. We have draft recommendations aimed at helping DOD and VA further address the challenges we identified, which we plan to finalize in our forthcoming report after fully considering DOD's and VA's comments.

Chairman Murray and Ranking Member Burr, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time.

For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
In addition to the individual named above, key contributors to this statement include Michele Grgich, Daniel Concepcion, Melissa Jaynes, and Greg Whitney. James Bennett, Joanna Chan, Douglas Sloane, Vanessa Taylor, Jeff Tessin, Roger Thomas, Walter Vance, Kathleen van Gelder, and Sonya Vartivarian provided key support. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 2007, DOD and VA have operated the IDES, which combines what used to be separate DOD and VA disability evaluation processes and is intended to expedite benefits for injured servicemembers. Initially a pilot at 3 military treatment facilities, IDES is now DOD's standard process for evaluating servicemembers' fitness for duty and disability worldwide. In previous reports, GAO identified a number of challenges as IDES expanded, including staffing shortages and difficulty meeting timeliness goals. In this statement, GAO discusses initial observations from its ongoing review of the IDES, addressing two key topics: (1) the extent to which DOD and VA are meeting IDES timeliness and servicemember satisfaction performance goals, and (2) steps the agencies are taking to improve the performance of the system. To answer these questions, GAO analyzed IDES timeliness and customer satisfaction survey data, visited six IDES sites, and interviewed DOD and VA officials. This work is ongoing and GAO has no recommendations at this time. GAO plans to issue its final report later in 2012.

Case processing times under the Integrated Disability Evaluation System (IDES) process have increased over time, and measures of servicemember satisfaction have shortcomings. Each year, average processing time for IDES cases has climbed, reaching 394 and 420 days for active and reserve component members in fiscal year 2011—well over the established goals of 295 and 305 days, respectively. Also in fiscal year 2011, just 19 percent of active duty servicemembers and 18 percent of guard or reserve members completed the IDES process and received benefits within established goals, down from 32 and 37 percent one year prior. Of the four phases comprising IDES, the medical evaluation board phase increasingly fell short of timeliness goals and, within that phase, the time required for the military's determination of fitness was especially troubling. During site visits to IDES locations, we consistently heard concerns about time frames and resources for this phase of the process. With respect to servicemember satisfaction with the IDES process, GAO found shortcomings in how these data are collected and reported, such as unduly limiting who is eligible to receive a survey and computing average satisfaction scores in a manner that may overstate satisfaction. Department of Defense (DOD) officials told us they are considering alternatives for gauging satisfaction with the process.

DOD and Veterans Affairs (VA) have taken steps to improve IDES performance, and have other improvement initiatives in process, but progress is uneven and it is too early to assess their overall impact. VA increased resources for conducting disability ratings and related workloads. The Army is hiring additional staff for its medical evaluation boards, but it is too early to see the impact of these additional resources. DOD and VA are pursuing system upgrades so that staff and managers at IDES facilities can better track the progress of servicemembers' cases and respond to delays more quickly; however, multiple upgrades may be causing redundant work efforts. DOD officials also told us they have been working with the military services to correct case data that were inaccurately entered into VA's IDES tracking system, but have not yet achieved a permanent solution.
Finally, DOD is in the early stages of conducting an in-depth business process review of the entire IDES process and supporting IT systems, in order to better understand how each step contributes to overall processing times and identify opportunities to streamline the process and supporting systems.
In 1996, we reported that military units then designated for early deployment faced many of the same chemical and biological defense problems that Gulf War veterans had experienced. During the Gulf War, units and individuals deployed to the theater without all of the chemical and biological detection, decontamination, and protective equipment needed to operate in a contaminated environment. Some units did not have sufficient quantities or the needed sizes of protective clothing, and chemical detector paper and decontamination kits in some instances had passed their expiration dates. While the 6-month Operation Desert Shield buildup time allowed DOD to correct some of these problems, had chemical or biological weapons been used during this period, some units might have suffered significant, unnecessary casualties. We further reported that DOD's progress in chemical and biological research and development was slower than planned, training of Army and Marine Corps forces was inadequate, there was little evidence that joint training and exercises included chemical and biological defense elements, stocks of vaccines for biological agents were in short supply, and medical units lacked necessary chemical and biological defense equipment and training. We believe these deficiencies resulted from insufficient emphasis on the part of senior military leadership and would not be corrected without a change in that emphasis. We have also reviewed DOD's ability to protect critical ports and airfields overseas. Although I cannot fully discuss our findings in this open hearing because of their sensitive nature, I can say that there are deficiencies in doctrine, policy, equipment, and training for the defense of critical ports and airfields. The Congress and DOD have taken action that has improved U.S. forces' ability to survive and operate if chemical and biological agents are used against them. For example, DOD has requested and the Congress has approved increased funding for chemical and biological defense. Numerous efforts are currently underway that should provide our servicemembers with new chemical and biological defense equipment and capabilities over the next 5 years. These include the production and fielding of improved protective masks, body garments, and systems to better detect biological and chemical agents. In addition, several commanders in chief recently increased their emphasis on various aspects of chemical and biological defense by, for example, increasing stocks of chemical defense equipment and incorporating more chemical and biological defense scenarios in major military exercises. Still, DOD must address remaining critical deficiencies that affect its ability to protect forces from chemical and biological attack. DOD's doctrine and policy are inadequate regarding responsibility for the chemical and biological defense of overseas airfields and ports critical to the deployment, reinforcement, and logistical support of U.S. forces in the event of a conflict. As a result, questions are unresolved regarding the provision of the force structure and equipment needed to protect these facilities. Also, unresolved doctrinal, policy, and equipment questions persist regarding the return of chemically or biologically contaminated strategic lift aircraft and ships and the protection of both essential and nonessential civilians in high-threat areas overseas. Moreover, DOD has insufficient quantities of biological agent vaccines to protect U.S.
forces, and servicemembers deployed in high-threat areas overseas normally have no biological agent detection capability. Also, collective protection facilities and equipment and agent detection systems are generally insufficient to protect the force. Anthrax is an infectious disease that afflicts certain animals, especially cattle and sheep. The anthrax vaccine was licensed by the Food and Drug Administration (FDA) in 1970 to protect veterinarians, meat packers, wool workers, and health officials who might come in contact with anthrax. (FDA licensure of a vaccine means that it has been tested and proven to be safe and effective in humans.) The vaccine has been routinely administered to populations at risk for several years. The Chairman of the Joint Chiefs of Staff considers anthrax to be the greatest biological weapons threat to U.S. military forces. After a 3-year study, the Secretary of Defense concluded that vaccination is the safest way to protect U.S. forces against a threat that is 99-percent lethal to unprotected individuals. Accordingly, in December 1997, DOD announced plans to vaccinate all U.S. military personnel (including active, reserve, and national guard servicemembers) against the biological warfare agent anthrax. The Michigan Biologic Products Institute is under contract with DOD to supply the vaccine for the DOD immunization program. While the vaccine will be centrally procured, administering the vaccinations will be decentralized at multiple DOD facilities worldwide. Initially, DOD planned to begin administering the program in the summer of 1998 to about 165,000 servicemembers and DOD mission-essential personnel located in Southwest Asia and Northeast Asia, which are the areas with the greatest biological warfare threat from anthrax. Prior to beginning the immunizations, DOD wanted time to (1) perform testing of the vaccine to ensure its sterility, safety, potency, and purity; (2) implement a system to track personnel who receive the vaccinations; (3) approve plans to administer the immunizations and inform military personnel of the program; and (4) have the program reviewed by an independent expert. However, DOD accelerated the anthrax vaccination schedule. In March 1998, DOD began immunizing forces stationed in the Persian Gulf because of the possibility of hostilities occurring in that region. DOD plans to vaccinate the remaining active and reserve force over the next several years. In addition, DOD plans to decide whether the program should be extended to others, such as host nation personnel, civilian contractors, and dependents. In accordance with the FDA licensure regimen for this vaccine, DOD plans to provide an initial series of three vaccinations at 2-week intervals, a second series of three vaccinations at 6-month intervals, and annual booster vaccinations to maintain immunity against anthrax. DOD recognizes that immunizing the entire force with multiple vaccinations will be difficult and involves significant administrative and logistical issues. DOD’s program will involve administering anthrax vaccinations to about 2.4 million personnel around the world—a total of about 14.4 million vaccinations for the current force. In addition, personnel entering military service will also be immunized. Thus, DOD envisions the program to continue indefinitely. 
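The dosing arithmetic above follows from the six-dose initial regimen: three vaccinations at 2-week intervals followed by three at 6-month intervals, before annual boosters. The sketch below generates due dates for that initial series and reproduces the roughly 14.4 million doses implied for a force of 2.4 million. The assumption that the second series falls 6, 12, and 18 months after the first dose, and the code itself, are illustrative rather than a statement of DOD policy.

```python
# Sketch of the six-dose initial regimen described above (three doses at 2-week
# intervals, then three at 6-month intervals), which underlies the arithmetic of
# roughly 2.4 million personnel x 6 doses = about 14.4 million vaccinations.
# The anchor dates assumed here for the second series are illustrative only.

from datetime import date, timedelta

def initial_series_due_dates(first_dose):
    two_week_series = [first_dose + timedelta(weeks=2 * i) for i in range(3)]    # 0, 2, 4 weeks
    six_month_series = [first_dose + timedelta(days=182 * i) for i in (1, 2, 3)]  # ~6, 12, 18 months
    return two_week_series + six_month_series

schedule = initial_series_due_dates(date(1998, 3, 1))
print(len(schedule), "doses in the initial series:", [d.isoformat() for d in schedule])
print("doses for a force of 2.4 million:", 2_400_000 * len(schedule))
```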
To ensure that all servicemembers receive the required vaccinations, it is important for DOD to have accurate and reliable personnel data systems showing where all servicemembers are located, especially those deployed to overseas locations. Our work examining the Operation Joint Endeavor medical surveillance program in Bosnia surfaced concerns about the accuracy of the deployment database used for determining which servicemembers required postdeployment medical assessments. More specifically, DOD officials expressed concerns about the accuracy of the DOD-wide database that was used to identify Air Force and Navy personnel who deployed to Bosnia. Air Force officials told us that the Air Force had supplied information to DOD's database on servicemembers it planned to deploy but that many of them never deployed and the database was not corrected. We were also told that data on servicemembers assigned to two Navy construction battalions that deployed to Bosnia did not appear in the database. DOD officials told us that they were concerned about the accuracy of the deployment database and planned to address the problem.

Because DOD plans to administer anthrax vaccinations in a decentralized manner at multiple locations involving both operational and medical personnel, high-level commanders need to emphasize the importance of the program to ensure that it is carried out within the time schedule for administering the vaccinations. Careful attention to the administration of vaccines is critical because the vaccinations must be given at specific intervals over an 18-month period to achieve maximum protection. In the past, a lack of command emphasis hindered DOD's successful implementation of medical programs. For example, we found that the Army had not done many postdeployment medical assessments of active duty personnel who had deployed to Bosnia. We also found that the assessments that were done were, on average, not completed within the 30-day time frame DOD established. Our work disclosed that it took an average of 98 days to complete the assessments. In addition, the Bosnia medical surveillance plan required servicemembers to undergo a tuberculin test at about 90 days following departure from the theater. Our work disclosed that the test was completed an average of 142 days after departure. These problems occurred because command officials did not emphasize the importance of the assessments and medical personnel did not have the authority to require servicemembers to go to medical clinics for assessments. Reliance upon unit commanders to require servicemembers to get the assessments was not effective for the Bosnia deployment.

Medical records documenting all care (including vaccinations) for servicemembers are essential for the delivery of high-quality medical care. DOD regulations require documentation in a servicemember's permanent medical record of all immunizations and visits made to health units. The Presidential Advisory Committee on Persian Gulf War Veterans' Illnesses and the Institute of Medicine reported problems concerning the completeness and accuracy of medical record-keeping during the Gulf War. Research efforts to determine the causes of what has become known as veterans' Gulf War illnesses have been hampered by, among other things, incomplete medical records showing immunizations and other health services provided to servicemembers while deployed. The Institute of Medicine characterized DOD's medical records as fragmented, disorganized, and incomplete.
We tested the completeness of medical records for selected active duty Army servicemembers who had deployed under Operation Joint Endeavor. We found that many of the medical records were incomplete in that they lacked documentation on (1) medical surveillance assessments conducted, (2) tick-borne encephalitis vaccinations given, and (3) visits made to in-theater health units. More specifically, we found that 19 percent of the postdeployment in-theater medical assessments and 9 percent of the postdeployment home unit medical assessments were not documented in the medical records. These documentation problems were attributed to the fact that this was a paper-based system that relied upon servicemembers to hand carry assessment forms from the theater to their home unit, which maintained the permanent medical record. Regarding the documentation of tick-borne encephalitis vaccine in Bosnia, servicemembers deploying to regions where the threat of this disease was prevalent were given the choice of being inoculated with this investigational drug vaccine. We found that 141 (24 percent) of the 588 medical records reviewed for servicemembers who had received the vaccine lacked required documentation. Our tests of the completeness of the permanent medical records for servicemembers’ visits made to battalion aid stations in Bosnia showed similar problems. Specifically, we found that there was no documentation in the medical records for 44 (29.3 percent) of the 150 visits we reviewed. Army officials mentioned that permanent medical records were still paper-based and that information was subject to being misfiled or lost. They also pointed out that servicemembers had deployed to the theater with only an abstract of their permanent medical records and that any medical documentation generated in the theater was to have been routed back to the servicemembers’ home units for inclusion into their medical records. DOD officials told us that a solution to these documentation problems would be the development of a deployable, computerized patient record. DOD has a project underway to have a paperless computerized medical record for every active duty servicemember by fiscal year 2000. Without an adequate centralized monitoring system, DOD will not have reasonable assurances that the program is being implemented as planned. For Operation Joint Endeavor, DOD established a centralized database to track the services’ progress in implementing its medical surveillance program. Medical units processing medical assessments were required to send copies of assessment forms to the DOD office maintaining the centralized database in the United States. In testing the completeness of the centralized database for in-theater and home unit postdeployment medical assessments conducted for 618 servicemembers, we found that the database understated the number of assessments done. More specifically, it omitted 12 percent of the in-theater medical assessments and 52 percent of the home unit medical assessments. DOD officials told us that they plan to use a new automated system for tracking implementation of the anthrax immunization program from locations around the world. The automated system is still being developed. To ensure that military personnel will receive vaccinations in a timely manner and to effectively manage the program, it is important for DOD to maintain an efficient inventory control system. 
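The next paragraph lists what such an inventory control system must ensure, including records of doses received, administered, and destroyed. As a minimal sketch with invented figures (not the Bosnia data discussed below), the underlying reconciliation amounts to the following check.

```python
# Minimal sketch of the dose reconciliation described in the next paragraph:
# doses received should equal doses administered plus doses destroyed plus doses
# still on hand, with any remainder unaccounted for. All figures are invented.

def reconcile(received, administered, destroyed, on_hand):
    unaccounted = received - (administered + destroyed + on_hand)
    return {
        "unaccounted_doses": unaccounted,
        "unaccounted_share": unaccounted / received if received else 0.0,
    }

print(reconcile(received=15_000, administered=10_000, destroyed=1_500, on_hand=500))
# -> {'unaccounted_doses': 3000, 'unaccounted_share': 0.2}
```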
This system is needed to ensure that (1) sufficient supplies of vaccines will be available at the various worldwide immunization sites; (2) vaccines that are older than their 1-year shelf life are destroyed; and (3) records of vaccines received, administered, and destroyed are kept to allow for monitoring and tracking. For the Bosnia deployment, DOD experienced problems in accounting for the inventory of the tick-borne encephalitis vaccine. DOD had to comply with strict FDA regulations regarding its use because it was still being tested as an investigational new drug. Regulations required DOD to fully account for vaccine inventories, including the number of doses administered and the number of doses destroyed. In the spring of 1996, officials from the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) went to Bosnia to review the procedures being used to administer the tick-borne encephalitis vaccine. These officials found that no record of vaccine disposition was being maintained and recommended that all vaccination sites perform a physical inventory and maintain data on vaccines on hand, used, and destroyed. USAMRIID officials met with considerable resistance from some medical personnel responsible for administering the vaccine about the need to keep proper records. They told us that some of the personnel seemed more interested in administering the vaccine than in keeping necessary records. Our work on the Bosnia deployment in 1997 showed that the problems identified by USAMRIID had not been corrected. More specifically, DOD could not account for more than 3,000 (20 percent) of the total number of doses sent to Bosnia. Since our report was issued in April 1997, officials from the Office of the Army Surgeon General informed us that most of the missing doses had been destroyed and that only 242 doses remained unaccounted for.

In conclusion, we believe that DOD has moved in the right direction by increasing its emphasis on improving its chemical and biological defense capabilities. Increased emphasis by the commanders in chief in their areas of responsibility, a DOD-wide spending increase leading to the fielding of more chemical and biological detection and protective equipment, and planned procurements of equipment over the next several years will make U.S. forces better prepared to deal with chemical and biological weapons than in the past. However, greater diligence and more action are needed by DOD to maintain progress toward achieving a level of protection for our forces that will enable us to achieve wartime objectives. This latest initiative to immunize the forces against anthrax represents a clear recognition of this threat to U.S. servicemembers. But DOD must overcome past deficiencies in its medical record-keeping practices and make sure supplies of vaccine are available if this new program is to be successful. In this regard, we reiterate that DOD needs to have the means to (1) identify those servicemembers who require immunization, (2) ensure sufficient command emphasis to guarantee that those identified for immunization are immunized, (3) maintain an accurate medical record of immunizations for each servicemember, (4) manage large-scale immunizations through accurate central databases, and (5) control vaccine inventories appropriately so that sufficient supplies are on hand.

This concludes my prepared remarks. We would be happy to respond to any questions the Committee may have.
Gulf War Illnesses: Public and Private Efforts Relating to Exposures of U.S. Personnel to Chemical Agents (GAO/NSIAD-98-27, Oct. 15, 1997).

Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas (GAO/NSIAD-97-207, July 21, 1997).

Gulf War Illnesses: Improved Monitoring of Clinical Progress and Reexamination of Research Emphasis Are Needed (GAO/NSIAD-97-163, June 23, 1997).

Defense Health Care: Medical Surveillance Improved Since Gulf War, but Mixed Results in Bosnia (GAO/NSIAD-97-136, May 13, 1997).

Chemical and Biological Defense: Emphasis Remains Insufficient to Resolve Continuing Problems (GAO/NSIAD-96-103, Mar. 29, 1996).
Pursuant to a congressional request, GAO discussed the Department of Defense's (DOD) continuing efforts to protect U.S. military forces against chemical and biological weapons, including its plan to inoculate all U.S. military forces against anthrax. GAO noted that: (1) in examining DOD's experience in preparing its forces to defend against potential chemical and biological agent attacks during the Gulf War, GAO identified numerous problems; (2) specifically, GAO found: (a) shortages in individual protective equipment; (b) inadequate chemical and biological agent detection devices; (c) inadequate command emphasis on chemical and biological capabilities; and (d) deficiencies in medical personnel training, and supplies; (3) while many deficiencies noted during the Gulf War remain unaddressed today, DOD has increasingly acknowledged and accepted the urgency of developing a capability to deal with the chemical and biological threat to its forces; (4) both Congress and DOD have acted to provide greater protection for U.S. forces; (5) their actions have resulted in increased funding, and the fielding of more and better chemical and biological defense equipment; (6) DOD must address remaining critical deficiencies if U.S. forces are to be provided with the resources necessary to better protect themselves; (7) DOD is now embarking on a major effort to protect U.S. forces from the threat of the deadly biological agent anthrax; (8) its program to immunize millions of active and reserve forces against anthrax, ensuring that each receives the prescribed vaccinations in the proper time sequence, will be a challenge; and (9) however, if DOD considers lessons learned from previous, smaller-sized immunization programs and from the medical record-keeping errors in the Gulf War and Bosnia in formulating detailed implementation plans, it can reduce the risks and improve the prospects for successfully managing the program.